Volley is a very popular networking framework. On one hand it was introduced by Google at the 2013 I/O conference, and on the other hand everyone says it is excellent. Volley is particularly well suited for network operations that transfer little data but communicate frequently. It can transfer Strings and JSON, and it also makes loading images convenient. Its usage is simple: you obtain a RequestQueue and add request objects to it. There are plenty of introductions online, so I won't go into that here.
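For orientation, the basic usage pattern looks roughly like this (a minimal sketch; the URL and the callback bodies are placeholders, not from the original article):

```java
// Obtain the default request queue and add a simple string request to it.
RequestQueue queue = Volley.newRequestQueue(context);

StringRequest request = new StringRequest("http://example.com/data",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // runs on the main thread with the parsed result
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                // runs on the main thread when the request fails
            }
        });

queue.add(request);
```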
So how does the RequestQueue work? What is its relationship to a request? How does Volley handle its Cache? This article digs into Volley's source code to understand its workflow.
When using Volley, you first need to obtain a RequestQueue object. It is what the various request tasks are added to, and it is usually obtained by calling Volley.newRequestQueue() to get a default RequestQueue. Let's start from this method; here is its source code:
```java
public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, null);
}

public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }

    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            // Prior to Gingerbread, HttpUrlConnection was unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }

    Network network = new BasicNetwork(stack);

    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();

    return queue;
}
```
newRequestQueue(context) calls its overload newRequestQueue(context, null). In that method, the cache directory is first obtained from the context and the userAgent string is built. It then checks whether stack is null; from the call above we know that by default stack == null, so a new stack object is created. Which one depends on the system version: when the SDK version is 9 or higher, stack is a HurlStack, otherwise it is an HttpClientStack. The difference is that HurlStack uses HttpURLConnection for network communication, while HttpClientStack uses HttpClient. With the stack in hand, a BasicNetwork object is created from it; as you can guess, it is what actually performs the network requests. Right after that, a RequestQueue is created, which is the request queue ultimately returned to us. This RequestQueue takes two parameters: the first is a DiskBasedCache object, which, as the name suggests, handles disk caching, with the cache directory being the cacheDir obtained at the start of the method; the second is the network object that was just created. Finally queue.start() is called to start the request queue.
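Because the stack argument is only defaulted when it is null, you can also hand the two-argument overload a stack of your own, for example a HurlStack configured with a custom SSLSocketFactory (a hedged sketch; sslSocketFactory is assumed to be created elsewhere and is not part of the original code):

```java
// Supply an explicit HttpStack instead of letting Volley pick one.
// `sslSocketFactory` is assumed to come from e.g. a custom trust store.
HttpStack stack = new HurlStack(null, sslSocketFactory);
RequestQueue queue = Volley.newRequestQueue(context, stack);
```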
Before analyzing start(), let's first look at some of RequestQueue's key internal fields and its constructors:
```java
// Duplicate requests are staged in this map
private final Map<String, Queue<Request>> mWaitingRequests = new HashMap<String, Queue<Request>>();

// The set of all requests currently being processed
private final Set<Request> mCurrentRequests = new HashSet<Request>();

// The queue of requests that can be served from cache
private final PriorityBlockingQueue<Request> mCacheQueue = new PriorityBlockingQueue<Request>();

// The network request queue
private final PriorityBlockingQueue<Request> mNetworkQueue = new PriorityBlockingQueue<Request>();

// Default size of the network thread pool
private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;

// Cache used to store and retrieve responses
private final Cache mCache;

// Performs the network requests
private final Network mNetwork;

// Delivers responses back to callers
private final ResponseDelivery mDelivery;

// Network request dispatchers
private NetworkDispatcher[] mDispatchers;

// Cache dispatcher
private CacheDispatcher mCacheDispatcher;

public RequestQueue(Cache cache, Network network) {
    this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
}

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

public RequestQueue(Cache cache, Network network, int threadPoolSize,
        ResponseDelivery delivery) {
    mCache = cache;
    mNetwork = network;
    mDispatchers = new NetworkDispatcher[threadPoolSize];
    mDelivery = delivery;
}
```
RequestQueue has several constructors, and they all end up calling the last one. There, mCache and mNetwork are set to the DiskBasedCache and BasicNetwork passed in from newRequestQueue. mDispatchers is an array of network dispatchers, with a default size of 4 (DEFAULT_NETWORK_THREAD_POOL_SIZE). mDelivery is set to new ExecutorDelivery(new Handler(Looper.getMainLooper())); it is responsible for delivering response data and will be covered in detail later. As you can see, we can actually build a customized RequestQueue ourselves instead of relying on the default one from newRequestQueue, as sketched below.
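A minimal sketch of such a hand-built queue, assuming the constructors shown above (the cache directory name and the thread pool size of 2 are arbitrary choices for illustration):

```java
// Build a RequestQueue by hand instead of calling Volley.newRequestQueue().
File cacheDir = new File(context.getCacheDir(), "volley-custom");
Network network = new BasicNetwork(new HurlStack());
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 2);
queue.start();
```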
Now let's look at how the start() method starts the request queue:
```java
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.

    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
```
The code is fairly simple and does just two things. First, it creates and starts a CacheDispatcher. Second, it creates and starts four NetworkDispatchers (one per slot in mDispatchers). So "starting the request queue" really means handing requests over to the cache dispatcher and the network dispatchers for processing.
That leaves one question: how does a request get into the request queue in the first place? The answer is simply the add() method. Let's see how it handles things internally:
```java
public Request add(Request request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}
```
This method is a little long, but its logic is not complicated. It first adds the request to mCurrentRequests, then checks whether the request should be cached; if not, it goes straight into the network queue mNetworkQueue and the method returns. By default every request is cacheable; you can change that with setShouldCache(boolean shouldCache). Every cacheable request ends up in the cache queue mCacheQueue, but only after checking mWaitingRequests to avoid duplicate in-flight requests: if a request with the same cache key is already in flight, the new one is staged in mWaitingRequests instead. A short snippet showing how to opt out of the cache follows.
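A minimal sketch of opting out of the cache, reusing the queue and request pattern from the earlier example (the variables are placeholders):

```java
// Skip the cache entirely: the request bypasses mCacheQueue
// and goes straight to mNetworkQueue.
request.setShouldCache(false);
queue.add(request);
```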
Once RequestQueue.start() has been called, requests are handed over to the CacheDispatcher and the NetworkDispatchers. Both extend Thread, so they are really just background worker threads, responsible for fetching data from the cache and from the network respectively.
The CacheDispatcher keeps taking requests from mCacheQueue and processing them. Here is its run() method:
```java
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

    // Make a blocking call to initialize the cache.
    mCache.initialize();

    while (true) {
        try {
            // Get a request from the cache triage queue, blocking until
            // at least one is available.
            final Request request = mCacheQueue.take();
            request.addMarker("cache-queue-take");

            // If the request has been canceled, don't bother dispatching it.
            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }

            // Attempt to retrieve this item from cache.
            Cache.Entry entry = mCache.get(request.getCacheKey());
            if (entry == null) {
                request.addMarker("cache-miss");
                // Cache miss; send off to the network dispatcher.
                mNetworkQueue.put(request);
                continue;
            }

            // If it is completely expired, just send it to the network.
            if (entry.isExpired()) {
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                mNetworkQueue.put(request);
                continue;
            }

            // We have a cache hit; parse its data for delivery back to the request.
            request.addMarker("cache-hit");
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
            request.addMarker("cache-hit-parsed");

            if (!entry.refreshNeeded()) {
                // Completely unexpired cache hit. Just deliver the response.
                mDelivery.postResponse(request, response);
            } else {
                // Soft-expired cache hit. We can deliver the cached response,
                // but we need to also send the request to the network for
                // refreshing.
                request.addMarker("cache-hit-refresh-needed");
                request.setCacheEntry(entry);

                // Mark the response as intermediate.
                response.intermediate = true;

                // Post the intermediate response back to the user and have
                // the delivery then forward the request along to the network.
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            mNetworkQueue.put(request);
                        } catch (InterruptedException e) {
                            // Not much we can do about this.
                        }
                    }
                });
            }

        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }
    }
}
```
It first calls mCache.initialize() to initialize the cache, then enters an infinite while(true) loop. Each iteration takes a request from the cache queue. If the request has been canceled, it calls request.finish("cache-discard-canceled") and skips the rest of the loop body; otherwise it looks in the cache for an entry for this request. If there is no cached entry, the request is put into the network queue and the loop starts over. If an entry is found, it checks whether it has expired; an expired entry still sends the request to the network queue, otherwise the request's parseNetworkResponse is called to parse the cached data into a response. The last step checks the entry's freshness: if no refresh is needed, mDelivery.postResponse(request, response) delivers the response directly; otherwise the cached response is delivered as an intermediate result and the request is still put into mNetworkQueue for freshness validation.
The logic of the code above is not really complicated, but it is convoluted to describe in words; the following diagram helps with understanding:
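The expiration and freshness checks boil down to two timestamps stored in Cache.Entry. A paraphrased sketch of the idea (field and method names follow Volley, but this is not the verbatim source):

```java
// Paraphrased sketch of Cache.Entry's expiration logic.
public static class Entry {
    public byte[] data;                         // cached response body
    public Map<String, String> responseHeaders; // cached response headers
    public long ttl;                            // hard expiration time, in millis
    public long softTtl;                        // soft expiration time, in millis

    // Hard-expired: the entry cannot be used at all, go to the network.
    public boolean isExpired() {
        return this.ttl < System.currentTimeMillis();
    }

    // Soft-expired: the entry may be delivered, but should be refreshed.
    public boolean refreshNeeded() {
        return this.softTtl < System.currentTimeMillis();
    }
}
```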
The CacheDispatcher looks for a request's response data in the cache; if there is no cached entry, or the entry has expired, the request is handed to a NetworkDispatcher. A NetworkDispatcher keeps taking requests from the network queue and executing them. Here is its run() method:
```java
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    Request request;
    while (true) {
        try {
            // Take a request from the queue.
            request = mQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }

        try {
            request.addMarker("network-queue-take");

            // If the request was cancelled already, do not perform the
            // network request.
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }

            // Tag the request (if API >= 14)
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH) {
                TrafficStats.setThreadStatsTag(request.getTrafficStatsTag());
            }

            // Perform the network request.
            NetworkResponse networkResponse = mNetwork.performRequest(request);
            request.addMarker("network-http-complete");

            // If the server returned 304 AND we delivered a response already,
            // we're done -- don't deliver a second identical response.
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }

            // Parse the response here on the worker thread.
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");

            // Write to cache if applicable.
            // TODO: Only update cache metadata instead of entire record for 304s.
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }

            // Post the response back.
            request.markDelivered();
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            mDelivery.postError(request, new VolleyError(e));
        }
    }
}
```
As you can see, run() is again an infinite loop. It takes a request from the queue and checks whether it has been canceled. If not, it calls mNetwork.performRequest(request) to get the response. If the response is a 304 and a response for this request has already been delivered, this was a freshness-validation request from the CacheDispatcher and nothing needs refreshing, so the rest of the loop body is skipped. Otherwise it moves on: the response is parsed and, if the request should be cached, written to the cache. Finally mDelivery.postResponse(request, response) delivers the response. The following diagram shows the flow of this method:
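One piece both dispatchers rely on is parseNetworkResponse(), which every Request subclass implements to turn the raw NetworkResponse into its result type. As a reference point, a String-returning request can be written roughly like this (a sketch modeled on Volley's StringRequest; imports are omitted, as in the other excerpts):

```java
// A sketch of a String-returning Request, modeled on StringRequest.
public class SimpleStringRequest extends Request<String> {
    private final Response.Listener<String> mListener;

    public SimpleStringRequest(String url, Response.Listener<String> listener,
            Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mListener = listener;
    }

    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        String parsed;
        try {
            // Decode the body using the charset declared in the response headers.
            parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
        } catch (UnsupportedEncodingException e) {
            parsed = new String(response.data);
        }
        // parseCacheHeaders() builds the Cache.Entry that NetworkDispatcher writes to mCache.
        return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
    }

    @Override
    protected void deliverResponse(String response) {
        mListener.onResponse(response);
    }
}
```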
In both CacheDispatcher and NetworkDispatcher, once a request's data has been obtained it is handed off through mDelivery.postResponse(request, response). We know the dispatchers run on separate threads, so the data they fetch must somehow be passed back to the main thread; let's see how the delivery does that. mDelivery is of type ExecutorDelivery, and here is the source of its postResponse method:
```java
public void postResponse(Request<?> request, Response<?> response) {
    postResponse(request, response, null);
}

public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}
```
As the code above shows, the data is ultimately delivered by calling mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable)). Here mResponsePoster is an Executor object.
```java
private final Executor mResponsePoster;

public ExecutorDelivery(final Handler handler) {
    // Make an Executor that just wraps the handler.
    mResponsePoster = new Executor() {
        @Override
        public void execute(Runnable command) {
            handler.post(command);
        }
    };
}
```
Executor is the base interface of the Java thread-pool framework and declares a single execute() method; mResponsePoster implements that method by posting the Runnable to a handler. In postResponse, the request and response are wrapped into a ResponseDeliveryRunnable, which is exactly such a Runnable. So responses are delivered through the handler. Where does this handler come from? It was already mentioned when introducing RequestQueue: mDelivery is set to new ExecutorDelivery(new Handler(Looper.getMainLooper())), so the handler is new Handler(Looper.getMainLooper()), attached to the main thread's message loop. That is how the data successfully makes its way to the main thread.
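What the posted Runnable actually does on the main thread is hand the result to the request's callbacks. A paraphrased, simplified sketch of ResponseDeliveryRunnable (not the verbatim source):

```java
// Simplified sketch of ExecutorDelivery's ResponseDeliveryRunnable:
// it runs on the main thread via handler.post(...).
private class ResponseDeliveryRunnable implements Runnable {
    private final Request mRequest;
    private final Response mResponse;
    private final Runnable mRunnable;

    public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
        mRequest = request;
        mResponse = response;
        mRunnable = runnable;
    }

    @Override
    public void run() {
        // If the request was canceled in the meantime, deliver nothing.
        if (mRequest.isCanceled()) {
            mRequest.finish("canceled-at-delivery");
            return;
        }

        // Deliver either the parsed result or the error to the request's callbacks.
        if (mResponse.isSuccess()) {
            mRequest.deliverResponse(mResponse.result);
        } else {
            mRequest.deliverError(mResponse.error);
        }

        // For soft-expired cache hits, this extra runnable re-queues the request on the network.
        if (mRunnable != null) {
            mRunnable.run();
        }
    }
}
```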
That is how Volley basically works; the diagram below sums up its overall flow:
若是個人文章對您有幫助,不妨點個贊鼓勵一下(^_^)