Volley is a very popular open-source Android framework. I use it all the time, but I had never really dug into how it works internally. Understanding its implementation is an important step toward using it with more confidence, so I did some digging and wrote these notes to share. Along the way I also drew a diagram of the overall flow, which should help us understand the steps involved and the principles behind them.

The diagram may look bewildering at first glance, so let's work through the source code step by step and enter the world of Volley's internals to see how its authors approached the problem.
Anyone who has used Volley knows that, whether you are making a network request or loading an image, the first thing to set up is a RequestQueue:
```java
public static final RequestQueue volleyQueue = Volley.newRequestQueue(App.mContext);
```
Creating a RequestQueue naturally goes through the static method Volley.newRequestQueue, and the diagram above shows that this is also our entry point. So let's see what this method actually does:
```java
public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }

    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            // Prior to Gingerbread, HttpUrlConnection was unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }

    Network network = new BasicNetwork(stack);

    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();

    return queue;
}
```
In the code above there is a `stack` parameter, which represents the HTTP transport (HttpClient versus HttpUrlConnection). We normally leave it as null, because the method then picks the more suitable implementation for us: on API level 9 and above it creates a HurlStack, which talks to the network through HttpUrlConnection, while below 9 it creates an HttpClientStack, which is of course backed by HttpClient. The chosen stack is wrapped in a BasicNetwork to build the Network object that gets passed along. After that the RequestQueue itself is constructed, together with a cache, `new DiskBasedCache(cacheDir)`, whose default size is 5MB and whose directory is named `volley`, so the cached responses can be found under data/data/<application package>/volley. Finally its start method is called and the RequestQueue is returned. That is everything behind the one-liner we started with.
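Incidentally, since newRequestQueue takes the HttpStack as a parameter, a caller can force a particular transport instead of relying on the version check. A minimal sketch based purely on the signature shown above:

```java
// Force the HttpUrlConnection-based stack regardless of API level.
RequestQueue queue = Volley.newRequestQueue(context, new HurlStack());
```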
Next, let's step into the start method of RequestQueue and see what it does.
Inside RequestQueue, the shorter constructors delegate through this(...) and end up calling the constructor below with default values:
```java
public RequestQueue(Cache cache, Network network, int threadPoolSize,
        ResponseDelivery delivery) {
    mCache = cache;
    mNetwork = network;
    mDispatchers = new NetworkDispatcher[threadPoolSize];
    mDelivery = delivery;
}
```
Here `cache` is what the NetworkDispatcher later uses to store `response.cacheEntry`; `network` is the transport wrapper chosen above according to the API level; `threadPoolSize` is the size of the dispatcher thread pool, 4 by default; and `delivery` is an ExecutorDelivery bound to the main thread, which performs the final delivery of request responses.
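Nothing stops you from building the queue by hand if you want a different cache size or pool size. A rough sketch, assuming the legacy constructors shown in this article (the 10MB cache and the pool of 2 threads are arbitrary choices for illustration):

```java
// Hypothetical manual setup: a 10 MB disk cache and only two network dispatcher threads.
File cacheDir = new File(context.getCacheDir(), "volley");
Cache cache = new DiskBasedCache(cacheDir, 10 * 1024 * 1024);
Network network = new BasicNetwork(new HurlStack());

RequestQueue queue = new RequestQueue(cache, network, 2);
queue.start();
```

With the queue constructed, start() is what actually brings the dispatchers to life: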
```java
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
```
In start() two kinds of dispatchers are created: a CacheDispatcher for cache dispatching and NetworkDispatchers for network dispatching. Both are really just Threads, which is why start() is called on each of them. Four NetworkDispatchers are built by default, effectively a pool of four worker threads. Let's hold off on what their run() methods do and first look at the add method of RequestQueue, the one we call all the time:
```java
public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}
```
Reading the code alongside the diagram: the request is first bound to this RequestQueue and, under synchronization, added to mCurrentRequests, the set of requests currently being processed. Then the shouldCache() check decides whether the request should be cached (true by default; call request.setShouldCache(false) if you don't want caching). If caching is off, the request goes straight into the mNetworkQueue we saw earlier, where a NetworkDispatcher will pick it up, and the request is returned. If it should be cached, the synchronized (mWaitingRequests) block checks whether a request with the same cacheKey, which is simply the request URL, is already in flight and handles it accordingly; otherwise the request is added to the cache queue mCacheQueue.
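As a quick illustration of the shouldCache flag, here is how a caller might skip the cache path entirely, using the standard StringRequest that ships with Volley (the URL is just a placeholder):

```java
// Hypothetical usage: a request that always bypasses Volley's response cache.
StringRequest request = new StringRequest(Request.Method.GET, "http://example.com/ping",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                Log.d("Volley", "got: " + response);
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.e("Volley", "request failed", error);
            }
        });
request.setShouldCache(false);  // add() will put it straight into mNetworkQueue
volleyQueue.add(request);
```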
A careful reader will notice that when mWaitingRequests already contains the cacheKey, a LinkedList (the Queue) is created if needed and the request is only added to that Queue, i.e. only the corresponding value in mWaitingRequests is updated; the request never reaches mCacheQueue. It looks as if those staged requests would be lost, but they aren't, because finish is called later on. Here is its source:
```java
<T> void finish(Request<T> request) {
    // Remove from the set of requests currently being processed.
    synchronized (mCurrentRequests) {
        mCurrentRequests.remove(request);
    }
    synchronized (mFinishedListeners) {
        for (RequestFinishedListener<T> listener : mFinishedListeners) {
            listener.onRequestFinished(request);
        }
    }

    if (request.shouldCache()) {
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
            if (waitingRequests != null) {
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
                            waitingRequests.size(), cacheKey);
                }
                // Process all queued up requests. They won't be considered as in flight, but
                // that's not a problem as the cache has been primed by 'request'.
                mCacheQueue.addAll(waitingRequests);
            }
        }
    }
}
```
Look at the mWaitingRequests.remove(cacheKey) call and the mCacheQueue.addAll(waitingRequests) line: exactly as described above, all the requests staged in the Queue are moved into mCacheQueue.
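finish also notifies any registered RequestFinishedListeners, which gives you a convenient hook for logging or bookkeeping. A small sketch, assuming the addRequestFinishedListener registration method that accompanies this interface in the version shown here:

```java
// Hypothetical logging hook for finished requests.
volleyQueue.addRequestFinishedListener(new RequestQueue.RequestFinishedListener<Object>() {
    @Override
    public void onRequestFinished(Request<Object> request) {
        Log.d("Volley", "finished: " + request.getUrl());
    }
});
```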
That covers the main parts of RequestQueue. Next let's analyze the source of CacheDispatcher and see how it actually works. As mentioned before, CacheDispatcher and NetworkDispatcher are both essentially Threads, so naturally we look at the run method:
```java
@Override
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

    // Make a blocking call to initialize the cache.
    mCache.initialize();

    while (true) {
        try {
            // Get a request from the cache triage queue, blocking until
            // at least one is available.
            final Request<?> request = mCacheQueue.take();
            request.addMarker("cache-queue-take");

            // If the request has been canceled, don't bother dispatching it.
            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }

            // Attempt to retrieve this item from cache.
            Cache.Entry entry = mCache.get(request.getCacheKey());
            if (entry == null) {
                request.addMarker("cache-miss");
                // Cache miss; send off to the network dispatcher.
                mNetworkQueue.put(request);
                continue;
            }

            // If it is completely expired, just send it to the network.
            if (entry.isExpired()) {
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                mNetworkQueue.put(request);
                continue;
            }

            // We have a cache hit; parse its data for delivery back to the request.
            request.addMarker("cache-hit");
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
            request.addMarker("cache-hit-parsed");

            if (!entry.refreshNeeded()) {
                // Completely unexpired cache hit. Just deliver the response.
                mDelivery.postResponse(request, response);
            } else {
                // Soft-expired cache hit. We can deliver the cached response,
                // but we need to also send the request to the network for
                // refreshing.
                request.addMarker("cache-hit-refresh-needed");
                request.setCacheEntry(entry);

                // Mark the response as intermediate.
                response.intermediate = true;

                // Post the intermediate response back to the user and have
                // the delivery then forward the request along to the network.
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            mNetworkQueue.put(request);
                        } catch (InterruptedException e) {
                            // Not much we can do about this.
                        }
                    }
                });
            }

        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }
    }
}
```
That looks like a lot, so let's read it alongside the diagram and pick out the essentials. An infinite loop keeps watching for requests: each iteration takes a request from the cache queue mCacheQueue. If the request has been canceled, request.finish() cleans up its data and the loop moves on to the next request; otherwise the dispatcher fetches a Cache.Entry from the mCache mentioned earlier. If the entry does not exist or has expired, the request is put into the network queue mNetworkQueue for a regular network request later on. If a usable entry exists, request.parseNetworkResponse() parses it into a Response; different request types parse differently, for example StringRequest and JsonObjectRequest each have their own implementation, like the one sketched below.
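For reference, this is roughly what StringRequest's parseNetworkResponse looks like in the Volley version discussed here (quoted from memory, so treat it as a sketch rather than the exact source):

```java
@Override
protected Response<String> parseNetworkResponse(NetworkResponse response) {
    String parsed;
    try {
        // Decode the raw bytes using the charset declared in the response headers.
        parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
    } catch (UnsupportedEncodingException e) {
        parsed = new String(response.data);
    }
    // Attach the cache entry derived from the HTTP cache headers.
    return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
}
```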
Back in the run() loop, notice that whether or not the entry needs refreshing, the response is delivered through mDelivery.postResponse(request, response). The difference is that a soft-expired entry is set back on the request, which is then also put into mNetworkQueue, effectively running the network request again to refresh the cache. Now back to the delivery stage: as mentioned earlier, an ExecutorDelivery is created when the RequestQueue is built, and mDelivery.postResponse is one of its methods. Let's take a look.
Its constructor creates an Executor that simply wraps a Handler, so every later delivery runs on the main thread:
```java
public ExecutorDelivery(final Handler handler) {
    // Make an Executor that just wraps the handler.
    mResponsePoster = new Executor() {
        @Override
        public void execute(Runnable command) {
            handler.post(command);
        }
    };
}
```
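Where does that handler come from? In the version of Volley discussed here, RequestQueue's shorter constructor builds the delivery with a main-looper Handler, which is why responses arrive on the UI thread (quoted from memory, so treat it as a sketch):

```java
// RequestQueue's convenience constructor: deliveries are posted to the main looper.
public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}
```

The postResponse overloads themselves look like this: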
```java
@Override
public void postResponse(Request<?> request, Response<?> response) {
    postResponse(request, response, null);
}

@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}
```
These methods are straightforward: they just hand a ResponseDeliveryRunnable to execute. Step into its run() to see how the delivery happens:
```java
public void run() {
    // If this request has canceled, finish it and don't deliver.
    if (mRequest.isCanceled()) {
        mRequest.finish("canceled-at-delivery");
        return;
    }

    // Deliver a normal response or error, depending.
    if (mResponse.isSuccess()) {
        mRequest.deliverResponse(mResponse.result);
    } else {
        mRequest.deliverError(mResponse.error);
    }

    // If this is an intermediate response, add a marker, otherwise we're done
    // and the request can be finished.
    if (mResponse.intermediate) {
        mRequest.addMarker("intermediate-response");
    } else {
        mRequest.finish("done");
    }

    // If we have been provided a post-delivery runnable, run it.
    if (mRunnable != null) {
        mRunnable.run();
    }
}
```
The key part is the success/error branch: depending on the response, either deliverResponse or deliverError is invoked, and internally those call the onResponse and onErrorResponse methods of the listeners we know so well, which is how the result finally reaches our own handling code.
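As a concrete example, this is roughly how StringRequest and the Request base class implement those two calls in this version of Volley (again quoted from memory, so treat it as a sketch):

```java
// In StringRequest: hand the parsed string to the success listener.
@Override
protected void deliverResponse(String response) {
    mListener.onResponse(response);
}

// In Request, the base class: hand the error to the error listener, if any.
public void deliverError(VolleyError error) {
    if (mErrorListener != null) {
        mErrorListener.onErrorResponse(error);
    }
}
```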
That is the whole cache-dispatch path. In short: if cached response data exists for a request, no network call is made and the cached data is delivered directly; otherwise the request goes on to the network.
Now let's see how NetworkDispatcher handles its side of things:
```java
@Override
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    while (true) {
        long startTimeMs = SystemClock.elapsedRealtime();
        Request<?> request;
        try {
            // Take a request from the queue.
            request = mQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }

        try {
            request.addMarker("network-queue-take");

            // If the request was cancelled already, do not perform the
            // network request.
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }

            addTrafficStatsTag(request);

            // Perform the network request.
            NetworkResponse networkResponse = mNetwork.performRequest(request);
            request.addMarker("network-http-complete");

            // If the server returned 304 AND we delivered a response already,
            // we're done -- don't deliver a second identical response.
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }

            // Parse the response here on the worker thread.
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");

            // Write to cache if applicable.
            // TODO: Only update cache metadata instead of entire record for 304s.
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }

            // Post the response back.
            request.markDelivered();
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            VolleyError volleyError = new VolleyError(e);
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            mDelivery.postError(request, volleyError);
        }
    }
}
```
Here too there is an infinite while loop that first takes a request, this time from mQueue, which is the mNetworkQueue we have seen several times; much of the code mirrors CacheDispatcher, just as the diagram shows. If the request has been canceled it is finished straight away; otherwise the network request is performed by calling mNetwork.performRequest(request). mNetwork is the wrapper around the stack that newRequestQueue selected according to the API level, so this call ends up in the performRequest method of either HurlStack or HttpClientStack, which builds the request headers and parameters and carries out the actual exchange with HttpUrlConnection or HttpClient respectively (a simplified sketch follows).
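The real HurlStack handles every HTTP method, request bodies, SSL and so on, but stripped to its essence the HttpUrlConnection side of a simple GET looks something like this. This is only an illustration of the idea, not Volley's actual code, and the fetch helper is made up for the sketch:

```java
// Simplified illustration of the HttpURLConnection work a stack like HurlStack performs.
static byte[] fetch(String url, Map<String, String> headers, int timeoutMs) throws IOException {
    HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
    connection.setConnectTimeout(timeoutMs);
    connection.setReadTimeout(timeoutMs);
    for (Map.Entry<String, String> header : headers.entrySet()) {
        connection.addRequestProperty(header.getKey(), header.getValue());
    }
    InputStream in = null;
    try {
        in = connection.getInputStream();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        int count;
        while ((count = in.read(buffer)) != -1) {
            out.write(buffer, 0, count);
        }
        // A real stack wraps these bytes, plus the status code and headers,
        // into the NetworkResponse that Volley parses afterwards.
        return out.toByteArray();
    } finally {
        if (in != null) {
            in.close();
        }
        connection.disconnect();
    }
}
```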
The next part should look familiar: just as in CacheDispatcher, the response is parsed with parseNetworkResponse, written into the cache if the request should be cached, and finally handed to mDelivery.postResponse(request, response) for delivery. The remaining steps are exactly the same as in CacheDispatcher, so there is no need to repeat them here.
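One practical note on the cancellation checks both dispatchers perform: from the caller's side, cancellation is usually done by tagging requests and calling cancelAll, for example (the "MainActivity" tag is just an illustration):

```java
// Tag requests when adding them...
request.setTag("MainActivity");
volleyQueue.add(request);

// ...and cancel everything with that tag later, e.g. in onStop(). Canceled requests are
// simply finished by the dispatchers instead of being executed or delivered.
volleyQueue.cancelAll("MainActivity");
```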
That wraps up this look at Volley's source. Go back to the diagram now; it should feel a lot clearer.
To sum up how a Volley network request flows:

1. newRequestQueue initializes and constructs the RequestQueue.
2. The add method of RequestQueue puts the request onto the appropriate queue.
3. CacheDispatcher checks whether a cached entry exists; if it does, the response is parsed and delivered straight away via postResponse, otherwise the request moves on to the network.
4. NetworkDispatcher performs the actual request, parses the response, saves the result to the cache if caching is enabled, and finally delivers it via postResponse.

If this was helpful, stay tuned for my next write-up.
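To connect this summary back to everyday usage, here is an ordinary end-to-end request as most apps would write it, reusing the volleyQueue from the beginning of the article (the URL is just a placeholder):

```java
// End-to-end usage: a JsonObjectRequest added to the queue created at the top of the article.
JsonObjectRequest jsonRequest = new JsonObjectRequest(Request.Method.GET,
        "http://example.com/api/user", null,
        new Response.Listener<JSONObject>() {
            @Override
            public void onResponse(JSONObject response) {
                // Delivered on the main thread by ExecutorDelivery.
                Log.d("Volley", "user: " + response.toString());
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.e("Volley", "request failed", error);
            }
        });
volleyQueue.add(jsonRequest);
```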
More posts: my personal blog