1. Initializing and starting the request queue
When we use Volley, the very first step is to initialize the request queue. A common way to do it looks like this:
public class MyApplication extends Application {
    public static RequestQueue requestQueue;

    @Override
    public void onCreate() {
        super.onCreate();
        // There is no need to create a RequestQueue for every single HTTP request;
        // initializing it once in the Application is the recommended approach.
        requestQueue = Volley.newRequestQueue(this);
    }
}
Let's see what newRequestQueue() actually does.
The flow chart referenced here maps out Volley's source code. My suggestion is to patiently read the code in your IDE first, following the calls with F3, and only then read the written explanation; large blocks of code pasted into a blog post are not easy to read. To make things easier, I have added comments to the source below; after reading it, the flow chart above should be much clearer.
Left-hand part:
package com.android.volley.toolbox;

public class Volley {
    /** Default on-disk cache directory. */
    private static final String DEFAULT_CACHE_DIR = "volley";

    /**
     * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
     *
     * @param context A {@link Context} to use for creating the cache dir.
     * @param stack An {@link HttpStack} to use for the network, or null for default.
     * @return A started {@link RequestQueue} instance.
     */
    public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

        // Build the userAgent string from the application's package name and version.
        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        // If no stack was supplied, create a default HttpStack.
        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                // On API 9 (Gingerbread) and above, use HurlStack, which is backed by HttpURLConnection.
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable, so HttpClient is used instead.
                // If your minSdkVersion is 9 or higher, you can strip this branch to slim down the app.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

        // Wrap the stack in a Network object; the Network is what actually performs requests.
        Network network = new BasicNetwork(stack);

        // RequestQueue is the dispatcher that schedules requests.
        // A Cache object is passed in for disk caching; you can replace it if you don't need it.
        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        queue.start(); // Start the request queue.

        return queue;
    }

    /**
     * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
     * This is the overload we used earlier to initialize requestQueue; it delegates to the method
     * above, passing null for the stack so that a suitable one is chosen automatically.
     *
     * @param context A {@link Context} to use for creating the cache dir.
     * @return A started {@link RequestQueue} instance.
     */
    public static RequestQueue newRequestQueue(Context context) {
        return newRequestQueue(context, null);
    }
}
A note on that choice: from Gingerbread onward, HttpURLConnection is the best option. Its simple API and small footprint suit Android well, and transparent compression and response caching reduce network use, improve speed, and save battery. New applications should use HttpURLConnection; it is also where ongoing improvements are going.
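If you want to force the HttpURLConnection-backed stack regardless of the default selection, you can pass an explicit HttpStack to the two-argument overload shown above. A minimal sketch; QueueFactory and createQueue are made-up names used only for illustration:

import android.content.Context;

import com.android.volley.RequestQueue;
import com.android.volley.toolbox.HurlStack;
import com.android.volley.toolbox.Volley;

public class QueueFactory {
    // Passing a non-null HttpStack skips the SDK_INT branch inside
    // newRequestQueue(Context, HttpStack) and always uses HttpURLConnection.
    public static RequestQueue createQueue(Context context) {
        return Volley.newRequestQueue(context, new HurlStack());
    }
}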
So now we know that this code ultimately produces a RequestQueue, which means that class is the real heart of the matter. Next, let's walk through the right-hand part of the flow chart.
Right-hand part:
/**
 * Creates the worker pool. Processing will not begin until {@link #start()} is called.
 * This is the real initialization: it stores the network, dispatchers, delivery and cache
 * objects. The cache here is the disk cache.
 *
 * @param cache A Cache to use for persisting responses to disk
 * @param network A Network interface for performing HTTP requests
 * @param threadPoolSize Number of network dispatcher threads to create
 * @param delivery A ResponseDelivery interface for posting responses and errors
 */
public RequestQueue(Cache cache, Network network, int threadPoolSize, ResponseDelivery delivery) {
    mCache = cache;
    mNetwork = network;
    mDispatchers = new NetworkDispatcher[threadPoolSize];
    mDelivery = delivery;
}

/**
 * Creates the worker pool. Processing will not begin until {@link #start()} is called.
 *
 * @param cache A Cache to use for persisting responses to disk
 * @param network A Network interface for performing HTTP requests
 * @param threadPoolSize Number of network dispatcher threads to create
 */
public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

/**
 * Creates the worker pool. Processing will not begin until {@link #start()} is called.
 * This is the constructor called from the Volley class; it passes the default number of
 * network threads, DEFAULT_NETWORK_THREAD_POOL_SIZE = 4.
 *
 * @param cache A Cache to use for persisting responses to disk
 * @param network A Network interface for performing HTTP requests
 */
public RequestQueue(Cache cache, Network network) {
    this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
}

/**
 * Starts the dispatchers in this queue.
 * start() is called right after the queue is constructed.
 */
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.

    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
Now we know what these last two lines in the Volley class actually do:
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
queue.start();
With that, we can understand what the flow chart above shows.
2. Adding a Request to the queue
咱們創建好各類Request對象後,最終都會把它add到RequestQueue中,而後它就能自動進行網絡處理了。這裏的重點就是RequestQueue的add()方法,因此咱們就來看看這是怎樣的一個過程。
On the queue side, the key code is this add() method; the other methods are straightforward and need no explanation.
/**
 * Adds a Request to the dispatch queue.
 *
 * @param request The request to service
 * @return The passed-in request
 */
public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        // mCurrentRequests holds every in-flight request, duplicates included,
        // so the request is added here unconditionally.
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    // Mark the request's current state: it has just been added to the queue.
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    // "Cacheable" here refers to the disk cache.
    if (!request.shouldCache()) {
        // Only requests placed on mNetworkQueue actually hit the network.
        mNetworkQueue.add(request);
        return request;
    }

    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        // If the waiting map already contains this cache key, the duplicate must be filtered.
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            // The duplicate simply waits for the earlier identical request to finish;
            // its result is then handed to this request as well, which filters out
            // redundant network requests.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in flight.
            // Then hand the request to the cache queue.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}
As you can see, add() is where all of the queueing decisions are made; everything else in RequestQueue is simple bookkeeping.
3. Cache queue processing
Part 1 showed that as soon as the RequestQueue is initialized, its start() method is called:
/** Starts the dispatchers in this queue. */
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.

    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
This method first starts the CacheDispatcher, passing it mCacheQueue, mNetworkQueue, mCache and mDelivery. So what exactly is it? It is simply a thread (CacheDispatcher extends Thread), so there is nothing special about that start() call; it is just Thread.start(). What matters is the thread body, so let's see what run() does.
@Override
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

    // Make a blocking call to initialize the cache.
    mCache.initialize();

    while (true) {
        try {
            // Get a request from the cache triage queue, blocking until at least one is available.
            final Request<?> request = mCacheQueue.take();
            request.addMarker("cache-queue-take"); // Mark the current event.

            // If the request has been canceled, don't bother dispatching it;
            // finish it and take the next one from the cache queue.
            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }

            // Attempt to retrieve this item from cache, keyed by the request's cache key.
            Cache.Entry entry = mCache.get(request.getCacheKey());
            if (entry == null) {
                request.addMarker("cache-miss");
                // Cache miss; send off to the network dispatcher.
                mNetworkQueue.put(request);
                continue;
            }

            // If it is completely expired, just send it to the network.
            if (entry.isExpired()) {
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                mNetworkQueue.put(request);
                continue;
            }

            // We have a cache hit; parse its data for delivery back to the request.
            request.addMarker("cache-hit");
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
            request.addMarker("cache-hit-parsed");

            // Does the cached entry need a refresh?
            if (!entry.refreshNeeded()) {
                // Completely unexpired cache hit. Just deliver the response.
                mDelivery.postResponse(request, response);
            } else {
                // Soft-expired cache hit. We can deliver the cached response, but we need to
                // also send the request to the network for refreshing.
                request.addMarker("cache-hit-refresh-needed");
                request.setCacheEntry(entry);

                // Mark the response as intermediate.
                response.intermediate = true;

                // Post the intermediate response back to the user and have the delivery
                // then forward the request along to the network.
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            mNetworkQueue.put(request);
                        } catch (InterruptedException e) {
                            // Not much we can do about this.
                        }
                    }
                });
            }
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }
    }
}
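The expiry checks above are driven by two timestamps that Volley keeps on each Cache.Entry: a hard expiry time (ttl) and a refresh threshold (softTtl). Conceptually they work roughly like this; EntrySketch is a simplified stand-in, not the exact Volley source:

// Simplified view of the fields CacheDispatcher consults on a cache entry.
class EntrySketch {
    long ttl;     // absolute time (ms) after which the entry is fully expired
    long softTtl; // absolute time (ms) after which the entry should be refreshed

    boolean isExpired() {
        // Fully expired: the cached data is not delivered, go back to the network.
        return this.ttl < System.currentTimeMillis();
    }

    boolean refreshNeeded() {
        // Soft-expired: deliver the cached data now, but also refresh it over the network.
        return this.softTtl < System.currentTimeMillis();
    }
}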
4. Network request processing
Having covered the CacheDispatcher started from RequestQueue.start(), let's now look at NetworkDispatcher.
/** Starts the dispatchers in this queue. */
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.

    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
When RequestQueue.start() runs, it creates several NetworkDispatcher objects in a loop and calls start() on each of them. Given the analysis above, it is reasonable to guess that NetworkDispatcher is also a Thread, so let's look at its run() method.
@Override
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    while (true) {
        long startTimeMs = SystemClock.elapsedRealtime();
        Request<?> request;
        try {
            // Take a request from the queue.
            request = mQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }

        try {
            // Record what the request is currently doing.
            request.addMarker("network-queue-take");

            // If the request was cancelled already, do not perform the network request.
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }

            addTrafficStatsTag(request);

            // Perform the network request.
            NetworkResponse networkResponse = mNetwork.performRequest(request);
            request.addMarker("network-http-complete");

            // If the server returned 304 AND we delivered a response already,
            // we're done -- don't deliver a second identical response.
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }

            // Parse the response here on the worker thread.
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");

            // Write to cache if applicable: the request must allow disk caching
            // and the parsed response must carry a cache entry.
            // TODO: Only update cache metadata instead of entire record for 304s.
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }

            // Post the response back.
            request.markDelivered();
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            VolleyError volleyError = new VolleyError(e);
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            mDelivery.postError(request, volleyError);
        }
    }
}
Reading the source makes the division of labour clear: NetworkDispatcher does the network work, while CacheDispatcher does the cache work.
NetworkDispatcher decides whether the returned result is still current and whether the network has to be hit again; CacheDispatcher decides whether a cached entry exists and is still usable, falling back to the network when it is not. The two behave in a similar way, and together they make up what RequestQueue's start() method sets in motion. You can think of RequestQueue as mainly starting, and then managing, several threads with different jobs.
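The "blocking take" pattern both dispatchers rely on is the classic producer/consumer setup on a blocking queue: Volley's internal queues are PriorityBlockingQueue<Request<?>> instances, and take() simply parks the dispatcher thread until add() on the main thread supplies work. A toy illustration of the idea (DispatcherSketch is made up; plain strings stand in for Request objects):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.PriorityBlockingQueue;

public class DispatcherSketch {
    public static void main(String[] args) {
        final BlockingQueue<String> networkQueue = new PriorityBlockingQueue<String>();

        // Several "network dispatcher" threads all block on the same queue in take().
        for (int i = 0; i < 4; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    while (true) {
                        try {
                            String request = networkQueue.take(); // blocks until work arrives
                            System.out.println(Thread.currentThread().getName() + " -> " + request);
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                }
            }, "NetworkDispatcher-" + i).start();
        }

        // add() on the main thread plays the producer role.
        networkQueue.add("GET /users");
        networkQueue.add("GET /posts");
    }
}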
5. The response delivery
Apologies for pasting that start() code yet again; this time the focus is the mDelivery object. It is passed as a parameter into both CacheDispatcher and NetworkDispatcher, and its job is delivering results.
/** Starts the dispatchers in this queue. */
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.

    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
ResponseDelivery is an interface, and ExecutorDelivery implements it. The interface looks like this:
package com.android.volley;

public interface ResponseDelivery {
    /** Parses a response from the network or cache and delivers it. */
    public void postResponse(Request<?> request, Response<?> response);

    /**
     * Parses a response from the network or cache and delivers it. The provided
     * Runnable will be executed after delivery.
     */
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable);

    /** Posts an error for the given request. */
    public void postError(Request<?> request, VolleyError error);
}
Its important delivery methods post either a successful result or an error. Inside ExecutorDelivery, the actual delivery work is done by a ResponseDeliveryRunnable:
/**
 * A Runnable used for delivering network responses to a listener on the main thread.
 */
@SuppressWarnings("rawtypes")
private class ResponseDeliveryRunnable implements Runnable {
    private final Request mRequest;
    private final Response mResponse;
    private final Runnable mRunnable;

    public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
        mRequest = request;
        mResponse = response;
        mRunnable = runnable;
    }

    @SuppressWarnings("unchecked")
    @Override
    public void run() {
        // If this request has canceled, finish it and don't deliver.
        if (mRequest.isCanceled()) {
            mRequest.finish("canceled-at-delivery");
            return;
        }

        // Deliver a normal response or error, depending.
        if (mResponse.isSuccess()) {
            mRequest.deliverResponse(mResponse.result);
        } else {
            mRequest.deliverError(mResponse.error);
        }

        // If this is an intermediate response, add a marker, otherwise we're done
        // and the request can be finished.
        if (mResponse.intermediate) {
            mRequest.addMarker("intermediate-response");
        } else {
            mRequest.finish("done");
        }

        // If we have been provided a post-delivery runnable, run it.
        if (mRunnable != null) {
            mRunnable.run();
        }
    }
}
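How does this Runnable end up on the main thread? ExecutorDelivery is constructed with a Handler bound to the main Looper (see the RequestQueue constructor above) and wraps it in an Executor, so each post*() call simply submits a ResponseDeliveryRunnable to that executor. A simplified sketch of the idea, not the exact Volley source (HandlerExecutorSketch is a made-up name):

import java.util.concurrent.Executor;

import android.os.Handler;

// An Executor that runs every command by posting it to a Handler, i.e. onto the thread
// that Handler's Looper belongs to (the main thread in Volley's case).
class HandlerExecutorSketch implements Executor {
    private final Handler handler;

    HandlerExecutorSketch(Handler handler) {
        this.handler = handler;
    }

    @Override
    public void execute(Runnable command) {
        handler.post(command);
    }
}

// Usage inside a delivery class would look roughly like:
//   Executor poster = new HandlerExecutorSketch(new Handler(Looper.getMainLooper()));
//   poster.execute(new ResponseDeliveryRunnable(request, response, runnable));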
6. Summary
The overall flow:
In the flow chart, the blue part is the main thread, the green part the cache thread, and the orange part the network threads. On the main thread we call RequestQueue's add() to submit a request. The request first enters the cache queue; if a matching cached result is found, it is read and parsed there and the result is called back to the main thread. If nothing usable is found in the cache, the request is moved to the network request queue, the HTTP request is sent, the response is parsed and written to the cache, and the result is called back to the main thread.
In more detail:
1. When a RequestQueue is successfully created, it starts one CacheDispatcher (cache dispatcher) and, by default, four NetworkDispatchers (network request dispatchers);
2. The CacheDispatcher acts as the first layer of buffering. Once running, it blocks on the cache queue mCacheQueue and takes requests from it:
a. A request that has already been cancelled is marked as skipped and finished;
b. A brand-new or expired request is dropped straight into mNetworkQueue for the N NetworkDispatchers to handle;
c. A request whose cached response exists and has not expired is handed to Request's parseNetworkResponse() to determine whether the response is a success; the request and response are then handed to the Delivery. If the cache entry needs refreshing, the request is additionally put into mNetworkQueue.
3. After the caller adds a Request to the RequestQueue:
a. A request that should not be cached (this requires an explicit setting; caching is on by default, see the sketch after this list) is dropped straight into mNetworkQueue for the N NetworkDispatchers;
b. A request that should be cached and has no identical request in flight is added to mCacheQueue for the CacheDispatcher;
c. A request that should be cached, but whose cache key already has a request in flight, is parked in mWaitingRequests for the time being; once the earlier request completes, it is re-added to mCacheQueue;
4. The NetworkDispatcher is where the network request actually happens; it hands the request to BasicNetwork for processing, and again both the request and its result are passed to the Delivery;
5. The Delivery is effectively the last stage of handling. Before the Delivery touches a request, the Request has already parsed the network response, so success or failure has already been decided. The Delivery then acts on that outcome:
a. On success it triggers deliverResponse(), which ultimately invokes the Listener the developer set on the Request;
b. On failure it triggers deliverError(), which ultimately invokes the ErrorListener the developer set on the Request.
Once this is done, the Request's life cycle is over: the Delivery calls the Request's finish(), removing it from the RequestQueue. At the same time, if the waiting list holds requests with the same cache key (URL), all of those staged requests are dropped into mCacheQueue for the CacheDispatcher to process.
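Regarding point 3a above: opting a request out of the disk cache is a single setter call made before enqueuing it. A minimal sketch, reusing the hypothetical RequestSample/MyApplication names from the earlier examples:

// Inside something like RequestSample.fetch(): with caching disabled, RequestQueue.add()
// skips mCacheQueue and puts the request straight onto mNetworkQueue.
request.setShouldCache(false);
MyApplication.requestQueue.add(request);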
The life cycle of a Request:
1. It is added to the RequestQueue via add() and waits to be executed;
2. Once executed, its own parseNetworkResponse() is called to process the network response and decide whether the request succeeded;
3. On success, the Listener the developer set on it is ultimately invoked; on failure, the ErrorListener is invoked (see the sketch below).
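To make that life cycle concrete, here is a minimal sketch of a custom Request subclass. PlainTextRequest is a made-up class used only for illustration, but parseNetworkResponse() and deliverResponse() are exactly the two hooks described above:

import com.android.volley.NetworkResponse;
import com.android.volley.Request;
import com.android.volley.Response;
import com.android.volley.toolbox.HttpHeaderParser;

// Hypothetical custom request that treats the response body as plain text.
public class PlainTextRequest extends Request<String> {
    private final Response.Listener<String> mListener;

    public PlainTextRequest(String url, Response.Listener<String> listener,
            Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mListener = listener;
    }

    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        // Step 2 of the life cycle: runs on a dispatcher thread; success is decided here.
        String body = new String(response.data);
        return Response.success(body, HttpHeaderParser.parseCacheHeaders(response));
    }

    @Override
    protected void deliverResponse(String response) {
        // Step 3 of the life cycle: invoked via the Delivery on the main thread.
        mListener.onResponse(response);
    }
}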
More Volley source-code analysis:
http://blog.csdn.net/guolin_blog/article/details/17656437
http://blog.csdn.net/airk000/article/details/39003587
http://blog.csdn.net/ttdevs/article/details/17764351
The flow chart comes from:
http://www.cnblogs.com/cpacm/p/4211719.html
Other references:
http://blog.csdn.net/t12x3456/article/details/9221611