Android Volley Source Code Analysis (Part 1)


Volley is a very popular open-source Android framework. I use it a lot myself, but I had never looked closely into how it is implemented internally. To be able to use it with more confidence, understanding its implementation is an important step, so I did some digging and am writing it up here to share. Along the way I also drew a diagram, which should help us follow the steps and the ideas behind them. Here it is:

(Figure: overview of the Volley request flow)

It may look confusing at first glance, so let's walk through it step by step together with the source code. Let's dive into Volley's source world and see how its authors thought about the problem.

newRequestQueue

Anyone who has used Volley knows that, whether you are making a network request or loading an image, the first thing to create is a RequestQueue:

public static final RequestQueue volleyQueue = Volley.newRequestQueue(App.mContext);

Creating a RequestQueue naturally goes through Volley's static method newRequestQueue. As the diagram above shows, the entry point is newRequestQueue, so let's see what this method actually does:

public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }
        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }
        Network network = new BasicNetwork(stack);
        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        queue.start();
        return queue;
    }

In the source above we find a stack parameter, which represents the HTTP transport (HttpClient or HttpUrlConnection). We normally pass null, because the method then picks the more suitable one for us: when the API level (Build.VERSION.SDK_INT) is 9 or higher it creates a HurlStack, which is implemented on top of HttpUrlConnection; below 9 it creates an HttpClientStack, which is backed by HttpClient. The chosen stack is wrapped into a Network via BasicNetwork and passed along. After that a RequestQueue is built, and a cache is constructed for us as new DiskBasedCache(cacheDir): it defaults to 5MB and uses a cache directory named volley, so the cached files can be found under data/data/<package name>/volley. Finally start() is called and the RequestQueue is returned. That is the internal implementation behind the first line of code above. Next, let's step into RequestQueue and its start method to see what they do.
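
Before that, a quick usage sketch of newRequestQueue itself (reusing App.mContext from the first snippet; the overload with an explicit stack is optional, and the single-argument form gives the default selection described above):

RequestQueue defaultQueue = Volley.newRequestQueue(App.mContext);               // stack == null: Volley picks HurlStack on API >= 9
RequestQueue hurlQueue = Volley.newRequestQueue(App.mContext, new HurlStack()); // explicitly force the HttpUrlConnection-based stack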

RequestQueue

Inside RequestQueue, the shorter constructors chain through this(...) so that, by default, the constructor below is the one that actually runs:

public RequestQueue(Cache cache, Network network, int threadPoolSize,
            ResponseDelivery delivery) {
        mCache = cache;
        mNetwork = network;
        mDispatchers = new NetworkDispatcher[threadPoolSize];
        mDelivery = delivery;
    }

Here, cache is what NetworkDispatcher will later use to store response.cacheEntry for us; network is the transport wrapper chosen by version as described above; threadPoolSize is the size of the network thread pool and defaults to 4; delivery is an ExecutorDelivery bound to the main thread, which performs the final dispatch of the request's response.
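
For reference, the two shorter constructors chain into the one above roughly as follows (paraphrased from the Volley 1.x source); this is where the default pool size of 4 and the main-thread ExecutorDelivery come from:

private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;

public RequestQueue(Cache cache, Network network) {
    this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
}

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    // Deliver results on the main (UI) thread by wrapping its Looper in a Handler.
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}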

start

public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();
        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }

In start() we can see that two kinds of dispatchers are created: a CacheDispatcher for cache dispatching, and NetworkDispatchers for network dispatching. Both are in fact Threads, which is why start() is called on each of them. Four NetworkDispatchers are built by default, which is effectively a pool of four network threads. Let's not look at their run() methods just yet; instead, let's continue with the add method of RequestQueue that we use all the time.
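
Before that, one small detail: start() begins by calling stop(), which (roughly, as in the Volley 1.x source) quits any dispatchers that are already running; quit() sets mQuit and interrupts the thread, which is what the InterruptedException handling in the run() loops later responds to.

public void stop() {
    if (mCacheDispatcher != null) {
        mCacheDispatcher.quit();      // sets mQuit and interrupts the blocking queue take()
    }
    for (int i = 0; i < mDispatchers.length; i++) {
        if (mDispatchers[i] != null) {
            mDispatchers[i].quit();
        }
    }
}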

add

public <T> Request<T> add(Request<T> request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this);
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }
        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber());
        request.addMarker("add-to-queue");
        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }
        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            if (mWaitingRequests.containsKey(cacheKey)) {
                // There is already a request in flight. Queue up.
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);
            }
            return request;
        }
    }

Let's combine the code with the earlier diagram. First the request is bound to this RequestQueue and, inside a synchronized block, added to mCurrentRequests as one of the requests currently being processed. A sequence number is then assigned, and the request is checked for whether it should be cached (the default is true; call request.setShouldCache(false) if you don't want caching). If it should not be cached, it is added straight to mNetworkQueue, which CacheDispatcher and NetworkDispatcher handle later, and the request is returned. If it should be cached, the synchronized block on mWaitingRequests checks whether a request with the same cacheKey (which is essentially the request URL) is already in flight: if so, the new request is staged in a waiting Queue; if not, a null placeholder is recorded for that cacheKey and the request is added to the cache queue mCacheQueue.
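
A minimal usage sketch of these two branches (the URL here is just a placeholder; everything else is standard Volley API):

StringRequest request = new StringRequest(Request.Method.GET, "http://example.com/data",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // handle the result (delivered on the main thread)
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                // handle the failure
            }
        });
request.setShouldCache(false); // shouldCache() defaults to true; false makes add() skip mCacheQueue
volleyQueue.add(request);      // with caching left enabled it would go through mWaitingRequests/mCacheQueue instead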

finish

A careful reader will notice that when mWaitingRequests already contains that cacheKey, a LinkedList-backed Queue is created and the request is only added to that Queue: only the value in mWaitingRequests is updated, and nothing is added to mCacheQueue. That is not the end of the story, though, because the finish method gets called later. Let's look at its source:

<T> void finish(Request<T> request) {
        // Remove from the set of requests currently being processed.
        synchronized (mCurrentRequests) {
            mCurrentRequests.remove(request);
        }
        synchronized (mFinishedListeners) {
          for (RequestFinishedListener<T> listener : mFinishedListeners) {
            listener.onRequestFinished(request);
          }
        }
        if (request.shouldCache()) {
            synchronized (mWaitingRequests) {
                String cacheKey = request.getCacheKey();
                Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
                if (waitingRequests != null) {
                    if (VolleyLog.DEBUG) {
                        VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
                                waitingRequests.size(), cacheKey);
                    }
                    // Process all queued up requests. They won't be considered as in flight, but
                    // that's not a problem as the cache has been primed by 'request'.
                    mCacheQueue.addAll(waitingRequests);
                }
            }
        }
    }

In the block guarded by request.shouldCache(), exactly as described above, all the requests staged in that waiting Queue are moved into mCacheQueue.
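
A hypothetical illustration of this staging behaviour (the URL and listeners are placeholders): two GET requests for the same URL share one cache key, so the second is parked in mWaitingRequests and only released to mCacheQueue by finish() once the first one completes and has primed the cache.

void requestTwice(RequestQueue queue, Response.Listener<String> listener,
        Response.ErrorListener errorListener) {
    String url = "http://example.com/data";                      // placeholder URL
    queue.add(new StringRequest(url, listener, errorListener));  // goes to mCacheQueue
    queue.add(new StringRequest(url, listener, errorListener));  // staged until the first one finishes
}
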
That covers the main source of RequestQueue. Next, let's dig into CacheDispatcher and see how it actually works.

CacheDispatcher

As mentioned earlier, both CacheDispatcher and NetworkDispatcher are essentially Threads, so naturally we look at their run methods.

run

@Override
    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

        // Make a blocking call to initialize the cache.
        mCache.initialize();
                
        while (true) {
            try {
                // Get a request from the cache triage queue, blocking until
                // at least one is available.
                final Request<?> request = mCacheQueue.take();
                request.addMarker("cache-queue-take");
                
                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }
                
                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }
                
                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }
                
                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");
                    
                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);
                    
                    // Mark the response as intermediate.
                    response.intermediate = true;
             
                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }
                                                                                          
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
        }
    }

That is a lot of code, so let's pick out the main parts together with the diagram. First there is an infinite loop that keeps watching for requests: a request is taken from the cache queue mCacheQueue; if it has been cancelled, request.finish() cleans up its data and we move on to the next one. Otherwise a Cache.Entry is looked up in the mCache mentioned earlier. If it does not exist, or it has completely expired, the request is put onto the network queue mNetworkQueue for a real network request. If a usable entry exists, request.parseNetworkResponse() parses it into a Response; different request types have different parsing implementations, for example StringRequest and JsonObjectRequest each provide their own. Looking at the refreshNeeded() branch, we find that whether or not the entry needs refreshing, the response is handed to mDelivery.postResponse(request, response) for delivery; the difference is that, for a soft-expired entry, the entry is set back on the request and the request is also put onto mNetworkQueue, effectively running the network request again. Back to the delivery step: as mentioned earlier, an ExecutorDelivery was created when the RequestQueue was built, and mDelivery.postResponse is one of its methods. Let's take a look.
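
First, though, here is roughly what StringRequest.parseNetworkResponse looks like (paraphrased from the Volley 1.x source): it decodes the body using the charset from the response headers and attaches a Cache.Entry built by HttpHeaderParser.parseCacheHeaders, which is what later ends up in response.cacheEntry.

@Override
protected Response<String> parseNetworkResponse(NetworkResponse response) {
    String parsed;
    try {
        // Decode the body with the charset declared in the response headers.
        parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
    } catch (UnsupportedEncodingException e) {
        parsed = new String(response.data);
    }
    // parseCacheHeaders() builds the Cache.Entry (TTL, soft TTL, ETag, ...) used for caching.
    return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
}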

ExecutorDelivery

Here an Executor is created that simply wraps the given Handler; since RequestQueue passes in a Handler bound to the main looper, everything delivered later runs on the main thread:

public ExecutorDelivery(final Handler handler) {
        // Make an Executor that just wraps the handler.
        mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
    }

postResponse

@Override
    public void postResponse(Request<?> request, Response<?> response) {
        postResponse(request, response, null);
    }
 
    @Override
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
        request.markDelivered();
        request.addMarker("post-response");
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
    }

These methods are simple: they mark the request as delivered and then call execute with a ResponseDeliveryRunnable. Let's step into ResponseDeliveryRunnable's run to see how the delivery actually happens.

ResponseDeliveryRunnable

public void run() {
            // If this request has canceled, finish it and don't deliver.
            if (mRequest.isCanceled()) {
                mRequest.finish("canceled-at-delivery");
                return;
            }
 
            // Deliver a normal response or error, depending.
            if (mResponse.isSuccess()) {
                 mRequest.deliverResponse(mResponse.result);
            } else {
                mRequest.deliverError(mResponse.error);
            }
 
            // If this is an intermediate response, add a marker, otherwise we're done
            // and the request can be finished.
            if (mResponse.intermediate) {
                mRequest.addMarker("intermediate-response");
            } else {
                mRequest.finish("done");
            }
 
            // If we have been provided a post-delivery runnable, run it.
            if (mRunnable != null) {
                mRunnable.run();
            }
       }

The key part is the isSuccess() branch: depending on the outcome, a different delivery is made. Internally, deliverResponse and deliverError call the onResponse and onErrorResponse methods of the Listener and ErrorListener we know so well, which is how the result finds its way back to our response-handling code.
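
For instance, StringRequest.deliverResponse and the base class Request.deliverError are roughly the following (paraphrased from the Volley 1.x source):

// In StringRequest: hand the parsed String to the success listener.
@Override
protected void deliverResponse(String response) {
    mListener.onResponse(response);
}

// In Request (base class): hand the error to the error listener, if one was set.
public void deliverError(VolleyError error) {
    if (mErrorListener != null) {
        mErrorListener.onErrorResponse(error);
    }
}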

That is the whole cache-dispatching flow. In short, if cached response data exists for a request, no network request is made and the cached data is parsed and delivered directly; otherwise the request goes on to the network.
Now let's see how NetworkDispatcher handles it.

NetworkDispatcher

run

@Override
    public void run() {
         Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        while (true) {
            long startTimeMs = SystemClock.elapsedRealtime();
            Request<?> request;
            try {
                // Take a request from the queue.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
 
            try {
                request.addMarker("network-queue-take");
 
                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }
 
                addTrafficStatsTag(request);
 
                // Perform the network request.
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");
 
                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }
 
                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");
 
                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                     mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }
 
                // Post the response back.
                request.markDelivered();
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                 volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                VolleyError volleyError = new VolleyError(e);
                 volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                mDelivery.postError(request, volleyError);
            }
        }
    }

Here too there is an infinite while loop, and again a request is taken first, this time from mQueue, i.e. the mNetworkQueue we have seen several times. Much of the code mirrors CacheDispatcher and matches the diagram: if the request has been cancelled, it is finished right away; otherwise the actual network request is made by calling mNetwork.performRequest(request). mNetwork is the BasicNetwork wrapper around the stack that newRequestQueue selected by API level earlier, and it delegates to the performRequest method of HurlStack or HttpClientStack (their shared HttpStack contract is sketched below); those methods build the request headers and parameters and carry out the HTTP call with HttpUrlConnection or HttpClient respectively. A 304 response whose result has already been delivered is simply finished. The next part should look familiar: the response is parsed with request.parseNetworkResponse just as in CacheDispatcher, written to the cache if caching is enabled, and finally delivered with mDelivery.postResponse(request, response). The remaining steps are exactly the same as in CacheDispatcher, so there is no need to repeat them here.
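
That HttpStack contract is roughly the following (paraphrased from the Volley 1.x source; the return type is Apache's HttpResponse, which BasicNetwork then converts into a NetworkResponse):

public interface HttpStack {
    /**
     * Performs the HTTP request described by the given Request;
     * additionalHeaders are sent on top of the request's own getHeaders().
     */
    HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders)
            throws IOException, AuthFailureError;
}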

That's it for this first pass over Volley's source. If you go back to the diagram now, doesn't it feel a lot clearer?

Summary

Let's summarize how a network request travels through Volley:

  • Initialize and build a RequestQueue via newRequestQueue.
  • Call RequestQueue's add method to put the request onto the queue.
  • Cache dispatch: CacheDispatcher runs first and checks whether a usable cached entry exists; if so, the response is parsed and delivered directly via postResponse, otherwise the request moves on to the network.
  • Network dispatch: NetworkDispatcher performs the actual request, parses the response, writes the result to the cache if caching is enabled, and finally delivers it via postResponse.

If this was helpful, stay tuned for my next analysis.
More posts: my personal blog
