A long while ago I wrote a post about a banner component, and many readers have since asked me for the code, in particular for ImageManager, the image-management factory the banner uses. To follow the discussion below, it helps to have read my two earlier posts: "Three-level image cache strategy in Android (memory, file, network), part 1" and "Implementing horizontal swiping in Android (an ad banner component)". I did not publish the code at the time for a few reasons: there is quite a lot of it; when writing it I also borrowed from the three-level cache strategies already circulating on the web (they are all much alike) and adopted the open-source LruCache (least recently used) eviction policy from the Android project; and it was code from a real product, so it was not convenient to release directly. I have now decided to open it up after some light modification. A word about the banner itself: frankly, it takes a fair amount of code, and using something like ViewPager would cut that down considerably, but what I value most are the banner's design ideas and the way it encapsulates things and passes events. As a piece of custom-view encapsulation and architecture I still consider it very successful, especially once it is combined with ImageManager: the whole thing hangs together naturally, is highly cohesive, and is extremely easy to use. In the simplest case you need only two lines of code; there is no XML to import and no JSON-fetching strategy to handle, because the related business logic is encapsulated inside the banner, which exposes only a handful of interfaces, and implementing those is all it takes to interact with the banner's internals. Below I introduce the second level of the three-level cache strategy: the memory cache.
When an image needs to come from the network, we do not simply download it right away. In this day and age users' data is precious, and an app that burns through traffic will not win their favor. So what do we do? We first look for the image in the memory cache; if it is not there, we look in the file cache; and only if it is still missing do we download it from the network. This post focuses on how to build the memory cache. Its lookup strategy is: search the strong-reference cache first, then the soft-reference cache; if the image is found in the soft-reference cache, it is moved back into the strong-reference cache. When the strong-reference cache is full, the LRU algorithm moves some images into the soft-reference cache, and when the soft-reference cache fills up in turn, its oldest entries are dropped. Before going further, a few concepts need explaining: strong references, soft references, weak references, and LRU.
Strong reference: a direct reference to an object; ordinary object references are strong references.
Soft reference: a reference to an object that the GC may reclaim when memory runs low, provided nothing other than our soft reference still refers to the object.
Weak reference: a reference to an object that is reclaimed as soon as the GC runs, provided nothing other than our weak reference still refers to the object (note how this differs from a soft reference; see the sketch after these definitions).
LRU: Least Recently Used, a page-replacement algorithm. With a fixed number of cached pages, the pages used least recently are the ones evicted. For our memory cache, the strong-reference cache is fixed at 4 MB; once the cached images exceed 4 MB, some images are removed from the strong-reference cache, namely those that have been used least recently.
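To make the difference between soft and weak references concrete, here is a minimal, standalone sketch. It is not part of the banner code, and the exact output depends on the JVM and its GC timing, so treat it as illustrative only:

import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        byte[] payload = new byte[1024 * 1024];                          // 1 MB block, strongly referenced
        SoftReference<byte[]> soft = new SoftReference<byte[]>(payload);
        WeakReference<byte[]> weak = new WeakReference<byte[]>(payload);

        payload = null; // drop the strong reference; only the soft and weak references remain
        System.gc();    // request a collection (only a hint to the JVM)

        // A weak referent is typically cleared as soon as a GC runs, while a soft
        // referent usually survives until the JVM is actually short of memory.
        System.out.println("soft referent still alive: " + (soft.get() != null));
        System.out.println("weak referent still alive: " + (weak.get() != null));
    }
}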
import java.lang.ref.SoftReference;
import java.util.LinkedHashMap;
import java.util.Map.Entry;

import android.graphics.Bitmap;

public class ImageMemoryCache {
    /**
     * Reading from memory is the fastest option. To make the most of the memory available,
     * two layers are used here: the strong-reference cache is not easily reclaimed and holds
     * frequently used data; less frequently used entries are moved into the soft-reference cache.
     */
    private static final String TAG = "ImageMemoryCache";

    private static LruCache<String, Bitmap> mLruCache;                       // strong-reference cache
    private static LinkedHashMap<String, SoftReference<Bitmap>> mSoftCache;  // soft-reference cache
    private static final int LRU_CACHE_SIZE = 4 * 1024 * 1024;               // strong-reference cache capacity: 4 MB
    private static final int SOFT_CACHE_NUM = 20;                            // soft-reference cache entry count

    // Initialize the strong-reference cache and the soft-reference cache here
    public ImageMemoryCache() {
        mLruCache = new LruCache<String, Bitmap>(LRU_CACHE_SIZE) {
            @Override // sizeOf returns the size of a single map value
            protected int sizeOf(String key, Bitmap value) {
                if (value != null)
                    return value.getRowBytes() * value.getHeight();
                else
                    return 0;
            }

            @Override
            protected void entryRemoved(boolean evicted, String key, Bitmap oldValue, Bitmap newValue) {
                if (oldValue != null) {
                    // When the strong-reference cache is full, the LRU algorithm moves
                    // the images that have not been used recently into this soft-reference cache
                    Logger.d(TAG, "LruCache is full, move to SoftReferenceCache");
                    mSoftCache.put(key, new SoftReference<Bitmap>(oldValue));
                }
            }
        };

        mSoftCache = new LinkedHashMap<String, SoftReference<Bitmap>>(SOFT_CACHE_NUM, 0.75f, true) {
            private static final long serialVersionUID = 1L;

            /**
             * When the number of soft references exceeds 20, the oldest one
             * is removed from the linked hash map.
             */
            @Override
            protected boolean removeEldestEntry(Entry<String, SoftReference<Bitmap>> eldest) {
                if (size() > SOFT_CACHE_NUM) {
                    Logger.d(TAG, "should remove the eldest from SoftReference");
                    return true;
                }
                return false;
            }
        };
    }

    /**
     * Get an image from the cache.
     */
    public Bitmap getBitmapFromMemory(String url) {
        Bitmap bitmap;

        // Look in the strong-reference cache first
        synchronized (mLruCache) {
            bitmap = mLruCache.get(url);
            if (bitmap != null) {
                // If found, move the entry to the front of the LinkedHashMap
                // so that the LRU algorithm evicts it last
                mLruCache.remove(url);
                mLruCache.put(url, bitmap);
                Logger.d(TAG, "get bmp from LruCache, url=" + url);
                return bitmap;
            }
        }

        // If it is not in the strong-reference cache, look in the soft-reference cache;
        // when found there, move it from the soft-reference cache back into the strong-reference cache
        synchronized (mSoftCache) {
            SoftReference<Bitmap> bitmapReference = mSoftCache.get(url);
            if (bitmapReference != null) {
                bitmap = bitmapReference.get();
                if (bitmap != null) {
                    // Move the image back into the LruCache
                    mLruCache.put(url, bitmap);
                    mSoftCache.remove(url);
                    Logger.d(TAG, "get bmp from SoftReferenceCache, url=" + url);
                    return bitmap;
                } else {
                    mSoftCache.remove(url);
                }
            }
        }
        return null;
    }

    /**
     * Add an image to the cache.
     */
    public void addBitmapToMemory(String url, Bitmap bitmap) {
        if (bitmap != null) {
            synchronized (mLruCache) {
                mLruCache.put(url, bitmap);
            }
        }
    }

    public void clearCache() {
        mSoftCache.clear();
    }
}
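For context, here is a rough sketch of how this memory cache might be driven by the memory → file → network lookup described above. This is only an illustration: FileCacheLayer, its method names, and downloadBitmap() are placeholders I am assuming here, not the actual ImageManager API from the series.

import android.graphics.Bitmap;

public class ImageLookupSketch {
    /** Placeholder for the file-cache layer covered elsewhere in the series; not the real API. */
    public interface FileCacheLayer {
        Bitmap getImage(String url);
        void saveBitmap(Bitmap bitmap, String url);
    }

    private final ImageMemoryCache mMemoryCache = new ImageMemoryCache();
    private final FileCacheLayer mFileCache;

    public ImageLookupSketch(FileCacheLayer fileCache) {
        mFileCache = fileCache;
    }

    public Bitmap getBitmap(String url) {
        // 1. Memory cache: strong references first, then soft references
        Bitmap bitmap = mMemoryCache.getBitmapFromMemory(url);
        if (bitmap != null) {
            return bitmap;
        }
        // 2. File cache
        bitmap = mFileCache.getImage(url);
        if (bitmap != null) {
            mMemoryCache.addBitmapToMemory(url, bitmap); // promote the disk hit into memory
            return bitmap;
        }
        // 3. Network, then populate both caches on success
        bitmap = downloadBitmap(url);
        if (bitmap != null) {
            mFileCache.saveBitmap(bitmap, url);
            mMemoryCache.addBitmapToMemory(url, bitmap);
        }
        return bitmap;
    }

    // Hypothetical network helper; the real download code is out of scope for this post.
    private Bitmap downloadBitmap(String url) {
        return null;
    }
}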
In addition, here is the LruCache implementation for reference:
/*
 * Copyright (C) 2011 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Map.Entry;

/**
 * A cache that holds strong references to a limited number of values. Each time a value
 * is accessed it is moved to the head of the queue. When a new item is added to a full
 * cache, the item at the tail of the queue is evicted.
 *
 * If one of your cached values needs to be explicitly released, override entryRemoved().
 *
 * If the item for a key is missing, override create(); this simplifies the calling code,
 * because something can still be returned even on a cache miss.
 *
 * By default the cache size is measured in the number of items; override sizeOf() to
 * compute the size of different items.
 *
 * <pre> {@code
 * int cacheSize = 4 * 1024 * 1024; // 4MiB
 * LruCache<String, Bitmap> bitmapCache = new LruCache<String, Bitmap>(cacheSize) {
 *     protected int sizeOf(String key, Bitmap value) {
 *         return value.getByteCount();
 *     }
 * }}</pre>
 *
 * <p>This class is thread-safe. Perform multiple cache operations atomically by
 * synchronizing on the cache: <pre> {@code
 * synchronized (cache) {
 *     if (cache.get(key) == null) {
 *         cache.put(key, value);
 *     }
 * }}</pre>
 *
 * Null keys and null values are not allowed. When get(), put() or remove() returns null,
 * the corresponding key is not in the cache.
 */
public class LruCache<K, V> {
    private final LinkedHashMap<K, V> map;

    /** Size of this cache in units. Not necessarily the number of elements. */
    private int size;           // current stored size
    private int maxSize;        // maximum allowed size

    private int putCount;       // number of puts
    private int createCount;    // number of creates
    private int evictionCount;  // number of evictions
    private int hitCount;       // number of hits
    private int missCount;      // number of misses

    /**
     * @param maxSize for caches that do not override {@link #sizeOf}, this is
     *     the maximum number of entries in the cache. For all other caches,
     *     this is the maximum sum of the sizes of the entries in this cache.
     */
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

    /**
     * Returns the item for the given key, creating it if necessary. The returned item is
     * moved to the head of the queue. Returns null if the value is not cached and cannot
     * be created.
     */
    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            if (mapValue != null) {
                hitCount++;
                return mapValue;
            }
            missCount++;
        }

        /*
         * Attempt to create a value. This may take a long time, and the map
         * may be different when create() returns. If a conflicting value was
         * added to the map while create() was working, we leave that value in
         * the map and release the created value.
         */
        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }

        synchronized (this) {
            createCount++;
            mapValue = map.put(key, createdValue);

            if (mapValue != null) {
                // There was a conflict so undo that last put
                map.put(key, mapValue);
            } else {
                size += safeSizeOf(key, createdValue);
            }
        }

        if (mapValue != null) {
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            trimToSize(maxSize);
            return createdValue;
        }
    }

    /**
     * Caches {@code value} for {@code key}. The value is moved to the head of
     * the queue.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }

        V previous;
        synchronized (this) {
            putCount++;
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }

        trimToSize(maxSize);
        return previous;
    }

    /**
     * @param maxSize the maximum size of the cache before returning. May be -1
     *     to evict even 0-sized elements.
     */
    private void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                if (size <= maxSize) {
                    break;
                }

                /*
                 * Map.Entry<K, V> toEvict = map.eldest();
                 */
                // modified by echy: map.eldest() is only available on Android's internal
                // LinkedHashMap, so take the eldest entry from the iterator instead
                Iterator<Entry<K, V>> iter = map.entrySet().iterator();
                Map.Entry<K, V> toEvict = null;
                if (iter.hasNext()) {
                    toEvict = iter.next();
                }

                if (toEvict == null) {
                    break;
                }

                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }

            entryRemoved(true, key, value, null);
        }
    }

    /**
     * Removes the entry for {@code key} if it exists.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V remove(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V previous;
        synchronized (this) {
            previous = map.remove(key);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, null);
        }

        return previous;
    }

    /**
     * Called for entries that have been evicted or removed. This method is
     * invoked when a value is evicted to make space, removed by a call to
     * {@link #remove}, or replaced by a call to {@link #put}. The default
     * implementation does nothing.
     *
     * <p>The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * @param evicted true if the entry is being removed to make space, false
     *     if the removal was caused by a {@link #put} or {@link #remove}.
     * @param newValue the new value for {@code key}, if it exists. If non-null,
     *     this removal was caused by a {@link #put}. Otherwise it was caused by
     *     an eviction or a {@link #remove}.
     */
    protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {}

    /**
     * Called after a cache miss to compute a value for the corresponding key.
     * Returns the computed value or null if no value can be computed. The
     * default implementation returns null.
     *
     * <p>The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * <p>If a value for {@code key} exists in the cache when this method
     * returns, the created value will be released with {@link #entryRemoved}
     * and discarded. This can occur when multiple threads request the same key
     * at the same time (causing multiple values to be created), or when one
     * thread calls {@link #put} while another is creating a value for the same
     * key.
     */
    protected V create(K key) {
        return null;
    }

    private int safeSizeOf(K key, V value) {
        int result = sizeOf(key, value);
        if (result < 0) {
            throw new IllegalStateException("Negative size: " + key + "=" + value);
        }
        return result;
    }

    /**
     * Returns the size of the entry for {@code key} and {@code value} in
     * user-defined units. The default implementation returns 1 so that size
     * is the number of entries and max size is the maximum number of entries.
     *
     * <p>An entry's size must not change while it is in the cache.
     */
    protected int sizeOf(K key, V value) {
        return 1;
    }

    /**
     * Clear the cache, calling {@link #entryRemoved} on each removed entry.
     */
    public final void evictAll() {
        trimToSize(-1); // -1 will evict 0-sized elements
    }

    /**
     * For caches that do not override {@link #sizeOf}, this returns the number
     * of entries in the cache. For all other caches, this returns the sum of
     * the sizes of the entries in this cache.
     */
    public synchronized final int size() {
        return size;
    }

    /**
     * For caches that do not override {@link #sizeOf}, this returns the maximum
     * number of entries in the cache. For all other caches, this returns the
     * maximum sum of the sizes of the entries in this cache.
     */
    public synchronized final int maxSize() {
        return maxSize;
    }

    /**
     * Returns the number of times {@link #get} returned a value that was
     * already present in the cache.
     */
    public synchronized final int hitCount() {
        return hitCount;
    }

    /**
     * Returns the number of times {@link #get} returned null or required a new
     * value to be created.
     */
    public synchronized final int missCount() {
        return missCount;
    }

    /**
     * Returns the number of times {@link #create(Object)} returned a value.
     */
    public synchronized final int createCount() {
        return createCount;
    }

    /**
     * Returns the number of times {@link #put} was called.
     */
    public synchronized final int putCount() {
        return putCount;
    }

    /**
     * Returns the number of values that have been evicted.
     */
    public synchronized final int evictionCount() {
        return evictionCount;
    }

    /**
     * Returns a copy of the current contents of the cache, ordered from least
     * recently accessed to most recently accessed.
     */
    public synchronized final Map<K, V> snapshot() {
        return new LinkedHashMap<K, V>(map);
    }

    @Override
    public synchronized final String toString() {
        int accesses = hitCount + missCount;
        int hitPercent = accesses != 0 ? (100 * hitCount / accesses) : 0;
        return String.format("LruCache[maxSize=%d,hits=%d,misses=%d,hitRate=%d%%]",
                maxSize, hitCount, missCount, hitPercent);
    }
}
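To see the LRU behavior in action, here is a small, illustrative exercise of the class above (my own demo, not part of the original project). It uses the default sizeOf(), so the cache size is simply the number of entries; the expected values in the comments assume exactly this access pattern.

public class LruCacheDemo {
    public static void main(String[] args) {
        // Capacity of 3 entries; entryRemoved() is overridden only to log evictions
        LruCache<String, String> cache = new LruCache<String, String>(3) {
            @Override
            protected void entryRemoved(boolean evicted, String key,
                    String oldValue, String newValue) {
                if (evicted) {
                    System.out.println("evicted: " + key);
                }
            }
        };

        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");

        cache.get("a");       // touch "a" so it becomes the most recently used entry
        cache.put("d", "4");  // exceeds the capacity of 3 -> "b", the least recently used, is evicted

        System.out.println(cache.snapshot().keySet()); // [c, a, d], from least to most recently used
        System.out.println(cache);                     // LruCache[maxSize=3,hits=1,misses=0,hitRate=100%]
    }
}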