Over the past couple of days, while reading the HashMap source, I kept getting confused by the load factor (float loadFactor). After a day of digging I finally sorted out a few thoughts of my own.
Internally, HashMap keeps an Entry array named table. When a HashMap is instantiated, two parameters can be passed in: int initialCapacity (the initial array size, default 16) and float loadFactor (the load factor, default 0.75). The constructor first derives capacity, the smallest power of two greater than or equal to initialCapacity (for example, if initialCapacity is 15, capacity becomes 16), and it also computes a threshold, which is capacity * loadFactor. The source code is below:
/**
 * Constructs an empty <tt>HashMap</tt> with the specified initial
 * capacity and load factor.
 *
 * @param  initialCapacity the initial capacity
 * @param  loadFactor      the load factor
 * @throws IllegalArgumentException if the initial capacity is negative
 *         or the load factor is nonpositive
 */
public HashMap(int initialCapacity, float loadFactor) {
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal initial capacity: " +
                                           initialCapacity);
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal load factor: " +
                                           loadFactor);

    // Find a power of 2 >= initialCapacity
    int capacity = 1;
    while (capacity < initialCapacity)
        capacity <<= 1;

    this.loadFactor = loadFactor;
    threshold = (int)(capacity * loadFactor);
    table = new Entry[capacity];
    init();
}
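To make the rounding concrete, here is a minimal standalone sketch (not JDK code; the class and method names are my own) that reproduces the capacity and threshold computation from the constructor above:

public class CapacityDemo {
    // Mirrors the constructor's loop: smallest power of two >= initialCapacity
    static int roundUpToPowerOfTwo(int initialCapacity) {
        int capacity = 1;
        while (capacity < initialCapacity)
            capacity <<= 1;
        return capacity;
    }

    public static void main(String[] args) {
        int initialCapacity = 15;
        float loadFactor = 0.75f;
        int capacity = roundUpToPowerOfTwo(initialCapacity);  // 16
        int threshold = (int) (capacity * loadFactor);        // 12
        System.out.println("capacity=" + capacity + ", threshold=" + threshold);
    }
}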
With the HashMap created, look next at the put method. It first computes a hash from the key's hashCode, then uses the indexFor method to work out which slot of the table array the element belongs in (my first guess was a remainder against table.length; in fact indexFor masks the hash with table.length - 1, which amounts to the same modulo because the length is always a power of two). It then walks the Entry chain at that slot to check whether the key is already present; if so, the existing entry's value is simply replaced. Source code below:
/**
 * Associates the specified value with the specified key in this map.
 * If the map previously contained a mapping for the key, the old
 * value is replaced.
 *
 * @param key key with which the specified value is to be associated
 * @param value value to be associated with the specified key
 * @return the previous value associated with <tt>key</tt>, or
 *         <tt>null</tt> if there was no mapping for <tt>key</tt>.
 *         (A <tt>null</tt> return can also indicate that the map
 *         previously associated <tt>null</tt> with <tt>key</tt>.)
 */
public V put(K key, V value) {
    if (key == null)
        return putForNullKey(value);
    int hash = hash(key.hashCode());
    int i = indexFor(hash, table.length);
    for (Entry<K,V> e = table[i]; e != null; e = e.next) {
        Object k;
        if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
            V oldValue = e.value;
            e.value = value;
            e.recordAccess(this);
            return oldValue;
        }
    }

    modCount++;
    addEntry(hash, key, value, i);
    return null;
}
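For reference, the indexFor method used above is not quoted in this post; in the JDK of this era it is essentially the following one-liner (reproduced from memory, so treat the exact text as approximate). Masking with length - 1 only behaves like a modulo because the table length is always a power of two:

static int indexFor(int h, int length) {
    // Equivalent to h % length when length is a power of two
    return h & (length - 1);
}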
Next, look at the addEntry method. The bucketIndex parameter is the index of the table slot the new element should go into. The method first takes the Entry currently stored at that slot, then places the new element into the slot with its next pointer aimed at the old entry, forming an Entry linked list (in other words, the newly added Entry becomes the head of the chain in that slot and points at the previous chain). After the insertion it checks whether size has reached the threshold; if it has, the table is resized. Source code below:
/**
 * Adds a new entry with the specified key, value and hash code to
 * the specified bucket.  It is the responsibility of this
 * method to resize the table if appropriate.
 *
 * Subclass overrides this to alter the behavior of put method.
 */
void addEntry(int hash, K key, V value, int bucketIndex) {
    Entry<K,V> e = table[bucketIndex];
    table[bucketIndex] = new Entry<K,V>(hash, key, value, e);
    if (size++ >= threshold)
        resize(2 * table.length);
}
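If the head insertion is hard to picture, here is a tiny standalone sketch (Node is a made-up class of mine, not the JDK's Entry) showing that the newest element always ends up at the front of the chain:

public class HeadInsertDemo {
    static class Node {
        String key;
        Node next;
        Node(String key, Node next) { this.key = key; this.next = next; }
    }

    public static void main(String[] args) {
        Node bucket = null;                 // an empty slot in the table
        bucket = new Node("a", bucket);     // chain: a
        bucket = new Node("b", bucket);     // chain: b -> a
        bucket = new Node("c", bucket);     // chain: c -> b -> a
        for (Node n = bucket; n != null; n = n.next)
            System.out.print(n.key + " ");  // prints: c b a
    }
}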
Now the resize method. It is fairly simple: it allocates a new table (an Entry array) and then computes a new threshold from the new capacity and the load factor. The key part is the transfer method it calls. Source code below:
/**
 * Rehashes the contents of this map into a new array with a
 * larger capacity.  This method is called automatically when the
 * number of keys in this map reaches its threshold.
 *
 * If current capacity is MAXIMUM_CAPACITY, this method does not
 * resize the map, but sets threshold to Integer.MAX_VALUE.
 * This has the effect of preventing future calls.
 *
 * @param newCapacity the new capacity, MUST be a power of two;
 *        must be greater than current capacity unless current
 *        capacity is MAXIMUM_CAPACITY (in which case value
 *        is irrelevant).
 */
void resize(int newCapacity) {
    Entry[] oldTable = table;
    int oldCapacity = oldTable.length;
    if (oldCapacity == MAXIMUM_CAPACITY) {
        threshold = Integer.MAX_VALUE;
        return;
    }

    Entry[] newTable = new Entry[newCapacity];
    transfer(newTable);
    table = newTable;
    threshold = (int)(newCapacity * loadFactor);
}
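As a quick worked example with the default load factor of 0.75: a table of capacity 16 is resized once size reaches the threshold 16 * 0.75 = 12; after resize(2 * 16) the capacity is 32 and the new threshold is 32 * 0.75 = 24.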
Because the table array has grown, table.length has grown too, so the positions previously assigned to elements are no longer valid. For example, if the old length was 16 and the new one is 32, a hash reduced modulo 16 and modulo 32 can easily give different results, so every element's position has to be recomputed by walking the array and then walking each Entry chain on it (presumably this is what is meant by a rehash).
/**
 * Transfers all entries from current table to newTable.
 */
void transfer(Entry[] newTable) {
    Entry[] src = table;
    int newCapacity = newTable.length;
    for (int j = 0; j < src.length; j++) {
        Entry<K,V> e = src[j];
        if (e != null) {
            src[j] = null;
            do {
                Entry<K,V> next = e.next;
                int i = indexFor(e.hash, newCapacity);
                e.next = newTable[i];
                newTable[i] = e;
                e = next;
            } while (e != null);
        }
    }
}
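To see why elements can move, here is a small standalone sketch (my own illustration, not JDK code) that prints the bucket index of a few hash values before and after the capacity doubles from 16 to 32. Only hashes whose bit of value 16 is set end up in a different slot:

public class RehashDemo {
    // Same masking trick as HashMap's indexFor
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        int[] hashes = {5, 21, 37, 53};
        for (int h : hashes) {
            System.out.println("hash " + h
                    + ": old index " + indexFor(h, 16)
                    + ", new index " + indexFor(h, 32));
        }
        // hash 5  -> 5, 5   (unchanged)
        // hash 21 -> 5, 21  (moved: the 16 bit is set)
        // hash 37 -> 5, 5   (unchanged)
        // hash 53 -> 5, 21  (moved)
    }
}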
Summary: with a larger load factor, the table array is resized less often, so relatively less memory is used (cheaper in space), but each Entry chain holds more elements, so lookups take longer (more expensive in time). Conversely, with a smaller load factor the table is resized sooner, so more memory is used, but the Entry chains stay shorter and lookups are faster. That is why the load factor is described as a trade-off between time and space, and why you should decide whether you care more about saving time or saving space when choosing it.
Note: when setting the initial capacity, try to pass a power of two; this skips the work of rounding up to the next power of two greater than or equal to initialCapacity, as the short example below shows.
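A quick illustration of that point, following the constructor logic walked through earlier (the threshold values assume loadFactor 0.75):

import java.util.HashMap;
import java.util.Map;

public class InitCapacityDemo {
    public static void main(String[] args) {
        // 16 is already a power of two: capacity 16, threshold 12
        Map<String, String> a = new HashMap<String, String>(16, 0.75f);
        // 17 is rounded up to the next power of two: capacity 32, threshold 24
        Map<String, String> b = new HashMap<String, String>(17, 0.75f);
    }
}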
疑問:感受transfer方法會至關的耗時,是否是不去擴容會比較好?內存