A quick recap of the previous installment's ✈ tour route: putAll() --> putMapEntries() --> tableSizeFor() --> resize() --> hash() --> putVal()
This installment continues the journey with you: putVal() --> putTreeVal() --> find() --> balanceInsertion() --> rotateLeft()/rotateRight() --> treeifyBin()
// To find the right spot for the new node, the source runs through a series of comparisons.
final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab,
                               int h, K k, V v) {
    Class<?> kc = null;
    boolean searched = false;
    TreeNode<K,V> root = (parent != null) ? root() : this;   // get the root node and traverse from it
    for (TreeNode<K,V> p = root;;) {
        int dir, ph; K pk;
        if ((ph = p.hash) > h)
            dir = -1;                                         // left
        else if (ph < h)
            dir = 1;                                          // right
        else if ((pk = p.key) == k || (k != null && k.equals(pk)))
            return p;                                         // same key: return the existing node
        else if ((kc == null &&
                  (kc = comparableClassFor(k)) == null) ||
                 (dir = compareComparables(kc, k, pk)) == 0) {
            if (!searched) {
                TreeNode<K,V> q, ch;
                searched = true;
                if (((ch = p.left) != null &&
                     (q = ch.find(h, k, kc)) != null) ||
                    ((ch = p.right) != null &&
                     (q = ch.find(h, k, kc)) != null))
                    return q;
            }
            dir = tieBreakOrder(k, pk);
        }

        TreeNode<K,V> xp = p;
        if ((p = (dir <= 0) ? p.left : p.right) == null) {
            Node<K,V> xpn = xp.next;
            TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn);
            if (dir <= 0)
                xp.left = x;
            else
                xp.right = x;
            xp.next = x;
            x.parent = x.prev = xp;
            if (xpn != null)
                ((TreeNode<K,V>)xpn).prev = x;
            moveRootToFront(tab, balanceInsertion(root, x));
            return null;
        }
    }
}
The hash of the current node (ph) is compared against the hash of the node being inserted (h):
if ph > h (dir = -1), the new node belongs in the left subtree;
if ph < h (dir = 1), in the right subtree;
otherwise the hashes are equal, and the keys themselves are compared next.
"(kc = comparableClassFor(k)) == null" means the key's class is not usefully comparable (class C does not implement Comparable<C>); "(dir = compareComparables(kc, k, pk)) == 0" means k and pk cannot be ordered against each other. searched is a one-shot switch: the first time the comparison ties, find() is run over p's left and right subtrees to check whether a node equal to the key being inserted already exists.
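For reference, these are the two helpers involved, quoted from the JDK 8 source (not part of the original article): comparableClassFor() only returns the key's class when it implements Comparable of exactly its own type, and compareComparables() falls back to 0 whenever the other key is of a different class.

// Quoted from JDK 8 java.util.HashMap, for reference.
static Class<?> comparableClassFor(Object x) {
    if (x instanceof Comparable) {
        Class<?> c; Type[] ts, as; Type t; ParameterizedType p;
        if ((c = x.getClass()) == String.class) // bypass checks
            return c;
        if ((ts = c.getGenericInterfaces()) != null) {
            for (int i = 0; i < ts.length; ++i) {
                if (((t = ts[i]) instanceof ParameterizedType) &&
                    ((p = (ParameterizedType)t).getRawType() ==
                     Comparable.class) &&
                    (as = p.getActualTypeArguments()) != null &&
                    as.length == 1 && as[0] == c) // type arg is c
                    return c;
            }
        }
    }
    return null;
}

@SuppressWarnings({"rawtypes","unchecked"}) // for cast to Comparable
static int compareComparables(Class<?> kc, Object k, Object x) {
    return (x == null || x.getClass() != kc ? 0 :
            ((Comparable)k).compareTo(x));
}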
If everything else still ties, tieBreakOrder() falls back to "System.identityHashCode(a) <= System.identityHashCode(b)", i.e. hash codes derived from the objects' identities (roughly, their addresses in memory) are compared against each other. A truly last-ditch, leave-no-stone-unturned comparison.
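Here is tieBreakOrder() itself, quoted from the JDK 8 source for reference (not part of the original article): it first tries to order the two keys by class name and only then resorts to identity hash codes, and it never returns 0, so a definite left/right decision is always made.

// Quoted from JDK 8 java.util.HashMap, for reference.
static int tieBreakOrder(Object a, Object b) {
    int d;
    if (a == null || b == null ||
        (d = a.getClass().getName().
         compareTo(b.getClass().getName())) == 0)
        d = (System.identityHashCode(a) <= System.identityHashCode(b) ?
             -1 : 1);
    return d;
}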
The loop advances through "if ((p = (dir <= 0) ? p.left : p.right) == null)", which is also where the hard-won dir finally gets used. Descent continues until a null left/right child is reached, and the new node is inserted there (refer to the figure below for the insertion process).
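For context, this is roughly how putTreeVal() gets invoked (abridged from JDK 8 putVal(), quoted for reference): a non-null return means an existing node for the key was found and putVal() merely overwrites its value, while null means a brand-new node was linked in and rebalanced.

// Abridged from JDK 8 HashMap.putVal(), for reference: a treeified bucket
// delegates insertion to putTreeVal().
else if (p instanceof TreeNode)
    e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);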
final TreeNode<K,V> find(int h, Object k, Class<?> kc) {
    TreeNode<K,V> p = this;
    do {
        int ph, dir; K pk;
        TreeNode<K,V> pl = p.left, pr = p.right, q;
        if ((ph = p.hash) > h)
            p = pl;
        else if (ph < h)
            p = pr;
        else if ((pk = p.key) == k || (k != null && k.equals(pk)))
            return p;
        else if (pl == null)
            p = pr;
        else if (pr == null)
            p = pl;
        else if ((kc != null ||
                  (kc = comparableClassFor(k)) != null) &&
                 (dir = compareComparables(kc, k, pk)) != 0)
            p = (dir < 0) ? pl : pr;
        else if ((q = pr.find(h, k, kc)) != null)
            return q;
        else
            p = pl;
    } while (p != null);
    return null;
}
Notice that once you have worked through putTreeVal(), find() becomes very easy to read by analogy, doesn't it?
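As a pointer on where find() is used (quoted from the JDK 8 source for reference, not part of the original article): on a lookup, getNode() detects a treeified bucket and calls getTreeNode(), which simply starts find() from the root.

// Quoted from JDK 8 java.util.HashMap.TreeNode, for reference.
// getNode() does:  if (first instanceof TreeNode)
//                      return ((TreeNode<K,V>)first).getTreeNode(hash, key);
final TreeNode<K,V> getTreeNode(int h, Object k) {
    return ((parent != null) ? root() : this).find(h, k, null);
}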
static <K,V> TreeNode<K,V> balanceInsertion(TreeNode<K,V> root,
                                            TreeNode<K,V> x) {
    x.red = true;                                   // the newly inserted node starts out red
    for (TreeNode<K,V> xp, xpp, xppl, xppr;;) {
        // x is the root
        if ((xp = x.parent) == null) {
            x.red = false;
            return x;
        }
        // x's parent is black || x's parent is the root (black)
        else if (!xp.red || (xpp = xp.parent) == null)
            return root;
        if (xp == (xppl = xpp.left)) {
            // ①
            if ((xppr = xpp.right) != null && xppr.red) {
                xppr.red = false;
                xp.red = false;
                xpp.red = true;
                x = xpp;
            }
            // ②
            else {
                if (x == xp.right) {
                    root = rotateLeft(root, x = xp);
                    xpp = (xp = x.parent) == null ? null : xp.parent;
                }
                if (xp != null) {
                    xp.red = false;
                    if (xpp != null) {
                        xpp.red = true;
                        root = rotateRight(root, xpp);
                    }
                }
            }
        }
        else {
            if (xppl != null && xppl.red) {
                xppl.red = false;
                xp.red = false;
                xpp.red = true;
                x = xpp;
            }
            else {
                if (x == xp.left) {
                    root = rotateRight(root, x = xp);
                    xpp = (xp = x.parent) == null ? null : xp.parent;
                }
                if (xp != null) {
                    xp.red = false;
                    if (xpp != null) {
                        xpp.red = true;
                        root = rotateLeft(root, xpp);
                    }
                }
            }
        }
    }
}
Inserting a new node may break the red-black tree's existing "balance"; the job of balanceInsertion() is to restore that "balance" and keep the tree efficient. The red-black "balance" properties are:
① every node is either red or black;
② the root is black, the leaves (null nodes) are black, and both children of a red node are black;
③ every path from a node down to its leaves contains the same number of black nodes.
Below is a simple (admittedly rough) set of diagrams I drew for the "(xp == (xppl = xpp.left))" branch (the ① and ② labels match the comments in the source above).
In figure ②, every tree with a check mark is in fact a valid red-black tree; the box on the right of figure ② illustrates how the node relationships change under a left or right rotation.
// Left rotation and right rotation
static <K,V> TreeNode<K,V> rotateLeft(TreeNode<K,V> root,
                                      TreeNode<K,V> p) {
    TreeNode<K,V> r, pp, rl;
    if (p != null && (r = p.right) != null) {
        if ((rl = p.right = r.left) != null)
            rl.parent = p;
        if ((pp = r.parent = p.parent) == null)
            (root = r).red = false;
        else if (pp.left == p)
            pp.left = r;                  // p is pp's left child
        else
            pp.right = r;
        r.left = p;
        p.parent = r;
    }
    return root;
}

static <K,V> TreeNode<K,V> rotateRight(TreeNode<K,V> root,
                                       TreeNode<K,V> p) {
    TreeNode<K,V> l, pp, lr;
    if (p != null && (l = p.left) != null) {
        if ((lr = p.left = l.right) != null)
            lr.parent = p;
        if ((pp = l.parent = p.parent) == null)
            (root = l).red = false;
        else if (pp.right == p)
            pp.right = l;
        else
            pp.left = l;
        l.right = p;
        p.parent = l;
    }
    return root;
}
The left/right rotation process is covered in the diagrams above; I have also attached two animated GIFs found online to help with understanding.
There is also an online red-black tree insert/delete animation [click me]; anyone still unsure can try it out hands-on.
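To make the pointer rewiring concrete, here is a minimal standalone sketch (my own illustration, not HashMap itself; the Node class and method names are made up) that mirrors the rotateLeft() above on a plain binary tree:

// Standalone illustration of a left rotation, modeled on HashMap's rotateLeft().
public class RotateSketch {
    static final class Node {
        int key;
        Node left, right, parent;
        Node(int key) { this.key = key; }
    }

    // Rotate p's right child r up into p's place; p becomes r's left child.
    static Node rotateLeft(Node root, Node p) {
        Node r, pp, rl;
        if (p != null && (r = p.right) != null) {
            if ((rl = p.right = r.left) != null)   // r's left subtree becomes p's right
                rl.parent = p;
            if ((pp = r.parent = p.parent) == null)
                root = r;                          // r is the new root
            else if (pp.left == p)
                pp.left = r;
            else
                pp.right = r;
            r.left = p;
            p.parent = r;
        }
        return root;
    }

    static void printInOrder(Node n) {
        if (n == null) return;
        printInOrder(n.left);
        System.out.print(n.key + " ");
        printInOrder(n.right);
    }

    public static void main(String[] args) {
        // Build 1 -> 2 -> 3 as a right-leaning chain, then rotate at the root.
        Node a = new Node(1), b = new Node(2), c = new Node(3);
        a.right = b; b.parent = a;
        b.right = c; c.parent = b;
        Node root = rotateLeft(a, a);              // node 2 becomes the root
        System.out.println(root.key);              // 2
        printInOrder(root);                        // 1 2 3  (in-order is preserved)
    }
}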
final void treeifyBin(Node<K,V>[] tab, int hash) {
    int n, index; Node<K,V> e;
    if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
        resize();
    else if ((e = tab[index = (n - 1) & hash]) != null) {
        TreeNode<K,V> hd = null, tl = null;
        do {
            TreeNode<K,V> p = replacementTreeNode(e, null);
            if (tl == null)
                hd = p;
            else {
                p.prev = tl;
                tl.next = p;
            }
            tl = p;
        } while ((e = e.next) != null);
        if ((tab[index] = hd) != null)
            hd.treeify(tab);
    }
}
putVal() calls treeifyBin() once the count of nodes walked in a bucket's list reaches "TREEIFY_THRESHOLD - 1". The list is converted to a red-black tree only when the table capacity has reached MIN_TREEIFY_CAPACITY; otherwise the table is simply resized instead. treeify() itself works much like putTreeVal().
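A small runnable sketch of when this kicks in (my own illustration with a made-up BadKey class, not from the article or the JDK): every key below returns the same hashCode(), so all entries pile into one bucket; once that bucket's list grows past TREEIFY_THRESHOLD (8) nodes and the table capacity has reached MIN_TREEIFY_CAPACITY (64), treeifyBin() turns it into a red-black tree, and lookups in that bucket drop from O(n) to O(log n).

import java.util.HashMap;

// Illustrative sketch only: BadKey deliberately forces hash collisions.
public class TreeifyDemo {
    static final class BadKey implements Comparable<BadKey> {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; }             // every key collides
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }
        @Override public int compareTo(BadKey other) {             // lets putTreeVal() order the keys
            return Integer.compare(id, other.id);
        }
    }

    public static void main(String[] args) {
        HashMap<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 100; i++)
            map.put(new BadKey(i), i);
        // All 100 entries share one bucket; after the thresholds are crossed the
        // bucket is treeified, so this lookup walks a tree instead of a long list.
        System.out.println(map.get(new BadKey(57)));                // prints 57
    }
}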
That wraps up HashMap insertion. Corrections and questions are welcome. Time is short, see you around.
For more interesting content, feel free to visit the author's site: rebey.cn
To close, and to bring things full circle, here is the class-level comment from the top of the HashMap source (which I translated a while back, with a couple of my translator's notes kept in place), serving as a summary.
/**
 * Hash table based implementation of the <tt>Map</tt> interface.  This
 * implementation provides all of the optional map operations, and permits
 * <tt>null</tt> values and the <tt>null</tt> key.  (The <tt>HashMap</tt>
 * class is roughly equivalent to <tt>Hashtable</tt>, except that it is
 * unsynchronized and permits nulls.)  This class makes no guarantees as to
 * the order of the map; in particular, it does not guarantee that the order
 * will remain constant over time.
 *
 * (Translator's note: the order can change, for example, after a rehash.)
 *
 * <p>This implementation provides constant-time performance for the basic
 * operations (<tt>get</tt> and <tt>put</tt>), assuming the hash function
 * disperses the elements properly among the buckets.  Iteration over
 * collection views requires time proportional to the "capacity" of the
 * <tt>HashMap</tt> instance (the number of buckets) plus its size (the number
 * of key-value mappings).  Thus, it's very important not to set the initial
 * capacity too high (or the load factor too low) if iteration performance is
 * important.
 *
 * <p>An instance of <tt>HashMap</tt> has two parameters that affect its
 * performance: <i>initial capacity</i> and <i>load factor</i>.  The
 * <i>capacity</i> is the number of buckets in the hash table, and the initial
 * capacity is simply the capacity at the time the hash table is created.  The
 * <i>load factor</i> is a measure of how full the hash table is allowed to
 * get before its capacity is automatically increased.  When the number of
 * entries in the hash table exceeds the product of the load factor and the
 * current capacity, the hash table is <i>rehashed</i> (that is, internal data
 * structures are rebuilt) so that the hash table has approximately twice the
 * number of buckets.
 *
 * (Translator's note: for the notion of buckets, a diagram of HashMap's
 * internal structure is helpful.)
 *
 * <p>As a general rule, the default load factor (.75) offers a good
 * tradeoff between time and space costs.  Higher values decrease the
 * space overhead but increase the lookup cost (reflected in most of
 * the operations of the <tt>HashMap</tt> class, including
 * <tt>get</tt> and <tt>put</tt>).  The expected number of entries in
 * the map and its load factor should be taken into account when
 * setting its initial capacity, so as to minimize the number of
 * rehash operations.  If the initial capacity is greater than the
 * maximum number of entries divided by the load factor, no rehash
 * operations will ever occur.
 *
 * <p>If many mappings are to be stored in a <tt>HashMap</tt>
 * instance, creating it with a sufficiently large capacity will allow
 * the mappings to be stored more efficiently than letting it perform
 * automatic rehashing as needed to grow the table.  Note that using
 * many keys with the same {@code hashCode()} is a sure way to slow
 * down performance of any hash table.  To ameliorate impact, when keys
 * are {@link Comparable}, this class may use comparison order among
 * keys to help break ties.
 *
 * <p><strong>Note that this implementation is not synchronized.</strong>
 * If multiple threads access a hash map concurrently, and at least one of
 * the threads modifies the map structurally, it <i>must</i> be
 * synchronized externally.  (A structural modification is any operation
 * that adds or deletes one or more mappings; merely changing the value
 * associated with a key that an instance already contains is not a
 * structural modification.)  This is typically accomplished by
 * synchronizing on some object that naturally encapsulates the map.
 *
 * If no such object exists, the map should be "wrapped" using the
 * {@link Collections#synchronizedMap Collections.synchronizedMap}
 * method.  This is best done at creation time, to prevent accidental
 * unsynchronized access to the map:<pre>
 *   Map m = Collections.synchronizedMap(new HashMap(...));</pre>
 *
 * <p>The iterators returned by all of this class's "collection view methods"
 * are <i>fail-fast</i>: if the map is structurally modified at any time after
 * the iterator is created, in any way except through the iterator's own
 * <tt>remove</tt> method, the iterator will throw a
 * {@link ConcurrentModificationException}.  Thus, in the face of concurrent
 * modification, the iterator fails quickly and cleanly, rather than risking
 * arbitrary, non-deterministic behavior at an undetermined time in the
 * future.
 *
 * <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
 * as it is, generally speaking, impossible to make any hard guarantees in the
 * presence of unsynchronized concurrent modification.  Fail-fast iterators
 * throw <tt>ConcurrentModificationException</tt> on a best-effort basis.
 * Therefore, it would be wrong to write a program that depended on this
 * exception for its correctness: <i>the fail-fast behavior of iterators
 * should be used only to detect bugs.</i>
 *
 * <p>This class is a member of the
 * <a href="{@docRoot}/../technotes/guides/collections/index.html">
 * Java Collections Framework</a>.
 *
 * @param <K> the type of keys maintained by this map
 * @param <V> the type of mapped values
 *
 * @author  Doug Lea
 * @author  Josh Bloch
 * @author  Arthur van Hoff
 * @author  Neal Gafter
 * @see     Object#hashCode()
 * @see     Collection
 * @see     Map
 * @see     TreeMap
 * @see     Hashtable
 * @since   1.2
 */