First, the class-level Javadoc of HashMap in JDK 1.8, together with my own translated notes.
```java
/**
 * Hash table based implementation of the <tt>Map</tt> interface.  This
 * implementation provides all of the optional map operations, and permits
 * <tt>null</tt> values and the <tt>null</tt> key.  (The <tt>HashMap</tt>
 * class is roughly equivalent to <tt>Hashtable</tt>, except that it is
 * unsynchronized and permits nulls.)  This class makes no guarantees as to
 * the order of the map; in particular, it does not guarantee that the order
 * will remain constant over time.
 *
 * <p>This implementation provides constant-time performance for the basic
 * operations (<tt>get</tt> and <tt>put</tt>), assuming the hash function
 * disperses the elements properly among the buckets.  Iteration over
 * collection views requires time proportional to the "capacity" of the
 * <tt>HashMap</tt> instance (the number of buckets) plus its size (the number
 * of key-value mappings).  Thus, it's very important not to set the initial
 * capacity too high (or the load factor too low) if iteration performance is
 * important.
 *
 * <p>An instance of <tt>HashMap</tt> has two parameters that affect its
 * performance: <i>initial capacity</i> and <i>load factor</i>.  The
 * <i>capacity</i> is the number of buckets in the hash table, and the initial
 * capacity is simply the capacity at the time the hash table is created.  The
 * <i>load factor</i> is a measure of how full the hash table is allowed to
 * get before its capacity is automatically increased.  When the number of
 * entries in the hash table exceeds the product of the load factor and the
 * current capacity, the hash table is <i>rehashed</i> (that is, internal data
 * structures are rebuilt) so that the hash table has approximately twice the
 * number of buckets.
 *
 * <p>As a general rule, the default load factor (.75) offers a good
 * tradeoff between time and space costs.  Higher values decrease the
 * space overhead but increase the lookup cost (reflected in most of
 * the operations of the <tt>HashMap</tt> class, including
 * <tt>get</tt> and <tt>put</tt>).  The expected number of entries in
 * the map and its load factor should be taken into account when
 * setting its initial capacity, so as to minimize the number of
 * rehash operations.  If the initial capacity is greater than the
 * maximum number of entries divided by the load factor, no rehash
 * operations will ever occur.
 *
 * <p>If many mappings are to be stored in a <tt>HashMap</tt>
 * instance, creating it with a sufficiently large capacity will allow
 * the mappings to be stored more efficiently than letting it perform
 * automatic rehashing as needed to grow the table.  Note that using
 * many keys with the same {@code hashCode()} is a sure way to slow
 * down performance of any hash table.  To ameliorate impact, when keys
 * are {@link Comparable}, this class may use comparison order among
 * keys to help break ties.
 *
 * <p><strong>Note that this implementation is not synchronized.</strong>
 * If multiple threads access a hash map concurrently, and at least one of
 * the threads modifies the map structurally, it <i>must</i> be
 * synchronized externally.  (A structural modification is any operation
 * that adds or deletes one or more mappings; merely changing the value
 * associated with a key that an instance already contains is not a
 * structural modification.)  This is typically accomplished by
 * synchronizing on some object that naturally encapsulates the map.
 *
 * If no such object exists, the map should be "wrapped" using the
 * {@link Collections#synchronizedMap Collections.synchronizedMap}
 * method.  This is best done at creation time, to prevent accidental
 * unsynchronized access to the map:<pre>
 *   Map m = Collections.synchronizedMap(new HashMap(...));</pre>
 *
 * <p>The iterators returned by all of this class's "collection view methods"
 * are <i>fail-fast</i>: if the map is structurally modified at any time after
 * the iterator is created, in any way except through the iterator's own
 * <tt>remove</tt> method, the iterator will throw a
 * {@link ConcurrentModificationException}.  Thus, in the face of concurrent
 * modification, the iterator fails quickly and cleanly, rather than risking
 * arbitrary, non-deterministic behavior at an undetermined time in the
 * future.
 *
 * <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
 * as it is, generally speaking, impossible to make any hard guarantees in the
 * presence of unsynchronized concurrent modification.  Fail-fast iterators
 * throw <tt>ConcurrentModificationException</tt> on a best-effort basis.
 * Therefore, it would be wrong to write a program that depended on this
 * exception for its correctness: <i>the fail-fast behavior of iterators
 * should be used only to detect bugs.</i>
 *
 * <p>This class is a member of the
 * <a href="{@docRoot}/../technotes/guides/collections/index.html">
 * Java Collections Framework</a>.
 *
 * @param <K> the type of keys maintained by this map
 * @param <V> the type of mapped values
 *
 * @author  Doug Lea
 * @author  Josh Bloch
 * @author  Arthur van Hoff
 * @author  Neal Gafter
 * @see     Object#hashCode()
 * @see     Collection
 * @see     Map
 * @see     TreeMap
 * @see     Hashtable
 * @since   1.2
 */
public class HashMap<K,V> extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable
```

The key points, translated:

- HashMap is a hash-table implementation of the Map interface. It permits null keys and null values, and is roughly a Hashtable that is unsynchronized and allows nulls. It makes no guarantee about iteration order, nor that the order stays stable over time.
- Assuming the hash function spreads elements well across the buckets, get and put run in O(1). Iterating takes time proportional to the capacity (number of buckets) plus the size, so if iteration performance matters, don't set the initial capacity too high (or the load factor too low).
- Two parameters affect performance: initial capacity and load factor. The capacity is the number of buckets, not the number of entries; the initial capacity is simply the capacity at creation time. When the entry count exceeds capacity × loadFactor, the table is rehashed to roughly twice as many buckets. (The three HashMap constructors help illustrate this; see my post "HashMap constructors".)
- The default load factor of 0.75 is a good time/space tradeoff. Higher values save space but make lookups more expensive (reflected in most operations, including get and put). Take the expected entry count and the load factor into account when choosing the initial capacity, to minimize rehashing; if the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash ever occurs. (This is similar to ArrayList's ensureCapacity(int).)
- When storing many mappings, creating the map with a sufficiently large capacity up front is more efficient than letting it rehash automatically as it grows. Many keys sharing the same hashCode() is a sure way to slow any hash table down; to soften the impact, when keys are Comparable this class may use their comparison order to break ties. (Note: HashMap allows re-putting an existing key; the old value is simply overwritten.)
- This implementation is not synchronized. If multiple threads access the map and at least one modifies its structure, it must be synchronized externally, typically by synchronizing on an object that encapsulates the map, or by wrapping it with Collections.synchronizedMap(new HashMap<>(...)) at creation time.
- The iterators over its collection views are fail-fast: a structural modification after iterator creation (other than through the iterator's own remove) throws ConcurrentModificationException (see my post "Iterator fail-fast"). Fail-fast is only a best-effort error-detection mechanism; the JDK does not guarantee it, so never rely on this exception for correctness.
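A quick sketch of the first point, that HashMap accepts a null key and null values while Hashtable rejects them:

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put(null, "value-for-null-key");   // allowed: at most one null key
        map.put("k", null);                    // allowed: null values
        System.out.println(map.get(null));     // prints "value-for-null-key"

        try {
            new Hashtable<String, String>().put(null, "x"); // rejected
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejects null keys");
        }
    }
}
```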
Initialize a HashMap<String, String> and store some test data:
```java
Map<String, String> map = new HashMap<>();
map.put("What", "chenyz");
map.put("You", "chenyz");
map.put("Don't", "chenyz");
map.put("Know", "chenyz");
map.put("About", "chenyz");
map.put("Geo", "chenyz");
map.put("APIs", "chenyz");
map.put("Can't", "chenyz");
map.put("Hurt", "chenyz");
map.put("you", "chenyz");
map.put("google", "chenyz");
map.put("map", "chenyz");
map.put("hello", "chenyz");
// hash(key) for each entry:
// What   --> hash: 8
// You    --> hash: 3
// Don't  --> hash: 7
// Know   --> hash: 13
// About  --> hash: 11
// Geo    --> hash: 12
// APIs   --> hash: 1
// Can't  --> hash: 7
// Hurt   --> hash: 1
// you    --> hash: 10
// google --> hash: 3
// map    --> hash: 8
// hello  --> hash: 0
```
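Bucket positions like these can be reproduced with the same computation HashMap itself uses: (n - 1) & hash(key), for the default table length of 16. A sketch (the exact indices depend on the JDK's String.hashCode, so this prints them rather than hard-coding them):

```java
import java.util.Arrays;
import java.util.List;

public class BucketIndexDemo {
    // Replica of JDK 8's HashMap.hash(): spreads the high 16 bits downward.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        List<String> keys = Arrays.asList("What", "You", "Don't", "Know",
                "About", "Geo", "APIs", "Can't", "Hurt", "you", "google",
                "map", "hello");
        int n = 16; // default table length, always a power of two
        for (String key : keys) {
            // Same index computation as in putVal/getNode: (n - 1) & hash
            System.out.println(key + " --> " + ((n - 1) & hash(key)));
        }
    }
}
```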
At this point the HashMap's internals look like Figure 1:
In Java, the most fundamental data structures are the array and the linked list, and HashMap combines the two, which is why this data structure is often called a "chained hash" (an array of linked lists). Whenever a HashMap is created, an array is initialized.
The source defines this array as follows:
```java
/**
 * The table, initialized on first use, and resized as
 * necessary. When allocated, length is always a power of two.
 * (We also tolerate length zero in some operations to allow
 * bootstrapping mechanics that are currently not needed.)
 */
transient Node<K,V>[] table;

static class Node<K,V> implements Map.Entry<K,V> {
    final int hash;
    final K key;
    V value;
    Node<K,V> next;

    Node(int hash, K key, V value, Node<K,V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }
}
```
So what the array stores is a Map.Entry<K,V> (here, Node), made up of hash, key, value, and next. next is a reference to the next element, and it is these references that form the linked list.
The source of put() is as follows:
```java
/**
 * Associates the specified value with the specified key in this map.
 * If the map previously contained a mapping for the key, the old
 * value is replaced.
 *
 * @param key key with which the specified value is to be associated
 * @param value value to be associated with the specified key
 * @return the previous value associated with <tt>key</tt>, or
 *         <tt>null</tt> if there was no mapping for <tt>key</tt>.
 *         (A <tt>null</tt> return can also indicate that the map
 *         previously associated <tt>null</tt> with <tt>key</tt>.)
 */
public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}

/**
 * Implements Map.put and related methods
 *
 * @param hash hash for key
 * @param key the key
 * @param value the value to put
 * @param onlyIfAbsent if true, don't change existing value
 * @param evict if false, the table is in creation mode.
 * @return previous value, or null if none
 */
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
    else {
        Node<K,V> e; K k;
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            e = p;
        else if (p instanceof TreeNode)
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        else {
            for (int binCount = 0; ; ++binCount) {
                if ((e = p.next) == null) {
                    p.next = newNode(hash, key, value, null);
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        treeifyBin(tab, hash);
                    break;
                }
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                p = e;
            }
        }
        if (e != null) { // existing mapping for key
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
    }
    ++modCount;
    if (++size > threshold)
        resize();
    afterNodeInsertion(evict);
    return null;
}
```
It is not hard to follow: hash(key) first computes the key's hash value, and (n - 1) & hash determines which array slot the element is stored in. If that slot is already occupied, the entries are chained there as a linked list; note that in JDK 1.8 the new node is appended at the tail of the chain (in JDK 1.7 it was inserted at the head). In Figure 1, the slot for hash value 1 stores two values as a linked list in exactly this way.
Every insertion also increments the modCount field, which is used to detect whether the HashMap's structure changed while it is being iterated.
Insertion may also trigger resize() to grow the table; see "HashMap load factor" for details.
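As the Javadoc of put() above states, put returns the previous value for the key, or null if there was none; re-putting an existing key simply overwrites the value without adding an entry. A quick sketch:

```java
import java.util.HashMap;
import java.util.Map;

public class PutReturnDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        // First insertion: no previous mapping, so put returns null.
        System.out.println(map.put("Can't", "v1")); // prints "null"
        // Same key again: the value is overwritten and the old one returned.
        System.out.println(map.put("Can't", "v2")); // prints "v1"
        System.out.println(map.get("Can't"));       // prints "v2"
        System.out.println(map.size());             // prints "1" - still one entry
    }
}
```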
The source of get() is as follows:
```java
public V get(Object key) {
    Node<K,V> e;
    return (e = getNode(hash(key), key)) == null ? null : e.value;
}

/**
 * Implements Map.get and related methods
 *
 * @param hash hash for key
 * @param key the key
 * @return the node, or null if none
 */
final Node<K,V> getNode(int hash, Object key) {
    Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (first = tab[(n - 1) & hash]) != null) {
        if (first.hash == hash && // always check first node
            ((k = first.key) == key || (key != null && key.equals(k))))
            return first;
        if ((e = first.next) != null) {
            if (first instanceof TreeNode)
                return ((TreeNode<K,V>)first).getTreeNode(hash, key);
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    return null;
}
```
First the key's hash value is computed and the corresponding array index located; then the entries in that slot's linked list are matched against the key with equals(), which completes the get operation. This is also why hashCode() and equals() must always be overridden as a pair. It also shows where get() can hit a bottleneck: if a single slot holds a large number of elements, they are stored as a linked list and must be compared one by one until the matching value is found.
So when the elements of a HashMap are evenly distributed, with each slot's list holding only a single element, the HashMap operates at peak efficiency.
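To see why distribution matters (and why hashCode() and equals() go together), here is a hypothetical key class whose hashCode() is a constant, forcing every entry into one bucket; lookups still work via equals(), they just have to walk the entire chain:

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Hypothetical worst-case key: every instance hashes to the same bucket.
    static final class BadKey {
        final String name;
        BadKey(String name) { this.name = name; }

        @Override public int hashCode() { return 42; } // constant: all collide
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).name.equals(name);
        }
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 100; i++) {
            map.put(new BadKey("key-" + i), i);
        }
        // All 100 entries share one bucket, so this get() must compare
        // keys with equals() until it finds the match.
        System.out.println(map.get(new BadKey("key-57"))); // prints "57"
    }
}
```

Note that in JDK 1.8 such an overlong chain is converted into a red-black tree once it exceeds TREEIFY_THRESHOLD, as the putVal source above shows, so the worst case degrades to O(log n) rather than O(n).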
When creating a HashMap there are two initialization parameters, initialCapacity and loadFactor: the initial capacity and the load factor. How these two are set determines whether the HashMap's structure stays reasonable; see "HashMap load factor" for details.
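A sketch of how the two parameters interact, using the default values: with a capacity of 16 and a load factor of 0.75, the resize threshold is 16 × 0.75 = 12, and the class Javadoc's rule (initial capacity greater than expected entries divided by load factor) lets you pre-size a map so it never rehashes:

```java
import java.util.HashMap;
import java.util.Map;

public class CapacityDemo {
    public static void main(String[] args) {
        // Default: capacity 16, load factor 0.75 -> resize after 12 entries.
        System.out.println((int) (16 * 0.75f)); // prints "12"

        // To store ~1000 entries with no rehash, follow the Javadoc's rule:
        // initial capacity > expected entries / load factor.
        int expected = 1000;
        int initialCapacity = (int) (expected / 0.75f) + 1; // 1334
        Map<String, String> map = new HashMap<>(initialCapacity);
        for (int i = 0; i < expected; i++) {
            map.put("key-" + i, "value-" + i);
        }
        System.out.println(map.size()); // prints "1000"
    }
}
```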
This next method is the very core of HashMap: both get() and put() rely on it to locate entries, that is, to decide where an element is stored, and it determines whether elements end up reasonably distributed.
```java
/**
 * Computes key.hashCode() and spreads (XORs) higher bits of hash
 * to lower.  Because the table uses power-of-two masking, sets of
 * hashes that vary only in bits above the current mask will
 * always collide. (Among known examples are sets of Float keys
 * holding consecutive whole numbers in small tables.)  So we
 * apply a transform that spreads the impact of higher bits
 * downward. There is a tradeoff between speed, utility, and
 * quality of bit-spreading.  Because many common sets of hashes
 * are already reasonably distributed (so don't benefit from
 * spreading), and because we use trees to handle large sets of
 * collisions in bins, we just XOR some shifted bits in the
 * cheapest possible way to reduce systematic lossage, as well as
 * to incorporate impact of the highest bits that would otherwise
 * never be used in index calculations because of table bounds.
 */
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
```
This differs from the hash computation in earlier JDKs. The comment above explains the idea: since the table index is taken with a power-of-two mask, hash codes that differ only in their high bits would always collide, and XORing the high 16 bits into the low 16 is the cheapest way to mix them in. I'm still studying the finer details.
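The effect of the XOR becomes visible with two hypothetical hash codes that differ only in their high bits: without spreading they land in the same bucket of a 16-slot table, with spreading they don't:

```java
public class HashSpreadDemo {
    // Replica of JDK 8's HashMap.hash(): XOR the high 16 bits into the low 16.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16;       // table length (power of two)
        int h1 = 0x10000; // two hash codes that differ only
        int h2 = 0x20000; // above bit 15

        // Without spreading, the (n - 1) mask ignores the high bits entirely:
        System.out.println(((n - 1) & h1) + " " + ((n - 1) & h2)); // prints "0 0"

        // With spreading, the high bits influence the index:
        System.out.println(((n - 1) & (h1 ^ (h1 >>> 16))) + " "
                + ((n - 1) & (h2 ^ (h2 >>> 16))));                 // prints "1 2"
    }
}
```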
HashMap is not thread-synchronized and adopts the fail-fast mechanism: under multiple threads, if one thread modifies the HashMap's structure while an iterator is in use, a java.util.ConcurrentModificationException is thrown.
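A minimal sketch of the fail-fast behavior; a single thread is enough to trigger it, since the iterator compares modCount against the count recorded when it was created:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("a", "1");
        map.put("b", "2");

        try {
            for (String key : map.keySet()) {
                // A structural modification during iteration bumps modCount,
                // so the iterator's next() detects the mismatch and throws.
                map.put("c", "3");
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast triggered"); // prints "fail-fast triggered"
        }
    }
}
```

By contrast, overwriting an existing key's value during iteration is not a structural modification, so it would not trigger the exception.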
The sea of learning has no shore, and studying can be a grind. I've been at this for two days and there is still a lot I don't understand. No giving up; this post will keep being updated.
Thanks to the colleagues below for sharing.
References: