[Key point, will be on the exam] Data Structures and Algorithms Basics -- HashMap

HashMap is arguably one of the most common collections in Java.

Before diving into HashMap, we first need to understand two methods on Object: equals() and hashCode().

First, let's look at how they are declared in the Object source code:

hashCode():

/**
     * Returns a hash code value for the object. This method is
     * supported for the benefit of hash tables such as those provided by
     * {@link java.util.HashMap}.
     * <p>
     * The general contract of {@code hashCode} is:
     * <ul>
     * <li>Whenever it is invoked on the same object more than once during
     *     an execution of a Java application, the {@code hashCode} method
     *     must consistently return the same integer, provided no information
     *     used in {@code equals} comparisons on the object is modified.
     *     This integer need not remain consistent from one execution of an
     *     application to another execution of the same application.
     * <li>If two objects are equal according to the {@code equals(Object)}
     *     method, then calling the {@code hashCode} method on each of
     *     the two objects must produce the same integer result.
     * <li>It is <em>not</em> required that if two objects are unequal
     *     according to the {@link java.lang.Object#equals(java.lang.Object)}
     *     method, then calling the {@code hashCode} method on each of the
     *     two objects must produce distinct integer results.  However, the
     *     programmer should be aware that producing distinct integer results
     *     for unequal objects may improve the performance of hash tables.
     * </ul>
     * <p>
     * As much as is reasonably practical, the hashCode method defined by
     * class {@code Object} does return distinct integers for distinct
     * objects. (This is typically implemented by converting the internal
     * address of the object into an integer, but this implementation
     * technique is not required by the
     * Java&trade; programming language.)
     *
     * @return  a hash code value for this object.
     * @see     java.lang.Object#equals(java.lang.Object)
     * @see     java.lang.System#identityHashCode
     */
    public native int hashCode();

Note that this method has no Java implementation: it is declared native, so the JVM supplies it. Pay attention to this sentence in the javadoc above:

but this implementation technique is not required by the Java&trade; programming language.
We don't need to know exactly how hashCode runs internally; what we need to know is that it returns an int hash code specific to the object.
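As a small illustration of the contract described in the javadoc above (the string values used here are arbitrary):

```java
public class HashCodeContractDemo {
    public static void main(String[] args) {
        // Two distinct String instances with equal content
        String a = new String("hashmap");
        String b = new String("hashmap");

        System.out.println(a == b);                       // false: different objects
        System.out.println(a.equals(b));                  // true: equal content
        System.out.println(a.hashCode() == b.hashCode()); // true: required by the contract

        // Object's default equals() is identity-based
        Object x = new Object();
        System.out.println(x.equals(new Object()));       // false: distinct objects
    }
}
```

Equal objects must report equal hash codes; unequal objects may or may not collide.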
equals():
/**
     * Indicates whether some other object is "equal to" this one.
     * <p>
     * The {@code equals} method implements an equivalence relation
     * on non-null object references:
     * <ul>
     * <li>It is <i>reflexive</i>: for any non-null reference value
     *     {@code x}, {@code x.equals(x)} should return
     *     {@code true}.
     * <li>It is <i>symmetric</i>: for any non-null reference values
     *     {@code x} and {@code y}, {@code x.equals(y)}
     *     should return {@code true} if and only if
     *     {@code y.equals(x)} returns {@code true}.
     * <li>It is <i>transitive</i>: for any non-null reference values
     *     {@code x}, {@code y}, and {@code z}, if
     *     {@code x.equals(y)} returns {@code true} and
     *     {@code y.equals(z)} returns {@code true}, then
     *     {@code x.equals(z)} should return {@code true}.
     * <li>It is <i>consistent</i>: for any non-null reference values
     *     {@code x} and {@code y}, multiple invocations of
     *     {@code x.equals(y)} consistently return {@code true}
     *     or consistently return {@code false}, provided no
     *     information used in {@code equals} comparisons on the
     *     objects is modified.
     * <li>For any non-null reference value {@code x},
     *     {@code x.equals(null)} should return {@code false}.
     * </ul>
     * <p>
     * The {@code equals} method for class {@code Object} implements
     * the most discriminating possible equivalence relation on objects;
     * that is, for any non-null reference values {@code x} and
     * {@code y}, this method returns {@code true} if and only
     * if {@code x} and {@code y} refer to the same object
     * ({@code x == y} has the value {@code true}).
     * <p>
     * Note that it is generally necessary to override the {@code hashCode}
     * method whenever this method is overridden, so as to maintain the
     * general contract for the {@code hashCode} method, which states
     * that equal objects must have equal hash codes.
     *
     * @param   obj   the reference object with which to compare.
     * @return  {@code true} if this object is the same as the obj
     *          argument; {@code false} otherwise.
     * @see     #hashCode()
     * @see     java.util.HashMap
     */
    public boolean equals(Object obj) {
        return (this == obj);
    }

Here I have included all the relevant javadoc from the JDK source, in the hope that it helps with understanding in certain places.

 

Of course, we can override both of these methods; but if we do, we must preserve their contract (equal objects must have equal hash codes), otherwise HashMap's behavior breaks and its performance can suffer badly.
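A minimal sketch of what goes wrong when the contract is broken. BadKey is a hypothetical class that overrides equals() but not hashCode(), so two "equal" keys almost certainly produce different hash codes, land in different buckets, and the lookup misses:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical key class: overrides equals but NOT hashCode,
// violating the contract described above.
class BadKey {
    final int id;
    BadKey(int id) { this.id = id; }
    @Override public boolean equals(Object o) {
        return o instanceof BadKey && ((BadKey) o).id == this.id;
    }
    // hashCode() deliberately not overridden, so it stays identity-based
}

public class BrokenLookupDemo {
    public static void main(String[] args) {
        Map<BadKey, String> map = new HashMap<>();
        map.put(new BadKey(1), "value");
        // An "equal" key, but almost certainly a different identity hash,
        // so HashMap probes the wrong bucket and misses.
        System.out.println(map.get(new BadKey(1))); // null (with overwhelming probability)
    }
}
```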

 

1) HashMap overview

HashMap is a non-synchronized, hash-table-based implementation of the Map interface. It provides all of the optional map operations and permits null values and the null key. The class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time.

 

2) HashMap's data structure

In Java, there are two most basic data structures: arrays and references; every other data structure can be built from these two. Up to JDK 1.7, HashMap was a "linked-list hash" structure, that is, an array of linked lists. Since JDK 1.8, once a bucket's list grows past a certain length, it is converted into a red-black tree. The red-black tree concept was explained in the previous article.

Java's HashMap uses separate chaining: each array slot holds a linked list. When a key is hashed, we obtain an array index and place the entry on the list at that index.

Each element is represented by a Node:

static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;
        final K key;
        V value;
        Node<K,V> next;

        Node(int hash, K key, V value, Node<K,V> next) {
            this.hash = hash;
            this.key = key;
            this.value = value;
            this.next = next;
        }
}

Node is an inner class of HashMap used to store the data and maintain the list structure. It is essentially a mapping (a key-value pair).

Of course, two keys can end up at the same position (mostly because of how the index is derived from the hash, though occasionally because two keys produce the same hash value). This is called a hash collision. The more uniformly a hash function spreads its results, the lower the collision probability and the more efficient the map's storage.

Another very important field in HashMap is Node[] table. As shown in the figure above, this is HashMap's basic structure: the array whose slots hold linked lists.

If the bucket array is large, even a poor hash function will spread entries out; if it is small, even a good hash function will produce many collisions. So there is a trade-off between space cost and time cost: choose an appropriate bucket array size for the situation, and on top of that design a good hash function to reduce collisions. How, then, does HashMap keep the collision probability low while keeping the bucket array (Node[] table) small? The answer: a good hash function and a good resizing mechanism.

Before that, let's look at some extremely important HashMap parameters. From the source code:

/**
     * The default initial capacity - MUST be a power of two.
     */
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

    /**
     * The maximum capacity, used if a higher value is implicitly specified
     * by either of the constructors with arguments.
     * MUST be a power of two <= 1<<30.
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * The load factor used when none specified in constructor.
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * The bin count threshold for using a tree rather than list for a
     * bin.  Bins are converted to trees when adding an element to a
     * bin with at least this many nodes. The value must be greater
     * than 2 and should be at least 8 to mesh with assumptions in
     * tree removal about conversion back to plain bins upon
     * shrinkage.
     */
    static final int TREEIFY_THRESHOLD = 8;

    /**
     * The bin count threshold for untreeifying a (split) bin during a
     * resize operation. Should be less than TREEIFY_THRESHOLD, and at
     * most 6 to mesh with shrinkage detection under removal.
     */
    static final int UNTREEIFY_THRESHOLD = 6;

    /**
     * The smallest table capacity for which bins may be treeified.
     * (Otherwise the table is resized if too many nodes in a bin.)
     * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
     * between resizing and treeification thresholds.
     */
    static final int MIN_TREEIFY_CAPACITY = 64;
    
    /**
     * The number of key-value mappings contained in this map.
     */
    transient int size;

    /**
     * The number of times this HashMap has been structurally modified
     * Structural modifications are those that change the number of mappings in
     * the HashMap or otherwise modify its internal structure (e.g.,
     * rehash).  This field is used to make iterators on Collection-views of
     * the HashMap fail-fast.  (See ConcurrentModificationException).
     */
    transient int modCount;

    /**
     * The next size value at which to resize (capacity * load factor).
     *
     * @serial
     */
    // (The javadoc description is true upon serialization.
    // Additionally, if the table array has not been allocated, this
    // field holds the initial array capacity, or zero signifying
    // DEFAULT_INITIAL_CAPACITY.)
    int threshold;

    /**
     * The load factor for the hash table.
     *
     * @serial
     */
    final float loadFactor;
The parameters above are extremely important: as important as HashMap's data structure itself. In this article we will use, and focus on, the following:
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4;
static final float DEFAULT_LOAD_FACTOR = 0.75f;
transient int size;
transient int modCount;
int threshold;
final float loadFactor;

First, note that Node[] table's default length is 16 and loadFactor defaults to 0.75. threshold is the maximum number of Nodes the HashMap can hold before resizing: threshold = capacity * loadFactor, so 16 * 0.75 = 12 by default. Once the number of elements exceeds threshold, the map is resized, and after resizing the capacity is double what it was before. Do not change 0.75 lightly; only in special time/space situations. If memory is plentiful and time efficiency matters most, you can lower the load factor; conversely, if memory is tight and time efficiency matters less, you can raise it, and the value may even exceed 1.
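A quick sketch of the arithmetic, using the default constants from the source above; each resize doubles the capacity, and the threshold scales with it:

```java
public class ThresholdDemo {
    public static void main(String[] args) {
        float loadFactor = 0.75f;  // DEFAULT_LOAD_FACTOR
        int capacity = 16;         // DEFAULT_INITIAL_CAPACITY (1 << 4)

        // threshold = capacity * loadFactor; resize when size exceeds it
        for (int i = 0; i < 4; i++) {
            int threshold = (int) (capacity * loadFactor);
            System.out.println("capacity=" + capacity + " threshold=" + threshold);
            capacity <<= 1;        // each resize doubles the capacity
        }
        // prints capacity=16 threshold=12, then 32/24, 64/48, 128/96
    }
}
```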

size is the number of Nodes actually present in the HashMap. modCount is the number of times the HashMap's structure has been modified. I explained this when covering iterators earlier; note that in HashMap, modCount counts only structural changes, such as adding a new Node. Replacing an existing Node's value leaves modCount unchanged, because that is not a structural change.
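A small sketch of that distinction: replacing a value during iteration is fine, but adding a new mapping bumps modCount and makes the iterator fail fast:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        // Replacing a value is NOT a structural change: modCount unchanged
        for (String key : map.keySet()) {
            map.put("a", 100);   // same key, value replaced, no exception
        }

        // Adding a new mapping IS structural: modCount bumps, iterator fails fast
        try {
            for (String key : map.keySet()) {
                map.put("c", 3); // structural modification during iteration
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast triggered");
        }
    }
}
```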

For the curious: in HashMap, the length of the bucket array table must be a power of two (hence always a composite number). This is an unconventional design; the conventional design makes the bucket count a prime, since primes lead to somewhat fewer collisions than composites (for a proof sketch, see http://blog.csdn.net/liuqiyao_01/article/details/14475159). Hashtable's initial bucket count of 11 is an example of the prime design (although after resizing, Hashtable's capacity is not guaranteed to stay prime). HashMap adopts the power-of-two design mainly to optimize the modulo operation and resizing; to reduce collisions, it also mixes the high bits into the bucket index computation.

 

3) Determining the bucket index

The code:

/**
     * Computes key.hashCode() and spreads (XORs) higher bits of hash
     * to lower.  Because the table uses power-of-two masking, sets of
     * hashes that vary only in bits above the current mask will
     * always collide. (Among known examples are sets of Float keys
     * holding consecutive whole numbers in small tables.)  So we
     * apply a transform that spreads the impact of higher bits
     * downward. There is a tradeoff between speed, utility, and
     * quality of bit-spreading. Because many common sets of hashes
     * are already reasonably distributed (so don't benefit from
     * spreading), and because we use trees to handle large sets of
     * collisions in bins, we just XOR some shifted bits in the
     * cheapest possible way to reduce systematic lossage, as well as
     * to incorporate impact of the highest bits that would otherwise
     * never be used in index calculations because of table bounds.
     */
    static final int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

The hash algorithm here is essentially three steps: take the key's hashCode value, mix in the high bits, then reduce modulo the table length.

For any given object, as long as its hashCode() return value is the same, the hash computed by the method above is always the same. The first idea that comes to mind is to take the hash modulo the array length, which does spread elements fairly evenly; but the modulo operation is relatively expensive. HashMap instead computes h & (table.length - 1) to decide at which index of the table array the object should be stored.

This trick is very clever: it obtains the storage slot via h & (table.length - 1), and since HashMap's underlying array length is always a power of two, this is a speed optimization. When length is a power of two, h & (length - 1) is equivalent to taking h modulo length, i.e. h % length, but & is faster than %.
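A quick check of that equivalence for a power-of-two length (note that for negative h, Java's % can return a negative remainder while the mask never does, which is one more point in the mask's favor):

```java
import java.util.Random;

public class IndexEquivalenceDemo {
    public static void main(String[] args) {
        int length = 16;                 // table length: always a power of two
        Random rnd = new Random(42);
        for (int i = 0; i < 5; i++) {
            int h = rnd.nextInt(Integer.MAX_VALUE); // non-negative hash
            int byMask = h & (length - 1);
            int byMod  = h % length;
            // For power-of-two lengths the two agree (for non-negative h)
            System.out.println(h + " -> mask=" + byMask + " mod=" + byMod);
        }
    }
}
```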

The JDK 1.8 implementation refines the high-bit mixing: it XORs the high 16 bits of hashCode() into the low 16 bits, via (h = key.hashCode()) ^ (h >>> 16). This balances speed, utility, and quality: even when the table length is small, both the high and low bits take part in the hash computation, at very little cost.

Let's walk through an example:

Roughly, that is the flow for obtaining the index, as illustrated above.
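The three steps can be sketched concretely. The spreading function below mirrors HashMap.hash() from the source above; the key "hello" and table length 16 are arbitrary choices for illustration:

```java
public class HashIndexDemo {
    // Same spreading function as HashMap.hash()
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        String key = "hello";
        int n = 16;                       // default table length

        int h = key.hashCode();           // step 1: raw hashCode
        int spread = h ^ (h >>> 16);      // step 2: XOR high bits into low bits
        int index = spread & (n - 1);     // step 3: mask = modulo for powers of two

        System.out.println("hashCode = " + h);
        System.out.println("spread   = " + spread);
        System.out.println("index    = " + index);
        // A null key always lands in bucket 0
        System.out.println("null index = " + (hash(null) & (n - 1))); // 0
    }
}
```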

 

4) HashMap's put implementation (key point, will be on the exam):

The rough flow of put is:

  1. Hash the key's hashCode(), then compute the index;
  2. If there is no collision, put the entry straight into the bucket;
  3. If there is a collision, append it to the bucket's linked list;
  4. If collisions make a list too long (length >= TREEIFY_THRESHOLD), convert the list into a red-black tree;
  5. If the key already exists, replace the old value (keys stay unique);
  6. If the map is too full (size exceeds loadFactor * current capacity), resize.

The code:

public V put(K key, V value) {
        return putVal(hash(key), key, value, false, true);
    }

    /**
     * Implements Map.put and related methods
     *
     * @param hash hash for key
     * @param key the key
     * @param value the value to put
     * @param onlyIfAbsent if true, don't change existing value
     * @param evict if false, the table is in creation mode.
     * @return previous value, or null if none
     */
    final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
                   boolean evict) {
        Node<K,V>[] tab; Node<K,V> p; int n, i;
        if ((tab = table) == null || (n = tab.length) == 0)
            n = (tab = resize()).length;
        if ((p = tab[i = (n - 1) & hash]) == null)
            tab[i] = newNode(hash, key, value, null);
        else {
            Node<K,V> e; K k;
            if (p.hash == hash &&
                ((k = p.key) == key || (key != null && key.equals(k))))
                e = p;
            else if (p instanceof TreeNode)
                e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
            else {
                for (int binCount = 0; ; ++binCount) {
                    if ((e = p.next) == null) {
                        p.next = newNode(hash, key, value, null);
                        if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                            treeifyBin(tab, hash);
                        break;
                    }
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        break;
                    p = e;
                }
            }
            if (e != null) { // existing mapping for key
                V oldValue = e.value;
                if (!onlyIfAbsent || oldValue == null)
                    e.value = value;
                afterNodeAccess(e);
                return oldValue;
            }
        }
        ++modCount;
        if (++size > threshold)
            resize();
        afterNodeInsertion(evict);
        return null;
    }
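From the caller's side, putVal's behavior shows up as follows: put returns the previous value (or null if there was none), replacing a value does not grow the map, and both null keys and null values are allowed (a null key always hashes to bucket 0):

```java
import java.util.HashMap;
import java.util.Map;

public class PutDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();

        System.out.println(map.put("k", 1)); // null: no previous mapping
        System.out.println(map.put("k", 2)); // 1: old value returned, then replaced
        System.out.println(map.get("k"));    // 2

        // A null key is permitted
        map.put(null, 42);
        System.out.println(map.get(null));   // 42
        System.out.println(map.size());      // 2: replacing a value didn't grow the map
    }
}
```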

5) HashMap's get method:

The idea:

  1. If the first node in the bucket matches, it is a direct hit;
  2. If there is a conflict, find the matching entry via key.equals(k):
    in a tree, search the tree with key.equals(k): O(log n);
    in a linked list, scan the list with key.equals(k): O(n).

The code:

public V get(Object key) {
        Node<K,V> e;
        return (e = getNode(hash(key), key)) == null ? null : e.value;
    }

    /**
     * Implements Map.get and related methods
     *
     * @param hash hash for key
     * @param key the key
     * @return the node, or null if none
     */
    final Node<K,V> getNode(int hash, Object key) {
        Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (first = tab[(n - 1) & hash]) != null) {
            if (first.hash == hash && // always check first node
                ((k = first.key) == key || (key != null && key.equals(k))))
                return first;
            if ((e = first.next) != null) {
                if (first instanceof TreeNode)
                    return ((TreeNode<K,V>)first).getTreeNode(hash, key);
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        return e;
                } while ((e = e.next) != null);
            }
        }
        return null;
    }
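One practical consequence of getNode returning e.value: since null values are allowed, get returning null is ambiguous, and containsKey is needed to tell "mapped to null" apart from "absent":

```java
import java.util.HashMap;
import java.util.Map;

public class GetDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("present", null);               // key exists, value is null

        System.out.println(map.get("present")); // null
        System.out.println(map.get("absent"));  // null: indistinguishable by get alone
        System.out.println(map.containsKey("present")); // true
        System.out.println(map.containsKey("absent"));  // false
        // getOrDefault only helps when the stored value is non-null
        System.out.println(map.getOrDefault("absent", "fallback")); // fallback
    }
}
```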

Note (key point, will be on the exam): the put outline above is correct at the level of putVal as a whole, but if you break putVal apart and analyze it piece by piece, the outline is incomplete, because it interacts with HashMap's resizing mechanism. In the next HashMap article I will explain in detail how putVal behaves in different situations, along with the most important function in the resizing mechanism: resize().

JDK 1.8 has an excellent resizing mechanism for HashMap. In the previous article we said that when a list grows past a certain length, the list becomes a red-black tree. But is that really always the case? Let's look at how the treeify function actually works:

final void treeifyBin(Node<K,V>[] tab, int hash) {
        int n, index; Node<K,V> e;
        if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
            resize();
        else if ((e = tab[index = (n - 1) & hash]) != null) {
            TreeNode<K,V> hd = null, tl = null;
            do {
                TreeNode<K,V> p = replacementTreeNode(e, null);
                if (tl == null)
                    hd = p;
                else {
                    p.prev = tl;
                    tl.next = p;
                }
                tl = p;
            } while ((e = e.next) != null);
            if ((tab[index] = hd) != null)
                hd.treeify(tab);
        }
    }

From the very first condition we can see that when the table length is less than 64 (MIN_TREEIFY_CAPACITY), the bin is not actually treeified; the HashMap is resized instead. So resizing is triggered not only when the number of nodes exceeds threshold.

The treeify function is designed this way to keep the overall algorithm well balanced: on a small table, spreading entries out by resizing is cheaper and more effective than building trees.

To understand the resizing mechanism, let's first look at how JDK 1.7 did it. Since I use JDK 1.8, the code below is taken from the internet; if it differs from the actual source, please let me know:

void resize(int newCapacity) {   // pass in the new capacity
    Entry[] oldTable = table;    // reference the pre-resize Entry array
    int oldCapacity = oldTable.length;
    if (oldCapacity == MAXIMUM_CAPACITY) {  // the old array is already at the maximum size (2^30)
        threshold = Integer.MAX_VALUE; // set threshold to Integer.MAX_VALUE (2^31-1) so we never resize again
        return;
    }

    Entry[] newTable = new Entry[newCapacity];  // allocate a new Entry array
    transfer(newTable);                         // !! move the data into the new Entry array
    table = newTable;                           // point HashMap's table field at the new array
    threshold = (int) (newCapacity * loadFactor); // update the threshold
}

The transfer method:

void transfer(Entry[] newTable) {
    Entry[] src = table;                   // src references the old Entry array
    int newCapacity = newTable.length;
    for (int j = 0; j < src.length; j++) { // walk the old Entry array
        Entry<K, V> e = src[j];            // take each element of the old array
        if (e != null) {
            src[j] = null;  // drop the old array's reference (after the loop, the old array references nothing)
            do {
                Entry<K, V> next = e.next;
                int i = indexFor(e.hash, newCapacity); // !! recompute each element's position in the new array
                e.next = newTable[i]; // note [1]: link in at the head of the new bucket
                newTable[i] = e;      // place the element in the array
                e = next;             // move to the next entry on the chain
            } while (e != null);
        }
    }
}

From the code above we can see that it walks each linked list and inserts every element at the head of its new bucket. So the relative order within a list changes: it gets reversed. The figure below shows this transformation clearly:

So what did JDK 1.8 optimize?

(Key point, will be on the exam) First, be clear about one very important fact!! In JDK 1.8 the table length is always a power of two!!

Which means that in JDK 1.8, resize() always doubles the capacity.

 

JDK 1.8 uses the same indexing principle as 1.7: h & (length - 1) gives the node's index.

If we double the length, then length - 1 is a bit pattern ending in a run of 1s with 0s above, and that run is now one bit longer than before.

The bitwise computation is shown in the figure below:

Figure (a) shows the index before expansion, and figure (b) the index after doubling the capacity. JDK 1.8 exploits this very cleverly: we simply check whether the hash bit one position above the old (n - 1) mask is 1 or 0 to decide whether to move the node:

In the figure above, the bit marked with a red dot acts as a flag: we check whether it is 0 or 1 to carry out the resize move. The 16 in the red circle is not a constant; it is the old table length.

The example above also shows that when the table length is only 16, many hashes can share the same index, yet after resizing they no longer do.

This design is indeed very clever: it avoids recomputing the hash, and since the newly significant bit can be considered random (0 or 1), resizing spreads the previously colliding nodes evenly across the new buckets. This is an optimization new in JDK 1.8. One difference to note: in JDK 1.7, when a list was migrated during rehashing, elements landing at the same new array index ended up in reverse order; as the code shows, JDK 1.8 does not reverse them.

We can sketch this with a diagram: below, blue marks entries whose new index bit is 0, and green marks those where it is 1:
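The split can also be sketched numerically. The hashes below are arbitrary values that all share bucket 5 in a 16-slot table; checking the single bit (h & oldCap) predicts exactly where each one lands after doubling:

```java
public class ResizeSplitDemo {
    public static void main(String[] args) {
        int oldCap = 16;
        int newCap = oldCap << 1;       // capacity always doubles

        int[] hashes = {5, 21, 37, 53}; // all map to index 5 when cap = 16
        for (int h : hashes) {
            int oldIndex = h & (oldCap - 1);
            int newIndex = h & (newCap - 1);
            // The "new" bit is exactly (h & oldCap): 0 -> stay, else move by oldCap
            int predicted = (h & oldCap) == 0 ? oldIndex : oldIndex + oldCap;
            System.out.println("h=" + h + " old=" + oldIndex
                    + " new=" + newIndex + " predicted=" + predicted);
        }
    }
}
```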

Admittedly, the JDK 1.8 resize code is much more complex. Everyone says it is well written, but I still have doubts about some of its branches; many of the conditions feel like they overlap, and I need to study them further. Still, the overall flow of resize() in JDK 1.8 is clear: how to grow, how to move the lists, the code handles it all very well:

final Node<K,V>[] resize() {
        Node<K,V>[] oldTab = table;
        int oldCap = (oldTab == null) ? 0 : oldTab.length;
        int oldThr = threshold;
        int newCap, newThr = 0;
        if (oldCap > 0) {
            if (oldCap >= MAXIMUM_CAPACITY) {
                threshold = Integer.MAX_VALUE;
                return oldTab;
            }
            else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                     oldCap >= DEFAULT_INITIAL_CAPACITY)
                newThr = oldThr << 1; // double threshold
        }
        else if (oldThr > 0) // initial capacity was placed in threshold
            newCap = oldThr;
        else {               // zero initial threshold signifies using defaults
            newCap = DEFAULT_INITIAL_CAPACITY;
            newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
        }
        if (newThr == 0) {
            float ft = (float)newCap * loadFactor;
            newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                      (int)ft : Integer.MAX_VALUE);
        }
        threshold = newThr;
        @SuppressWarnings({"rawtypes","unchecked"})
            Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
        table = newTab;
        if (oldTab != null) {
            for (int j = 0; j < oldCap; ++j) {
                Node<K,V> e;
                if ((e = oldTab[j]) != null) {
                    oldTab[j] = null;
                    if (e.next == null)
                        newTab[e.hash & (newCap - 1)] = e;
                    else if (e instanceof TreeNode)
                        ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                    else { // preserve order
                        Node<K,V> loHead = null, loTail = null;
                        Node<K,V> hiHead = null, hiTail = null;
                        Node<K,V> next;
                        do {
                            next = e.next;
                            if ((e.hash & oldCap) == 0) {
                                if (loTail == null)
                                    loHead = e;
                                else
                                    loTail.next = e;
                                loTail = e;
                            }
                            else {
                                if (hiTail == null)
                                    hiHead = e;
                                else
                                    hiTail.next = e;
                                hiTail = e;
                            }
                        } while ((e = next) != null);
                        if (loTail != null) {
                            loTail.next = null;
                            newTab[j] = loHead;
                        }
                        if (hiTail != null) {
                            hiTail.next = null;
                            newTab[j + oldCap] = hiHead;
                        }
                    }
                }
            }
        }
        return newTab;
    }

 

All that said, if you only ever use HashMap, knowing its basic methods and structure is enough. But I am convinced that understanding HashMap's internals strengthens how you understand and apply it, and how you use it in different situations.

Of course, I will say it again: the source code is always the best teacher.

If it doesn't stick the first time, read it another ten times.
