List
- LinkedList
- ArrayList
- Vector
- Stack

Queue
- PriorityQueue
- Deque
- ArrayDeque

Set
- HashSet
- LinkedHashSet
- TreeSet

HashMap
- LinkedHashMap
- TreeMap
First, let's look at LinkedList's field declarations:
transient int size = 0;

/**
 * Pointer to first node.
 * Invariant: (first == null && last == null) ||
 *            (first.prev == null && first.item != null)
 */
transient Node<E> first;

/**
 * Pointer to last node.
 * Invariant: (first == null && last == null) ||
 *            (last.next == null && last.item != null)
 */
transient Node<E> last;
There are three fields: size, first, and last. The names and the comments make it clear enough that this is a doubly linked list.

Note that all three fields carry the transient modifier, which is fairly uncommon. It is related to serialization; to keep this article on collections from jumping around too much, I'll come back to it after the LinkedList material.

At the core, LinkedList stores its data in instances of an inner class, Node.

Its three fields are the predecessor node, the successor node, and the element itself; nothing special.
private static class Node<E> {
    E item;
    Node<E> next;
    Node<E> prev;

    Node(Node<E> prev, E element, Node<E> next) {
        this.item = element;
        this.next = next;
        this.prev = prev;
    }
}
Adding, removing, fetching, and replacing nodes are all fairly simple operations, so I'll only walk through one of them: inserting a node before an existing node.
/**
 * Inserts element e before non-null Node succ.
 */
// Not a public method; public void add(int index, E element) is the caller
void linkBefore(E e, Node<E> succ) { // insert a node holding e before node succ
    // assert succ != null;
    final Node<E> pred = succ.prev;
    // 1. Wrap e in a node whose successor is succ and whose predecessor is succ's
    //    predecessor. From the new node's point of view it is already linked in front
    //    of succ, but the neighbouring nodes do not point to it yet.
    final Node<E> newNode = new Node<>(pred, e, succ);
    // 2. Make the node after e (i.e. succ) point back to e
    succ.prev = newNode;
    // 3. Make the node before e point forward to e; if e is now the first node,
    //    point first at it instead
    if (pred == null)
        first = newNode;
    else
        pred.next = newNode;
    // 4. Increase the list size by 1
    size++;
    // 5. Increase the structural-modification counter by 1
    modCount++;
}
From this one operation, two important characteristics of a linked list are easy to see: first, inserting and removing nodes is cheap, since only a few local node references need to be adjusted; second, random access is slow, because reaching a node means walking the list node by node.

Notice that every operation that structurally changes the list also increments the modification counter, modCount. What is modCount for? It exists to support the iterators.

An iterator, simply put, is the object used to traverse a collection's elements. Calling iterator() or listIterator() creates an iterator that starts at the first node; listIterator(int) also accepts an index for the node at which iteration should start.
private class ListItr implements ListIterator<E> { private Node<E> lastReturned; private Node<E> next; private int nextIndex; private int expectedModCount = modCount; ListItr(int index) { // assert isPositionIndex(index); next = (index == size) ? null : node(index); nextIndex = index; } public boolean hasNext() { return nextIndex < size; } public E next() { checkForComodification(); if (!hasNext()) throw new NoSuchElementException(); lastReturned = next; next = next.next; nextIndex++; return lastReturned.item; } public boolean hasPrevious() { return nextIndex > 0; } public E previous() { checkForComodification(); if (!hasPrevious()) throw new NoSuchElementException(); lastReturned = next = (next == null) ? last : next.prev; nextIndex--; return lastReturned.item; } public int nextIndex() { return nextIndex; } public int previousIndex() { return nextIndex - 1; } public void remove() { checkForComodification(); if (lastReturned == null) throw new IllegalStateException(); Node<E> lastNext = lastReturned.next; unlink(lastReturned); if (next == lastReturned) next = lastNext; else nextIndex--; lastReturned = null; expectedModCount++; } public void set(E e) { if (lastReturned == null) throw new IllegalStateException(); checkForComodification(); lastReturned.item = e; } public void add(E e) { checkForComodification(); lastReturned = null; if (next == null) linkLast(e); else linkBefore(e, next); nextIndex++; expectedModCount++; } public void forEachRemaining(Consumer<? super E> action) { Objects.requireNonNull(action); while (modCount == expectedModCount && nextIndex < size) { action.accept(next.item); lastReturned = next; next = next.next; nextIndex++; } checkForComodification(); } final void checkForComodification() { if (modCount != expectedModCount) throw new ConcurrentModificationException(); } }
From the inner class ListItr we can see that LinkedList's iterator can iterate in both directions, and that during iteration the list may only be modified through the iterator's own add(), set(), and remove(). Why stress only? Because beginners very easily write code like this:
// Although no iterator is used explicitly, the enhanced for loop is
// implemented on top of an iterator under the hood
for (Object o : linkedList) {
    if (o.equals(something)) {
        linkedList.remove(o);
    }
}
This code throws an exception. The class comments tell us that directly modifying the list's structure during iteration is not allowed — the fail-fast mechanism. In the iterator code there is a non-public method, checkForComodification(), which almost every iterator operation calls. Its body does just one thing: check whether the list's modCount equals the iterator's expectedModCount, and throw a ConcurrentModificationException if they differ. That is how direct structural modification during iteration is forbidden. As for why this matters, study the iterator code above and consider what would go wrong if the list structure changed mid-iteration. The correct way to remove elements through an iterator therefore looks like this:
while (iterator.hasNext()) {
    Object o = iterator.next();
    if (o.equals(something)) {
        iterator.remove();
    }
}
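To make the difference concrete, here is a small, self-contained demo (the class name FailFastDemo is mine, and List.of requires Java 9+): removing through the list itself trips the fail-fast check, while the iterator's own remove() does not.

import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

public class FailFastDemo {
    public static void main(String[] args) {
        // Modifying the list directly inside a for-each loop trips the
        // fail-fast check on the next call to next()
        List<String> bad = new LinkedList<>(List.of("a", "b", "c"));
        try {
            for (String s : bad) {
                if (s.equals("a")) {
                    bad.remove(s); // bumps modCount behind the iterator's back
                }
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast triggered: " + e);
        }

        // Removing through the iterator keeps expectedModCount in sync
        List<String> good = new LinkedList<>(List.of("a", "b", "c"));
        Iterator<String> it = good.iterator();
        while (it.hasNext()) {
            if (it.next().equals("a")) {
                it.remove();
            }
        }
        System.out.println(good); // [b, c]
    }
}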
The LinkedList iterator code also contains a forEachRemaining() method. Here is its description from the Iterator interface:
/** * Performs the given action for each remaining element until all elements * have been processed or the action throws an exception. Actions are * performed in the order of iteration, if that order is specified. * Exceptions thrown by the action are relayed to the caller. * * @implSpec * <p>The default implementation behaves as if: * <pre>{@code * while (hasNext()) * action.accept(next()); * }</pre> * * @param action The action to be performed for each element * @throws NullPointerException if the specified action is null * @since 1.8 */ default void forEachRemaining(Consumer<? super E> action) { Objects.requireNonNull(action); while (hasNext()) action.accept(next()); }
The method was added in JDK 1.8 and applies a given action to each remaining element of the collection. The parameter is a Consumer, a functional interface with a single abstract method, so it is clearly meant to be used with lambdas to make the Java code more concise. For example:
iterator.forEachRemaining(System.out::println);

iterator.forEachRemaining(item -> {
    System.out.println(item);
});
Continuing through the LinkedList source we come across a Spliterator, an iterator type added in JDK 1.8. It is a splittable iterator: being splittable means the iteration work can be handed to different threads, improving throughput. For convenience, the source analysis is given as comments in the code below:
@Override
public Spliterator<E> spliterator() {
    return new LLSpliterator<E>(this, -1, 0);
}

/** A customized variant of Spliterators.IteratorSpliterator */
// A variant of IteratorSpliterator. Difference: IteratorSpliterator iterates through an
// iterator, whereas LLSpliterator follows the Node next references directly, which in
// principle is faster.
static final class LLSpliterator<E> implements Spliterator<E> {
    static final int BATCH_UNIT = 1 << 10;  // batch array size increment; each split grows the next batch length by 1024
    static final int MAX_BATCH = 1 << 25;   // max batch array size; beyond 2^25 the batch length no longer grows
    final LinkedList<E> list; // null OK unless traversed
    Node<E> current;      // current node; null until initialized
    int est;              // size estimate; -1 until first needed
    int expectedModCount; // initialized when est set
    int batch;            // batch size for splits; the current split length

    LLSpliterator(LinkedList<E> list, int est, int expectedModCount) {
        this.list = list;
        this.est = est;
        this.expectedModCount = expectedModCount;
    }

    final int getEst() {
        int s; // force initialization
        final LinkedList<E> lst;
        if ((s = est) < 0) {
            if ((lst = list) == null)
                s = est = 0;
            else {
                expectedModCount = lst.modCount;
                current = lst.first;
                s = est = lst.size;
            }
        }
        return s;
    }

    public long estimateSize() { return (long) getEst(); }

    // Split off a Spliterator covering batch elements
    public Spliterator<E> trySplit() {
        Node<E> p;
        int s = getEst();
        if (s > 1 && (p = current) != null) {
            // Each split grows the batch by BATCH_UNIT; once MAX_BATCH is reached it stops growing
            int n = batch + BATCH_UNIT;
            if (n > s)
                n = s;
            if (n > MAX_BATCH)
                n = MAX_BATCH;
            // Copy the elements to be split off into an array
            Object[] a = new Object[n];
            int j = 0;
            do { a[j++] = p.item; } while ((p = p.next) != null && j < n);
            current = p;
            batch = j;
            est = s - j;
            // Return the new Spliterator. Note that it is an ArraySpliterator, whose
            // implementation differs: ArraySpliterator splits itself in halves, whereas
            // IteratorSpliterator grows arithmetically.
            return Spliterators.spliterator(a, 0, j, Spliterator.ORDERED);
        }
        return null;
    }

    // Traverse all remaining elements and apply action to each value (bulk operation)
    public void forEachRemaining(Consumer<? super E> action) {
        Node<E> p; int n;
        if (action == null) throw new NullPointerException();
        if ((n = getEst()) > 0 && (p = current) != null) {
            current = null;
            est = 0;
            do {
                E e = p.item;
                p = p.next;
                action.accept(e);
            } while (p != null && --n > 0);
        }
        if (list.modCount != expectedModCount)
            throw new ConcurrentModificationException();
    }

    // Apply action to the current element's value only (advances by one element)
    public boolean tryAdvance(Consumer<? super E> action) {
        Node<E> p;
        if (action == null) throw new NullPointerException();
        if (getEst() > 0 && (p = current) != null) {
            --est;
            E e = p.item;
            current = p.next;
            action.accept(e);
            if (list.modCount != expectedModCount)
                throw new ConcurrentModificationException();
            return true;
        }
        return false;
    }

    public int characteristics() {
        return Spliterator.ORDERED | Spliterator.SIZED | Spliterator.SUBSIZED;
    }
}
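As a quick illustration of what splitting means in practice, here is a hedged sketch (class name and element counts are mine): with more than BATCH_UNIT elements, the first trySplit() hands off an array-backed chunk of 1024 elements that another thread — or a parallel stream, the usual consumer — could process independently.

import java.util.LinkedList;
import java.util.List;
import java.util.Spliterator;
import java.util.concurrent.atomic.AtomicInteger;

public class SpliteratorDemo {
    public static void main(String[] args) {
        List<Integer> list = new LinkedList<>();
        for (int i = 0; i < 1500; i++) {
            list.add(i);
        }

        Spliterator<Integer> rest = list.spliterator();
        // The first trySplit() copies up to BATCH_UNIT (1024) elements into an
        // array-backed Spliterator that another thread could consume
        Spliterator<Integer> chunk = rest.trySplit();

        AtomicInteger chunkCount = new AtomicInteger();
        AtomicInteger restCount = new AtomicInteger();
        chunk.forEachRemaining(n -> chunkCount.incrementAndGet());
        rest.forEachRemaining(n -> restCount.incrementAndGet());

        System.out.println("chunk size: " + chunkCount); // 1024
        System.out.println("remaining:  " + restCount);  // 476
    }
}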
Now for the transient keyword promised earlier. Baidu Baike explains it as follows:

A Java keyword and variable modifier: if an instance variable is declared transient, its value does not need to be preserved when the object is stored. In other words, member variables marked with transient do not take part in serialization.

That explanation barely explains anything. In practice, "storing an object" means serializing it, so here is a program that should make the behaviour clearer:
package Serializable;

import java.io.*;

public class SerializableTest {

    private static void testSerializable(AbstractClass cl) throws IOException {
        // Read each field of the object in turn
        System.out.printf("Minor version number: %d%n", cl.getMinorVer());
        System.out.printf("Major version number: %d%n", cl.getMajorVer());
        cl.showString();
        System.out.println();

        // Write the object to disk
        File file = new File("resource/ObjectRecords.txt");
        if (!file.exists()) {
            file.createNewFile();
        }
        try (FileOutputStream fos = new FileOutputStream(file);
             ObjectOutputStream oos = new ObjectOutputStream(fos)) {
            oos.writeObject(cl);
        }

        // Clear the reference to the object
        cl = null;

        // Read the object back from disk
        try (FileInputStream fis = new FileInputStream("resource/ObjectRecords.txt");
             ObjectInputStream ois = new ObjectInputStream(fis)) {
            cl = (AbstractClass) ois.readObject();
            // Read each field of the deserialized object in turn
            System.out.printf("Minor version number: %d%n", cl.getMinorVer());
            System.out.printf("Major version number: %d%n", cl.getMajorVer());
            cl.showString();
            System.out.println();
        } catch (ClassNotFoundException cnfe) {
            System.err.println(cnfe.getMessage());
        }
    }

    public static void main(String[] args) throws IOException {
        ClassSerializable cl1 = new ClassSerializable("string");
        testSerializable(cl1);
        ClassAllSerializable cl2 = new ClassAllSerializable("string");
        testSerializable(cl2);
        ClassNotSerializable cl3 = new ClassNotSerializable("string");
        testSerializable(cl3);
    }
}
As the main method shows, three classes are used for the test: ClassSerializable, ClassAllSerializable, and ClassNotSerializable. The first two implement the Serializable interface, so writeObject succeeds for them, while ClassNotSerializable throws java.io.NotSerializableException: Serializable.ClassNotSerializable. The difference between the first two is that every field of ClassSerializable is marked transient, while none of ClassAllSerializable's fields are. The program output is:
ClassSerializable called
Minor version number: 1
Major version number: 2
string

Minor version number: 0
Major version number: 0
null

ClassAllSerializable called
Minor version number: 1
Major version number: 2
string

Minor version number: 1
Major version number: 2
string

ClassNotSerializable called
Minor version number: 1
Major version number: 2
string

Exception in thread "main" java.io.NotSerializableException: Serializable.ClassNotSerializable
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
    at Serializable.SerializableTest.testSerializable(SerializableTest.java:19)
    at Serializable.SerializableTest.main(SerializableTest.java:42)
It is plain to see that fields marked transient are not preserved across serialization and deserialization.
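The helper classes used in the test above are not shown, so here is a minimal, independent sketch of the same effect (the class and field names are mine): after a serialize/deserialize round trip, the transient field falls back to its default value while the ordinary field survives.

import java.io.*;

public class TransientSketch implements Serializable {
    private static final long serialVersionUID = 1L;

    transient int cachedValue = 42; // skipped by serialization, restored as the default 0
    String name = "kept";           // serialized normally

    public static void main(String[] args) throws Exception {
        TransientSketch original = new TransientSketch();

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);
        }

        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            TransientSketch copy = (TransientSketch) in.readObject();
            System.out.println(copy.cachedValue); // 0  (transient: not restored)
            System.out.println(copy.name);        // kept
        }
    }
}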
Now for ArrayList. As before, a first look at the field declarations:
/** * Default initial capacity. */ private static final int DEFAULT_CAPACITY = 10; /** * Shared empty array instance used for empty instances. */ private static final Object[] EMPTY_ELEMENTDATA = {}; /** * Shared empty array instance used for default sized empty instances. We * distinguish this from EMPTY_ELEMENTDATA to know how much to inflate when * first element is added. */ private static final Object[] DEFAULTCAPACITY_EMPTY_ELEMENTDATA = {}; /** * The array buffer into which the elements of the ArrayList are stored. * The capacity of the ArrayList is the length of this array buffer. Any * empty ArrayList with elementData == DEFAULTCAPACITY_EMPTY_ELEMENTDATA * will be expanded to DEFAULT_CAPACITY when the first element is added. */ transient Object[] elementData; // non-private to simplify nested class access /** * The size of the ArrayList (the number of elements it contains). * * @serial */ private int size;
Clearly, elementData is where the elements are stored, which means ArrayList is backed by an array.

We all know that an array's size is fixed once it is created, yet a list's elements need to be added and removed. So how does ArrayList change its size?

As you might expect, the capacity has to grow during add():
public boolean add(E e) { ensureCapacityInternal(size + 1); // Increments modCount!! elementData[size++] = e; return true; } public boolean addAll(Collection<? extends E> c) { Object[] a = c.toArray(); int numNew = a.length; ensureCapacityInternal(size + numNew); // Increments modCount System.arraycopy(a, 0, elementData, size, numNew); size += numNew; return numNew != 0; } private void ensureCapacityInternal(int minCapacity) { if (elementData == DEFAULTCAPACITY_EMPTY_ELEMENTDATA) { minCapacity = Math.max(DEFAULT_CAPACITY, minCapacity); } ensureExplicitCapacity(minCapacity); } private void ensureExplicitCapacity(int minCapacity) { modCount++; // overflow-conscious code if (minCapacity - elementData.length > 0) grow(minCapacity); } private void grow(int minCapacity) { // overflow-conscious code int oldCapacity = elementData.length; int newCapacity = oldCapacity + (oldCapacity >> 1); if (newCapacity - minCapacity < 0) newCapacity = minCapacity; if (newCapacity - MAX_ARRAY_SIZE > 0) newCapacity = hugeCapacity(minCapacity); // minCapacity is usually close to size, so this is a win: elementData = Arrays.copyOf(elementData, newCapacity); }
The key is the grow() method, so let's analyse its behaviour. The parameter minCapacity is really just a reference value, usually the current size plus the number of elements being added. grow() first proposes 1.5 times the old capacity (oldCapacity + oldCapacity >> 1); only if minCapacity is larger does it use minCapacity instead, and if the result exceeds MAX_ARRAY_SIZE it falls back to hugeCapacity() to grow as far as it can. Finally it allocates a new array and copies the elements over with Arrays.copyOf().

ArrayList's add and remove operations all come down to array copying (Arrays.copyOf() or System.arraycopy()). At this point the contrast with LinkedList is clear: because its backing store is an array, ArrayList excels at random access; but because inserting or removing elements means copying parts of the array, those operations are comparatively expensive, and the extra copies increase memory churn and the likelihood of triggering GC.
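A practical consequence, shown in the small sketch below (class name is mine): if you already know roughly how many elements will be added, passing that size to the constructor avoids the repeated grow-and-copy cycles.

import java.util.ArrayList;
import java.util.List;

public class PreSizeDemo {
    public static void main(String[] args) {
        int n = 1_000_000;

        // Default construction: the backing array starts empty and is regrown
        // (1.5x each time) as elements arrive, each regrowth copying the array.
        List<Integer> grown = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            grown.add(i);
        }

        // Passing the expected size up front allocates the backing array once,
        // avoiding the intermediate Arrays.copyOf() calls.
        List<Integer> preSized = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            preSized.add(i);
        }

        System.out.println(grown.size() + " " + preSized.size());
    }
}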
ArrayList also overrides four methods added in JDK 1.8 — forEach, removeIf, replaceAll, and sort — all of which can take lambda expressions as arguments. They come from different interfaces. LinkedList has these methods too, but it relies on the default implementations of its parent interfaces, whereas ArrayList overrides them. The analysis is added as comments in the code below:
@Override
// The default implementation iterates with an iterator; the logic below is the same,
// but it works on the array directly with a simplified co-modification check
public void forEach(Consumer<? super E> action) {
    Objects.requireNonNull(action);
    final int expectedModCount = modCount;
    @SuppressWarnings("unchecked")
    final E[] elementData = (E[]) this.elementData;
    final int size = this.size;
    for (int i=0; modCount == expectedModCount && i < size; i++) {
        action.accept(elementData[i]);
    }
    if (modCount != expectedModCount) {
        throw new ConcurrentModificationException();
    }
}

@Override
// The default implementation iterates and calls remove() whenever the predicate matches.
// As discussed above, random removal in an ArrayList is expensive, so the default would
// be a poor fit. Instead, the indices of the elements to delete are recorded, and the
// surviving elements are then compacted back into this.elementData in order.
public boolean removeIf(Predicate<? super E> filter) {
    Objects.requireNonNull(filter);
    // figure out which elements are to be removed
    // any exception thrown from the filter predicate at this stage
    // will leave the collection unmodified
    int removeCount = 0;
    final BitSet removeSet = new BitSet(size);
    final int expectedModCount = modCount;
    final int size = this.size;
    for (int i=0; modCount == expectedModCount && i < size; i++) {
        @SuppressWarnings("unchecked")
        final E element = (E) elementData[i];
        if (filter.test(element)) {
            removeSet.set(i);
            removeCount++;
        }
    }
    if (modCount != expectedModCount) {
        throw new ConcurrentModificationException();
    }

    // shift surviving elements left over the spaces left by removed elements
    final boolean anyToRemove = removeCount > 0;
    if (anyToRemove) {
        final int newSize = size - removeCount;
        for (int i=0, j=0; (i < size) && (j < newSize); i++, j++) {
            i = removeSet.nextClearBit(i);
            elementData[j] = elementData[i];
        }
        for (int k=newSize; k < size; k++) {
            elementData[k] = null;  // Let gc do its work
        }
        this.size = newSize;
        if (modCount != expectedModCount) {
            throw new ConcurrentModificationException();
        }
        modCount++;
    }

    return anyToRemove;
}

@Override
@SuppressWarnings("unchecked")
// The default implementation iterates with an iterator; the logic below is the same,
// working on the array directly with a simplified check
public void replaceAll(UnaryOperator<E> operator) {
    Objects.requireNonNull(operator);
    final int expectedModCount = modCount;
    final int size = this.size;
    for (int i=0; modCount == expectedModCount && i < size; i++) {
        elementData[i] = operator.apply((E) elementData[i]);
    }
    if (modCount != expectedModCount) {
        throw new ConcurrentModificationException();
    }
    modCount++;
}

@Override
@SuppressWarnings("unchecked")
// The default method first converts the collection to an array with toArray() and then
// calls Arrays.sort. For ArrayList that conversion is clearly redundant, since its
// element container already is an array.
public void sort(Comparator<? super E> c) {
    final int expectedModCount = modCount;
    Arrays.sort((E[]) elementData, 0, size, c);
    if (modCount != expectedModCount) {
        throw new ConcurrentModificationException();
    }
    modCount++;
}
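For completeness, here is what calling these four methods looks like with lambdas (a small sketch of my own; List.of requires Java 9+):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class Jdk8ListMethodsDemo {
    public static void main(String[] args) {
        List<String> words = new ArrayList<>(List.of("pear", "apple", "banana", "fig"));

        words.removeIf(w -> w.length() > 5);     // drops "banana"
        words.replaceAll(String::toUpperCase);   // PEAR, APPLE, FIG
        words.sort(Comparator.naturalOrder());   // APPLE, FIG, PEAR
        words.forEach(System.out::println);
    }
}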
A quick read through Vector's source and comments shows that it has existed since JDK 1.0, while ArrayList and LinkedList were only added in JDK 1.2. In JDK 1.2 Vector was retrofitted to implement List, and most of it differs little from ArrayList apart from the synchronized keyword on its public methods. In other words, since JDK 1.2 Vector is essentially a thread-safe ArrayList, which its comments also mention:
/** ... * <p>As of the Java 2 platform v1.2, this class was retrofitted to * implement the {@link List} interface, making it a member of the * <a href="{@docRoot}/../technotes/guides/collections/index.html"> * Java Collections Framework</a>. Unlike the new collection * implementations, {@code Vector} is synchronized. If a thread-safe * implementation is not needed, it is recommended to use {@link * ArrayList} in place of {@code Vector}. ... */
Stack extends Vector and adds the stack operations (push, pop) on top of it.

The comments also point out that the LIFO stack operations are provided by implementations of the double-ended queue interface Deque as well — the example given is ArrayDeque. Queues and deques are covered later:
/** ... * <p>A more complete and consistent set of LIFO stack operations is * provided by the {@link Deque} interface and its implementations, which * should be used in preference to this class. For example: * <pre> {@code * Deque<Integer> stack = new ArrayDeque<Integer>();}</pre> ... */
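Following that recommendation, a quick sketch of using ArrayDeque as a stack (class name is mine):

import java.util.ArrayDeque;
import java.util.Deque;

public class StackDemo {
    public static void main(String[] args) {
        // The Deque-based replacement recommended by the Stack javadoc
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(1);
        stack.push(2);
        stack.push(3);

        System.out.println(stack.peek()); // 3 (last in, first out)
        System.out.println(stack.pop());  // 3
        System.out.println(stack.pop());  // 2
    }
}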
Let's first look at the Queue interface methods:
public interface Queue<E> extends Collection<E> { /** * Inserts the specified element into this queue if it is possible to do so * immediately without violating capacity restrictions, returning * {@code true} upon success and throwing an {@code IllegalStateException} * if no space is currently available. * * @param e the element to add * @return {@code true} (as specified by {@link Collection#add}) * @throws IllegalStateException if the element cannot be added at this * time due to capacity restrictions * @throws ClassCastException if the class of the specified element * prevents it from being added to this queue * @throws NullPointerException if the specified element is null and * this queue does not permit null elements * @throws IllegalArgumentException if some property of this element * prevents it from being added to this queue */ boolean add(E e); /** * Inserts the specified element into this queue if it is possible to do * so immediately without violating capacity restrictions. * When using a capacity-restricted queue, this method is generally * preferable to {@link #add}, which can fail to insert an element only * by throwing an exception. * * @param e the element to add * @return {@code true} if the element was added to this queue, else * {@code false} * @throws ClassCastException if the class of the specified element * prevents it from being added to this queue * @throws NullPointerException if the specified element is null and * this queue does not permit null elements * @throws IllegalArgumentException if some property of this element * prevents it from being added to this queue */ boolean offer(E e); /** * Retrieves and removes the head of this queue. This method differs * from {@link #poll poll} only in that it throws an exception if this * queue is empty. * * @return the head of this queue * @throws NoSuchElementException if this queue is empty */ E remove(); /** * Retrieves and removes the head of this queue, * or returns {@code null} if this queue is empty. * * @return the head of this queue, or {@code null} if this queue is empty */ E poll(); /** * Retrieves, but does not remove, the head of this queue. This method * differs from {@link #peek peek} only in that it throws an exception * if this queue is empty. * * @return the head of this queue * @throws NoSuchElementException if this queue is empty */ E element(); /** * Retrieves, but does not remove, the head of this queue, * or returns {@code null} if this queue is empty. * * @return the head of this queue, or {@code null} if this queue is empty */ E peek(); }
Both element() and peek() retrieve the head of the queue without removing it; the difference is that on an empty queue peek() returns null while element() throws NoSuchElementException.
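A short demo makes the two families of methods easy to remember (class name is mine): on an empty queue the "quiet" pair returns null, while the other pair throws.

import java.util.LinkedList;
import java.util.NoSuchElementException;
import java.util.Queue;

public class QueueMethodsDemo {
    public static void main(String[] args) {
        Queue<String> queue = new LinkedList<>();
        queue.offer("a");

        System.out.println(queue.peek()); // "a" (head, not removed)
        System.out.println(queue.poll()); // "a" (head, removed)

        // Now the queue is empty: the "quiet" methods return null...
        System.out.println(queue.peek()); // null
        System.out.println(queue.poll()); // null

        // ...while the throwing variants signal the same condition with an exception
        try {
            queue.element();
        } catch (NoSuchElementException e) {
            System.out.println("element() threw NoSuchElementException");
        }
    }
}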
For queues, the usual behaviour is first-in, first-out (as opposed to a stack's last-in, first-out) — LinkedList is an example. The queue-specific methods offer() and poll() insert and take elements, and the other pair, add() and remove(), also normally insert at the tail and take from the head, so their implementations are usually the same; when a collection is being used as a queue, though, the queue-specific names express the intent more clearly. A PriorityQueue is different from an ordinary queue: because elements have priorities, it is not first-in, first-out. Let's look at the PriorityQueue source, starting as usual with the fields:
/** * Priority queue represented as a balanced binary heap: the two * children of queue[n] are queue[2*n+1] and queue[2*(n+1)]. The * priority queue is ordered by comparator, or by the elements' * natural ordering, if comparator is null: For each node n in the * heap and each descendant d of n, n <= d. The element with the * lowest value is in queue[0], assuming the queue is nonempty. */ transient Object[] queue; // non-private to simplify nested class access /** * The number of elements in the priority queue. */ private int size = 0; /** * The comparator, or null if priority queue uses elements' * natural ordering. */ private final Comparator<? super E> comparator; /** * The number of times this priority queue has been * <i>structurally modified</i>. See AbstractList for gory details. */ transient int modCount = 0; // non-private to simplify nested class access
Clearly queue is the underlying element store, an Object array. The difference is that this array actually represents a balanced binary heap (smaller elements toward the top, larger toward the bottom): when queue[n] is a parent, queue[2n+1] is its left child and queue[2(n+1)] its right child. The ordering — that is, the priority — is decided by comparing elements with the comparator (or their natural ordering).

Next, the source for insertion:
public boolean offer(E e) { if (e == null) throw new NullPointerException(); modCount++; int i = size; if (i >= queue.length) grow(i + 1); size = i + 1; if (i == 0) queue[0] = e; else siftUp(i, e); return true; } private void siftUp(int k, E x) { if (comparator != null) siftUpUsingComparator(k, x); else siftUpComparable(k, x); } private void siftUpUsingComparator(int k, E x) { while (k > 0) { int parent = (k - 1) >>> 1; Object e = queue[parent]; if (comparator.compare(x, (E) e) >= 0) break; queue[k] = e; k = parent; } queue[k] = x; }
The insertion process is classic heap behaviour: starting from the last position in the binary tree, the new element is compared with its parent; if it is smaller than the parent (i.e. has higher priority), the parent is moved down and the comparison continues, until the element is no smaller than its parent, at which point it is placed. The goal is to keep lower-priority elements toward the bottom of the heap so that the highest-priority element is the first one taken out. Now the removal method:
public E poll() { if (size == 0) return null; int s = --size; modCount++; E result = (E) queue[0]; E x = (E) queue[s]; queue[s] = null; if (s != 0) siftDown(0, x); return result; } private void siftDown(int k, E x) { if (comparator != null) siftDownUsingComparator(k, x); else siftDownComparable(k, x); } private void siftDownComparable(int k, E x) { Comparable<? super E> key = (Comparable<? super E>)x; int half = size >>> 1; // loop while a non-leaf while (k < half) { int child = (k << 1) + 1; // assume left child is least Object c = queue[child]; int right = child + 1; if (right < size && ((Comparable<? super E>) c).compareTo((E) queue[right]) > 0) c = queue[child = right]; if (key.compareTo((E) c) <= 0) break; queue[k] = c; k = child; } queue[k] = key; }
Element 0 (the highest-priority element) is taken out, the last element is detached, and siftDown moves it down from position 0: at each level the smaller child is promoted into the hole, until the detached element is no larger than the current child, at which point it fills that spot.
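To see the heap order from the outside, a small usage sketch (class name is mine): with the natural ordering poll() always returns the smallest element first, and a reversed comparator flips the priority.

import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.Queue;

public class PriorityQueueDemo {
    public static void main(String[] args) {
        // Natural ordering: the smallest element sits at queue[0]
        Queue<Integer> minHeap = new PriorityQueue<>();
        minHeap.offer(5);
        minHeap.offer(1);
        minHeap.offer(3);
        System.out.println(minHeap.poll()); // 1
        System.out.println(minHeap.poll()); // 3

        // A reversed comparator turns it into a max-heap
        Queue<Integer> maxHeap = new PriorityQueue<>(Comparator.reverseOrder());
        maxHeap.offer(5);
        maxHeap.offer(1);
        maxHeap.offer(3);
        System.out.println(maxHeap.poll()); // 5
    }
}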
Deque extends Queue; the difference is that a Deque is a double-ended queue, in which either end can serve as the head or the tail.

A representative Deque implementation is ArrayDeque, a circular double-ended queue backed by an array. Building a circular deque out of node objects with predecessor/successor references (the way C code would use pointers) is more intuitive, but when random insertion and removal of nodes are not needed, an array-based implementation has less overhead in Java than a simulated linked list. Straight to the source:
/** * The array in which the elements of the deque are stored. * The capacity of the deque is the length of this array, which is * always a power of two. The array is never allowed to become * full, except transiently within an addX method where it is * resized (see doubleCapacity) immediately upon becoming full, * thus avoiding head and tail wrapping around to equal each * other. We also guarantee that all array cells not holding * deque elements are always null. */ transient Object[] elements; // non-private to simplify nested class access /** * The index of the element at the head of the deque (which is the * element that would be removed by remove() or pop()); or an * arbitrary number equal to tail if the deque is empty. */ transient int head; /** * The index at which the next element would be added to the tail * of the deque (via addLast(E), add(E), or push(E)). */ transient int tail;
There are two int fields which, judging by their names, act as head and tail pointers (not real pointers, but they play that role). I won't go through each method; the circular behaviour comes from moving these head and tail indices rather than, say, shifting every remaining element forward after taking element 0, which would be very expensive. Advancing the head or tail only means updating an index, so although the array is linear it behaves like a ring.
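To illustrate the index trick without quoting the whole class, here is a toy ring buffer of my own (not the JDK code; growth and full-buffer handling are omitted): because the capacity is a power of two, masking with capacity - 1 wraps the head and tail indices around the array.

public class RingSketch {
    private final Object[] elements = new Object[8]; // power of two
    private int head; // index of the first element
    private int tail; // index at which the next element is added at the tail

    void addLast(Object e) {
        elements[tail] = e;
        tail = (tail + 1) & (elements.length - 1); // wrap instead of shifting elements
    }

    Object pollFirst() {
        Object e = elements[head];
        if (e == null) return null;               // an empty slot means an empty deque
        elements[head] = null;
        head = (head + 1) & (elements.length - 1);
        return e;
    }

    public static void main(String[] args) {
        RingSketch ring = new RingSketch();
        for (int i = 0; i < 5; i++) ring.addLast(i);
        System.out.println(ring.pollFirst()); // 0
        System.out.println(ring.pollFirst()); // 1
    }
}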
On to HashSet. As usual, the fields come first. Set PRESENT aside for the moment — it is not obvious at first glance what it is for. The map field, though, already suggests that HashSet stores its elements in a HashMap underneath:
private transient HashMap<E,Object> map;

// Dummy value to associate with an Object in the backing Map
private static final Object PRESENT = new Object();
We haven't looked at the HashMap source yet, so for now the class comments are enough to understand the relationship between HashSet and HashMap:
/** * This class implements the <tt>Set</tt> interface, backed by a hash table * (actually a <tt>HashMap</tt> instance). It makes no guarantees as to the * iteration order of the set; in particular, it does not guarantee that the * order will remain constant over time. This class permits the <tt>null</tt> * element. ... **/
So it is backed by a hash table (actually a HashMap), it makes no guarantee about the set's iteration order, nor that the order will stay constant over time, and it permits a null element. Continuing through the source, here is how elements are stored via the HashMap:
public boolean add(E e) {
    return map.put(e, PRESENT)==null;
}
So the set's elements are stored as the HashMap's keys, with the constant object PRESENT as every value, exploiting the fact that map keys cannot repeat. That is really all there is to the HashSet source; the rest of the class is essentially delegation to HashMap.
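That one line also explains add()'s return value, as a tiny demo of my own shows:

import java.util.HashSet;
import java.util.Set;

public class HashSetAddDemo {
    public static void main(String[] args) {
        Set<String> set = new HashSet<>();

        // map.put(e, PRESENT) returns null for a new key, so add() returns true
        System.out.println(set.add("a")); // true
        // for a duplicate key put() returns the previous value (PRESENT), so add() returns false
        System.out.println(set.add("a")); // false
        System.out.println(set.size());   // 1
    }
}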
LinkedHashSet extends HashSet and calls a different HashSet constructor, using a LinkedHashMap as the underlying store:
// The dummy parameter only distinguishes this constructor from the other ones
// that take initialCapacity and loadFactor
HashSet(int initialCapacity, float loadFactor, boolean dummy) {
    map = new LinkedHashMap<>(initialCapacity, loadFactor);
}
Again, the class comments give a quick picture of LinkedHashSet:
/** * <p>Hash table and linked list implementation of the <tt>Set</tt> interface, * with predictable iteration order. This implementation differs from * <tt>HashSet</tt> in that it maintains a doubly-linked list running through * all of its entries. This linked list defines the iteration ordering, * which is the order in which elements were inserted into the set * (<i>insertion-order</i>). Note that insertion order is <i>not</i> affected * if an element is <i>re-inserted</i> into the set. (An element <tt>e</tt> * is reinserted into a set <tt>s</tt> if <tt>s.add(e)</tt> is invoked when * <tt>s.contains(e)</tt> would return <tt>true</tt> immediately prior to * the invocation.)
It implements the Set interface with a hash table (again, actually a HashMap) plus a linked list. Unlike HashSet, LinkedHashSet maintains a doubly linked list through all of its entries to define the iteration order, which is the order in which elements were inserted.

TreeSet implements NavigableSet, which in turn extends SortedSet; it uses a TreeMap as its underlying store, and its methods likewise delegate to TreeMap:
/** * A {@link NavigableSet} implementation based on a {@link TreeMap}. * The elements are ordered using their {@linkplain Comparable natural * ordering}, or by a {@link Comparator} provided at set creation * time, depending on which constructor is used. * * <p>This implementation provides guaranteed log(n) time cost for the basic * operations ({@code add}, {@code remove} and {@code contains}). *
TreeSet keeps its elements sorted, either by a Comparator supplied at construction time or by their natural ordering; the price is that add, remove, and contains take log(n) time (more than HashSet).
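The three Set implementations are easiest to tell apart by their iteration order (a small sketch of my own; List.of requires Java 9+):

import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.TreeSet;

public class SetOrderDemo {
    public static void main(String[] args) {
        List<String> input = List.of("banana", "apple", "cherry");

        // HashSet: iteration order depends on the hash table layout
        System.out.println(new HashSet<>(input));

        // LinkedHashSet: insertion order, thanks to its doubly linked list
        System.out.println(new LinkedHashSet<>(input)); // [banana, apple, cherry]

        // TreeSet: sorted order, via the backing TreeMap
        System.out.println(new TreeSet<>(input));       // [apple, banana, cherry]
    }
}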
Now on to HashMap. First, the key points from its class comments:
* <p>An instance of <tt>HashMap</tt> has two parameters that affect its * performance: <i>initial capacity</i> and <i>load factor</i>. The * <i>capacity</i> is the number of buckets in the hash table, and the initial * capacity is simply the capacity at the time the hash table is created. The * <i>load factor</i> is a measure of how full the hash table is allowed to * get before its capacity is automatically increased. When the number of * entries in the hash table exceeds the product of the load factor and the * current capacity, the hash table is <i>rehashed</i> (that is, internal data * structures are rebuilt) so that the hash table has approximately twice the * number of buckets.
Two parameters affect HashMap's performance: the initial capacity and the load factor. The initial capacity is simply the size of the hash table when the map is created; the load factor is the measure that triggers automatic resizing of the table. When the number of entries exceeds the product of the load factor and the current capacity, the HashMap is rehashed — its internal structure rebuilt — so that the table ends up with roughly twice as many buckets.
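As a concrete illustration (class name is mine; the internal table is not directly observable, so the comments describe what happens inside): with capacity 16 and the default load factor 0.75 the resize threshold is 12, and the 13th mapping triggers a resize to 32 buckets.

import java.util.HashMap;
import java.util.Map;

public class LoadFactorDemo {
    public static void main(String[] args) {
        // Capacity 16, load factor 0.75 => threshold = 16 * 0.75 = 12.
        // Adding the 13th mapping pushes size past the threshold, so resize()
        // doubles the table to 32 buckets (and the threshold to 24).
        Map<Integer, Integer> map = new HashMap<>(16, 0.75f);
        for (int i = 1; i <= 13; i++) {
            map.put(i, i);
        }
        System.out.println(map.size()); // 13, now stored in the resized table
    }
}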
* This map usually acts as a binned (bucketed) hash table, but * when bins get too large, they are transformed into bins of * TreeNodes, each structured similarly to those in * java.util.TreeMap. Most methods try to use normal bins, but * relay to TreeNode methods when applicable (simply by checking * instanceof a node). Bins of TreeNodes may be traversed and * used like any others, but additionally support faster lookup * when overpopulated. However, since the vast majority of bins in * normal use are not overpopulated, checking for existence of * tree bins may be delayed in the course of table methods. *
The comments describe each slot of the hash table as a bin. The elements stored in a bin (sometimes several, due to hash collisions) can have one of two structures, and the bin is named accordingly. The first is a normal bin — the common case, when the bin holds few elements — which links them together as a list. The second is a tree bin, used when many elements hash into the same bin; those elements are then converted into a structure called a red-black tree.

With that background, on to the source.

First, of course, the fields:
/** * The table, initialized on first use, and resized as * necessary. When allocated, length is always a power of two. * (We also tolerate length zero in some operations to allow * bootstrapping mechanics that are currently not needed.) */ transient Node<K,V>[] table;
table is obviously the hash table that stores the map's entries; it is a Node array, and each slot is one of the bins described in the comments.

As mentioned above, a bin's contents can take two forms: initially a linked list made of Node objects; once the number of elements in a bin exceeds 8 (TREEIFY_THRESHOLD), it is converted into a red-black tree made of TreeNode objects.
/** * Basic hash bin node, used for most entries. (See below for * TreeNode subclass, and in LinkedHashMap for its Entry subclass.) */ static class Node<K,V> implements Map.Entry<K,V> { final int hash; final K key; V value; Node<K,V> next; ... /** * Entry for Tree bins. Extends LinkedHashMap.Entry (which in turn * extends Node) so can be used as extension of either regular or * linked node. */ static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> { TreeNode<K,V> parent; // red-black tree links TreeNode<K,V> left; TreeNode<K,V> right; TreeNode<K,V> prev; // needed to unlink next upon deletion boolean red; TreeNode(int hash, K key, V val, Node<K,V> next) { super(hash, key, val, next); } ...
There are two situations in which a bin converts back from a red-black tree to a linked list.

The first is when a node is removed from the tree and the tree is left with too few nodes. Below is an excerpt of the comment and code; see HashMap.TreeNode's removeTreeNode method for the details.
...If the current tree appears to have too few nodes, * the bin is converted back to a plain bin. (The test triggers * somewhere between 2 and 6 nodes, depending on tree structure). */ final void removeTreeNode(HashMap<K,V> map, Node<K,V>[] tab, boolean movable) { ... if (root == null || root.right == null || (rl = root.left) == null || rl.left == null) { tab[index] = first.untreeify(map); // too small return; }
The second is after a resize, when entries are redistributed: the nodes of a tree bin may be split into two parts. During the split, each part is checked — if it holds 6 or fewer elements (UNTREEIFY_THRESHOLD) it is rebuilt as a linked list; otherwise it is re-treeified or keeps its original tree structure:
/**
 * Splits nodes in a tree bin into lower and upper tree bins,
 * or untreeifies if now too small. Called only from resize;
 * see above discussion about split bits and indices.
 ...
 */
final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) {
    ...
    // elements whose recomputed hash keeps them at the lower index
    if (loHead != null) {
        if (lc <= UNTREEIFY_THRESHOLD)
            tab[index] = loHead.untreeify(map);
        else {
            tab[index] = loHead;
            if (hiHead != null) // (else is already treeified)
                loHead.treeify(tab);
        }
    }
    // elements whose recomputed hash moves them to the higher index
    if (hiHead != null) {
        if (hc <= UNTREEIFY_THRESHOLD)
            tab[index + bit] = hiHead.untreeify(map);
        else {
            tab[index + bit] = hiHead;
            if (loHead != null)
                hiHead.treeify(tab);
        }
    }
}
/** * Holds cached entrySet(). Note that AbstractMap fields are used * for keySet() and values(). */ transient Set<Map.Entry<K,V>> entrySet;
entrySet is the Set view of all entries; keySet and values, inherited from AbstractMap, are similar. They are rather special collections: they do not, as you might imagine, store a second copy of the HashMap's elements — they store nothing at all, and simply fetch elements through the HashMap's iterators when used.
/** * The number of key-value mappings contained in this map. */ transient int size; /** * The number of times this HashMap has been structurally modified * Structural modifications are those that change the number of mappings in * the HashMap or otherwise modify its internal structure (e.g., * rehash). This field is used to make iterators on Collection-views of * the HashMap fail-fast. (See ConcurrentModificationException). */ transient int modCount; /** * The next size value at which to resize (capacity * load factor). * * @serial */ // (The javadoc description is true upon serialization. // Additionally, if the table array has not been allocated, this // field holds the initial array capacity, or zero signifying // DEFAULT_INITIAL_CAPACITY.) int threshold; /** * The load factor for the hash table. * * @serial */ final float loadFactor;
threshold is the resize trigger: once the number of mappings passes it, a rehash is performed (resize() is called) and the threshold grows. When a HashMap is constructed with an initialCapacity, threshold temporarily holds the smallest power of two that is greater than or equal to initialCapacity; once the table is actually allocated it becomes capacity * loadFactor, and each subsequent resize doubles it.

Now for the centrepiece: the insertion method.
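The rounding to a power of two is done with a small bit trick — tableSizeFor in the JDK 8 sources — reproduced here as a standalone sketch (class name is mine): the OR cascade smears the highest set bit downward so that adding 1 yields the next power of two.

public class TableSizeSketch {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(10)); // 16
        System.out.println(tableSizeFor(16)); // 16
        System.out.println(tableSizeFor(17)); // 32
    }
}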
public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}

final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    // the table is empty: perform the first resize (allocation)
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
    // the slot computed from the hash is unoccupied: point it directly at a new Node
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
    // the slot computed from the hash is occupied
    else {
        Node<K,V> e; K k;
        // the occupying element has the same key: keep a reference to it in e;
        // its value is replaced further down
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            e = p;
        // the occupying element is a TreeNode, i.e. this bin is a red-black tree:
        // call TreeNode.putTreeVal to insert the node
        else if (p instanceof TreeNode)
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        // the occupying element is an ordinary Node: walk to the tail of the list and append
        else {
            for (int binCount = 0; ; ++binCount) {
                if ((e = p.next) == null) {
                    p.next = newNode(hash, key, value, null);
                    // after appending, if the bin has reached the treeify threshold (8),
                    // convert it to a red-black tree
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        treeifyBin(tab, hash);
                    break;
                }
                // an element in the list has the same key: break out; the value is replaced below
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                p = e;
            }
        }
        // the same-key cases all replace the value here and return immediately,
        // since the number of mappings is unchanged
        if (e != null) { // existing mapping for key
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
    }
    ++modCount;
    // if the number of mappings exceeds the threshold, resize
    if (++size > threshold)
        resize();
    afterNodeInsertion(evict);
    return null;
}
(The original post illustrates this put() flow with a flow chart, which is not reproduced here.)
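From the outside, the main thing to observe is put()'s return value, as in this small sketch (class name is mine): null for a new key, the old value when a key is replaced, and putIfAbsent corresponds to the onlyIfAbsent parameter above.

import java.util.HashMap;
import java.util.Map;

public class PutDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();

        // New key: putVal appends a node and returns null
        System.out.println(map.put("a", 1));          // null
        // Existing key: only the value is replaced; the old value is returned
        System.out.println(map.put("a", 2));          // 1
        // putIfAbsent passes onlyIfAbsent = true, so the existing value is kept
        System.out.println(map.putIfAbsent("a", 3));  // 2
        System.out.println(map.get("a"));             // 2
    }
}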
Once the insertion flow is understood, removal and lookup are not hard; the one remaining topic is the red-black tree itself.

A red-black tree is a balanced binary search tree, though not a perfectly balanced one. We would like every lookup to stay within O(log n), but keeping the tree perfectly balanced under dynamic insertion is too expensive, so the requirement is relaxed slightly in exchange for a structure that still completes lookups in logarithmic time. That is where the red-black tree comes in.

A red-black tree must satisfy the following five properties:

Property 1: every node is either red or black.

Every node in the tree is red or black — no other colour; otherwise it wouldn't be called a red-black tree, right?

Property 2: the root node is black.

The root is always black; it can never be red.

Property 3: every leaf (NIL, the empty node) is black.

Property 4: both children of every red node are black (in other words, two consecutive red nodes never occur).

That is, two adjacent nodes — a parent and its child — cannot both be red.

Property 5: every path from any node down to each of its leaves contains the same number of black nodes.
To keep all five properties through every insertion and deletion, you first need the two basic rebalancing operations on a binary search tree, sketched in code right after this pair:

Left rotation

Right rotation
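Here is a minimal rotation sketch on a toy node type of my own (the field names are illustrative, not HashMap.TreeNode's): each rotation re-links a handful of references while preserving the in-order ordering of keys, and right rotation is the mirror image of left rotation.

class RbNode {
    int key;
    boolean red;          // colour flag, as in a red-black tree node
    RbNode left, right, parent;
}

class Rotations {
    // Left-rotate around x: x's right child y takes x's place and x becomes y's left child.
    static RbNode rotateLeft(RbNode root, RbNode x) {
        RbNode y = x.right;
        x.right = y.left;                      // y's left subtree becomes x's right subtree
        if (y.left != null) y.left.parent = x;
        y.parent = x.parent;                   // y replaces x under x's parent
        if (x.parent == null) root = y;
        else if (x == x.parent.left) x.parent.left = y;
        else x.parent.right = y;
        y.left = x;                            // x hangs off y's left side
        x.parent = y;
        return root;                           // the root may have changed
    }

    // Right rotation: the mirror image, with x's left child taking x's place.
    static RbNode rotateRight(RbNode root, RbNode x) {
        RbNode y = x.left;
        x.left = y.right;
        if (y.right != null) y.right.parent = x;
        y.parent = x.parent;
        if (x.parent == null) root = y;
        else if (x == x.parent.right) x.parent.right = y;
        else x.parent.left = y;
        y.right = x;
        x.parent = y;
        return root;
    }
}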
With the basics covered, here is how insertion and deletion work on a red-black tree.

Because the five properties must be preserved: if we inserted a black node we would immediately violate property 5 and need a large-scale adjustment, whereas inserting a red node only violates property 4 when the new node's parent is also red, or property 2 when the new node is the root. So the node to be inserted is always coloured red.

When the parent of the inserted node is black, or the inserted node is the root, it can be inserted directly:

1. The inserted node is the root: insert it directly as a black root.

2. The parent of the inserted node is black: insert the red node directly.

When the parent of the inserted node is red, it cannot simply be inserted; there are three cases to consider (the examples all assume the parent is the left child of the grandparent; the right-child cases are mirror images).
Now for the deletion operation.

First you need to understand deletion in an ordinary binary search tree:

1. If the node being deleted is a leaf, it can be removed directly.

2. If the deleted element has one child, that child can be moved straight into the deleted element's position.

3. If it has two children, swap the deleted element with the smallest node of its right subtree (the leftmost node of that subtree), which we call the successor; after the swap, the element to delete sits where the successor was, and it can then be handled as case 1 or case 2.

From here on, "the deleted element" always refers to the element after this swap has taken place.

Once colours come into play, note that the deleted element and the successor swap values only, never colours.
Now the deletion rules for a red-black tree:

1. If the deleted element is red, none of the five properties are affected; delete it directly.

2. If the deleted element is black and is the root, delete it directly.

3. If the deleted element is black and has a red right child, colour that child black and move it into the deleted element's position.
4. If the deleted element is black, its sibling is black, both of the sibling's children are black, and the parent is red: swap the colours of the sibling and the parent. (NIL elements: every leaf has two empty, black NIL children; treat them as black elements when they are needed and ignore them otherwise.)
5. If the deleted element is black and is its parent's left child, the sibling is black, and the sibling's right child is red: swap the colours of the sibling and the parent, colour the parent black and the sibling's right child black, then left-rotate around the parent.
6. If the deleted element is black and is its parent's left child, the sibling is black, and the sibling's left child is red: first swap the colours of the sibling and its left child and right-rotate around the sibling; this turns the situation into rule 5, so then rotate as in rule 5.
7. If the deleted element is black and is its parent's right child, the handling is the mirror image of cases 5 and 6.
8. If the deleted element is black, the sibling is black, the sibling's children are black, and the parent is black as well: colour the sibling red, then treat the parent as if it were the deleted element (only notionally — it is not actually deleted), see which deletion rule the parent now matches, and continue from there.
9. If the deleted element is black and is its parent's left child, and the sibling is red: swap the colours of the sibling and the parent and left-rotate around the parent. This turns it into case 4, which can then be applied.
That covers deletion. One thing not mentioned above: throughout insertion and deletion, always remember to set the root's colour back to black.

TreeMap implements NavigableMap, which in turn extends SortedMap, adding to Map the ability to query ranges of elements (for example, all elements greater than a given key, or the smallest element). Because it is backed by a red-black tree, its lookup, insertion, and removal operations are guaranteed to complete in O(log n) time, as the class comment states:
* <p>This implementation provides guaranteed log(n) time cost for the * {@code containsKey}, {@code get}, {@code put} and {@code remove} * operations. Algorithms are adaptations of those in Cormen, Leiserson, and * Rivest's <em>Introduction to Algorithms</em>.
On to the fields:
/** * The comparator used to maintain order in this tree map, or * null if it uses the natural ordering of its keys. * * @serial */ private final Comparator<? super K> comparator; private transient Entry<K,V> root; /** * The number of entries in the tree */ private transient int size = 0; /** * The number of structural modifications to the tree. */ private transient int modCount = 0;
Two of these fields matter most. The first is comparator: because TreeMap is an ordered map, the comparator is essential. The second is root which, as the name suggests, is the root reference of the red-black tree — the whole TreeMap is one red-black tree. The basics of red-black trees were covered above, so I won't repeat them here.
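A quick usage sketch of my own showing the ordering and range queries that the red-black tree makes possible:

import java.util.TreeMap;

public class TreeMapDemo {
    public static void main(String[] args) {
        TreeMap<Integer, String> map = new TreeMap<>(); // natural key ordering
        map.put(30, "thirty");
        map.put(10, "ten");
        map.put(20, "twenty");

        // Keys come back in sorted order because of the red-black tree
        System.out.println(map.firstKey());      // 10
        System.out.println(map.lastKey());       // 30

        // Range-style queries from NavigableMap / SortedMap
        System.out.println(map.ceilingKey(15));  // 20 (smallest key >= 15)
        System.out.println(map.headMap(30));     // {10=ten, 20=twenty} (keys < 30)
    }
}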
LinkedHashMap is essentially HashMap plus a linked list that records the insertion order of the entries. So how does it maintain that list? The first thing the source shows is that LinkedHashMap's entry class extends HashMap's and adds before/after references:
static class Entry<K,V> extends HashMap.Node<K,V> { Entry<K,V> before, after; Entry(int hash, K key, V value, Node<K,V> next) { super(hash, key, value, next); } }
So LinkedHashMap's ordinary entries (as opposed to tree entries) differ from the ones HashMap uses directly, yet looking at LinkedHashMap's constructors it simply builds on HashMap for storage and does not override the insertion method — inserting an element goes through HashMap's put. How, then, are the two kinds of node told apart? Let's go back to HashMap's put method:
final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) { Node<K,V>[] tab; Node<K,V> p; int n, i; if ((tab = table) == null || (n = tab.length) == 0) n = (tab = resize()).length; if ((p = tab[i = (n - 1) & hash]) == null) //1 tab[i] = newNode(hash, key, value, null); else { Node<K,V> e; K k; if (p.hash == hash && ((k = p.key) == key || (key != null && key.equals(k)))) e = p; //2 else if (p instanceof TreeNode) e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value); ...
Two places deserve attention; they are marked //1 and //2 in the code above. The first is where an ordinary node (as opposed to a tree node) is inserted: when creating the new node, the method does not call the Node constructor directly but goes through newNode():
// Create a regular (non-tree) node Node<K,V> newNode(int hash, K key, V value, Node<K,V> next) { return new Node<>(hash, key, value, next); }
LinkedHashMap has this method too — in other words, LinkedHashMap overrides newNode() so that the nodes it creates carry the before/after references. The second marked spot, where a tree node is inserted, works the same way: putTreeVal likewise calls newTreeNode():
//HashMap Node<K,V> newNode(int hash, K key, V value, Node<K,V> next) { return new Node<>(hash, key, value, next); } TreeNode<K,V> newTreeNode(int hash, K key, V value, Node<K,V> next) { return new TreeNode<>(hash, key, value, next); } //LinkedHashMap Node<K,V> newNode(int hash, K key, V value, Node<K,V> e) { LinkedHashMap.Entry<K,V> p = new LinkedHashMap.Entry<K,V>(hash, key, value, e); linkNodeLast(p); return p; } TreeNode<K,V> newTreeNode(int hash, K key, V value, Node<K,V> next) { TreeNode<K,V> p = new TreeNode<K,V>(hash, key, value, next); linkNodeLast(p); return p; }
Comparing the two sets of node-creation methods, it is easy to see that besides constructing a different node type, LinkedHashMap's versions also call linkNodeLast(), which simply appends the new node to the end of the linked list.

What about the case where an insertion merely replaces an existing value? Back to the put method:
final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) { ... if (e != null) { // existing mapping for key V oldValue = e.value; if (!onlyIfAbsent || oldValue == null) e.value = value; afterNodeAccess(e); return oldValue; } ...
After the value is replaced, afterNodeAccess() is called. In HashMap this method is empty; LinkedHashMap overrides it:
void afterNodeAccess(Node<K,V> e) { // move node to last LinkedHashMap.Entry<K,V> last; if (accessOrder && (last = tail) != e) { LinkedHashMap.Entry<K,V> p = (LinkedHashMap.Entry<K,V>)e, b = p.before, a = p.after; p.after = null; if (b == null) head = a; else b.after = a; if (a != null) a.before = b; else last = b; if (last == null) head = p; else { p.before = last; last.after = p; } tail = p; ++modCount; } }
The purpose of the method is to move the just-accessed node from the middle of the linked list to the end (when accessOrder is true).

At the very end of an insertion, yet another method, afterNodeInsertion(), is called. Its code:
void afterNodeInsertion(boolean evict) { // possibly remove eldest LinkedHashMap.Entry<K,V> first; if (evict && (first = head) != null && removeEldestEntry(first)) { K key = first.key; removeNode(hash(key), key, null, false, true); } }
The method removes the first node of the linked list, but the if condition is always false (removeEldestEntry() returns false by default) — so what is it for? The answer is in removeEldestEntry()'s comment:
/** * Returns <tt>true</tt> if this map should remove its eldest entry. * This method is invoked by <tt>put</tt> and <tt>putAll</tt> after * inserting a new entry into the map. It provides the implementor * with the opportunity to remove the eldest entry each time a new one * is added. This is useful if the map represents a cache: it allows * the map to reduce memory consumption by deleting stale entries. * * <p>Sample use: this override will allow the map to grow up to 100 * entries and then delete the eldest entry each time a new entry is * added, maintaining a steady state of 100 entries. * <pre> * private static final int MAX_ENTRIES = 100; * * protected boolean removeEldestEntry(Map.Entry eldest) { * return size() > MAX_ENTRIES; * } * </pre>
The comment explains that this hook exists for building caches on top of LinkedHashMap: by overriding removeEldestEntry() to return true under a chosen condition (for example, when the map exceeds a maximum number of entries), stale entries are evicted as new ones are added, keeping memory use down.
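Putting the pieces together, here is a small LRU-cache sketch of my own built exactly this way: the three-argument constructor turns on access order, and removeEldestEntry() evicts the head once the capacity is exceeded.

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // true = access order instead of insertion order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // returning true makes afterNodeInsertion remove the eldest entry
    }

    public static void main(String[] args) {
        LruCache<Integer, String> cache = new LruCache<>(2);
        cache.put(1, "one");
        cache.put(2, "two");
        cache.get(1);           // touch 1 so it becomes most recently used
        cache.put(3, "three");  // evicts 2, the least recently used entry
        System.out.println(cache.keySet()); // [1, 3]
    }
}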
Similarly, when a mapping is removed, another hook is called to unlink the node from the linked list; it is straightforward, so no further commentary:
void afterNodeRemoval(Node<K,V> e) { // unlink LinkedHashMap.Entry<K,V> p = (LinkedHashMap.Entry<K,V>)e, b = p.before, a = p.after; p.before = p.after = null; if (b == null) head = a; else b.after = a; if (a == null) tail = b; else a.before = b; } protected boolean removeEldestEntry(Map.Entry<K,V> eldest) { return false; }