Before version 1.8, ConcurrentHashMap used the idea of segment locking to make locking more fine-grained. JDK 1.8 abandons that approach and instead relies on CAS + synchronized to keep concurrent updates safe, with an underlying storage structure of array + linked list + red-black tree.
ConcurrentHashMap defines the following constants:
// Maximum capacity: 2^30 = 1073741824
private static final int MAXIMUM_CAPACITY = 1 << 30;
// Default initial capacity; must be a power of two
private static final int DEFAULT_CAPACITY = 16;
// Largest possible array size (used by toArray and related methods)
static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
// Default concurrency level; unused, kept only for compatibility with previous versions
private static final int DEFAULT_CONCURRENCY_LEVEL = 16;
// Load factor; kept for compatibility, the actual threshold is computed as n - (n >>> 2)
private static final float LOAD_FACTOR = 0.75f;
// List-to-tree threshold: a bin is treeified once it holds this many (8) nodes
static final int TREEIFY_THRESHOLD = 8;
// Tree-to-list threshold: during transfer the lc/hc counters count the TreeNodes of the
// low/high bins; a bin with <= UNTREEIFY_THRESHOLD (6) nodes is converted back via untreeify(lo)
static final int UNTREEIFY_THRESHOLD = 6;
// Minimum table capacity before bins may be treeified; below this the table is resized instead
static final int MIN_TREEIFY_CAPACITY = 64;
// Minimum number of buckets each thread handles per transfer step
private static final int MIN_TRANSFER_STRIDE = 16;
// Number of bits used for the generation stamp in sizeCtl
private static int RESIZE_STAMP_BITS = 16;
// 2^16 - 1: maximum number of threads that can help resize
private static final int MAX_RESIZERS = (1 << (32 - RESIZE_STAMP_BITS)) - 1;
// 32 - 16 = 16: bit shift for the resize stamp recorded in sizeCtl
private static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;
// Hash value of forwarding nodes
static final int MOVED = -1;
// Hash value of tree root nodes (TreeBin)
static final int TREEBIN = -2;
// Hash value of ReservationNode
static final int RESERVED = -3;
// Number of available processors
static final int NCPU = Runtime.getRuntime().availableProcessors();
Those are the constants defined by ConcurrentHashMap. Next, a few important concepts in ConcurrentHashMap:
table: the array that holds the Node entries; null by default, with a default size of 16, and its length is always a power of two after each resize;
nextTable: the new array created during a resize, twice the size of table;
Node: the node class, the data structure that holds a key-value pair;
ForwardingNode: a special Node with hash -1 that stores a reference to nextTable. It only comes into play while table is being resized, acting as a placeholder in table to indicate that the bucket is empty or has already been moved;
sizeCtl: a control field used to coordinate table initialization and resizing. Its value, and hence its meaning, differs from phase to phase (a small sketch after this list shows how the values can be read):
A negative value means initialization or resizing is in progress
-1 means the table is being initialized
-N indicates that N - 1 threads are performing the resize
A positive value or 0 means the table has not been initialized yet; a positive value is the initial capacity to use or, once the table exists, the threshold that triggers the next resize
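As a quick illustration of these states, here is a small, purely hypothetical helper (describeSizeCtl is not a JDK method) that shows how a given sizeCtl value would be read:
// Hypothetical helper for illustration only; not part of the JDK.
class SizeCtlStates {
    static String describeSizeCtl(int sizeCtl) {
        if (sizeCtl == -1)
            return "a thread is initializing the table";
        if (sizeCtl < -1)
            return "a resize is in progress (the value encodes the resize stamp and helper count)";
        if (sizeCtl == 0)
            return "table not initialized yet; the default capacity (16) will be used";
        return "initial capacity to use, or threshold for the next resize: " + sizeCtl;
    }
}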
To implement ConcurrentHashMap, Doug Lea relies on a number of helper inner classes such as Node, TreeNode, and TreeBin.
Node is the most fundamental and most important inner class of ConcurrentHashMap: it represents a key-value pair, and every entry inserted into a ConcurrentHashMap is wrapped in a Node. It is defined as follows:
static class Node<K,V> implements Map.Entry<K,V> {
final int hash;
final K key;
volatile V val; // volatile, to guarantee visibility
volatile Node<K,V> next; // pointer to the next node, also volatile
Node(int hash, K key, V val, Node<K,V> next) {
this.hash = hash;
this.key = key;
this.val = val;
this.next = next;
}
public final K getKey() { return key; }
public final V getValue() { return val; }
public final int hashCode() { return key.hashCode() ^ val.hashCode(); }
public final String toString(){ return key + "=" + val; }
/** Modifying the value through the entry is not allowed */
public final V setValue(V value) {
throw new UnsupportedOperationException();
}
public final boolean equals(Object o) {
Object k, v, u; Map.Entry<?,?> e;
return ((o instanceof Map.Entry) &&
(k = (e = (Map.Entry<?,?>)o).getKey()) != null &&
(v = e.getValue()) != null &&
(k == key || k.equals(key)) &&
(v == (u = val) || v.equals(u)));
}
/** Assists the get() method */
Node<K,V> find(int h, Object k) {
Node<K,V> e = this;
if (k != null) {
do {
K ek;
if (e.hash == h &&
((ek = e.key) == k || (ek != null && k.equals(ek))))
return e;
} while ((e = e.next) != null);
}
return null;
}
}
In the Node class, both val and next are volatile. The setter for the value is deliberately disabled (setValue throws UnsupportedOperationException), so the value cannot be modified through it directly. Node also provides a find method that backs map.get().
We know that the core data structure of HashMap is the linked list. ConcurrentHashMap differs: when a list grows too long it is converted into a red-black tree. The conversion is not done on the Node list directly; instead the nodes are wrapped into TreeNode objects and placed inside a TreeBin, and the TreeBin performs the red-black tree conversion. TreeNode is therefore also a core class of ConcurrentHashMap; it is the tree node class, defined as follows:
static final class TreeNode<K,V> extends Node<K,V> {
TreeNode<K,V> parent; // red-black tree links
TreeNode<K,V> left;
TreeNode<K,V> right;
TreeNode<K,V> prev; // needed to unlink next upon deletion
boolean red;
TreeNode(int hash, K key, V val, Node<K,V> next,
TreeNode<K,V> parent) {
super(hash, key, val, next);
this.parent = parent;
}
Node<K,V> find(int h, Object k) {
return findTreeNode(h, k, null);
}
// Finds the node whose hash is h and whose key is k
final TreeNode<K,V> findTreeNode(int h, Object k, Class<?> kc) {
if (k != null) {
TreeNode<K,V> p = this;
do {
int ph, dir; K pk; TreeNode<K,V> q;
TreeNode<K,V> pl = p.left, pr = p.right;
if ((ph = p.hash) > h)
p = pl;
else if (ph < h)
p = pr;
else if ((pk = p.key) == k || (pk != null && k.equals(pk)))
return p;
else if (pl == null)
p = pr;
else if (pr == null)
p = pl;
else if ((kc != null ||
(kc = comparableClassFor(k)) != null) &&
(dir = compareComparables(kc, k, pk)) != 0)
p = (dir < 0) ? pl : pr;
else if ((q = pr.findTreeNode(h, k, kc)) != null)
return q;
else
p = pl;
} while (p != null);
}
return null;
}
}
The source shows that TreeNode extends Node and provides findTreeNode to look up the node with hash h and key k.
TreeBin does not wrap key-value pairs itself; it wraps TreeNode nodes when a list is converted into a red-black tree. In other words, what ConcurrentHashMap actually stores in a tree bucket is a TreeBin, not a TreeNode.
The class encapsulates a series of methods, including putTreeVal, lockRoot, unlockRoot, remove, balanceInsertion, and balanceDeletion. Because TreeBin is quite long, only the constructor is shown here (the constructor is exactly the process of building the red-black tree):
static final class TreeBin<K,V> extends Node<K,V> {
TreeNode<K, V> root;
volatile TreeNode<K, V> first;
volatile Thread waiter;
volatile int lockState;
static final int WRITER = 1; // set while holding write lock
static final int WAITER = 2; // set when waiting for write lock
static final int READER = 4; // increment value for setting read lock
TreeBin(TreeNode<K, V> b) {
super(TREEBIN, null, null, null);
this.first = b;
TreeNode<K, V> r = null;
for (TreeNode<K, V> x = b, next; x != null; x = next) {
next = (TreeNode<K, V>) x.next;
x.left = x.right = null;
if (r == null) {
x.parent = null;
x.red = false;
r = x;
} else {
K k = x.key;
int h = x.hash;
Class<?> kc = null;
for (TreeNode<K, V> p = r; ; ) {
int dir, ph;
K pk = p.key;
if ((ph = p.hash) > h)
dir = -1;
else if (ph < h)
dir = 1;
else if ((kc == null &&
(kc = comparableClassFor(k)) == null) ||
(dir = compareComparables(kc, k, pk)) == 0)
dir = tieBreakOrder(k, pk);
TreeNode<K, V> xp = p;
if ((p = (dir <= 0) ? p.left : p.right) == null) {
x.parent = xp;
if (dir <= 0)
xp.left = x;
else
xp.right = x;
r = balanceInsertion(r, x);
break;
}
}
}
}
this.root = r;
assert checkInvariants(root);
}
/** Many methods omitted */
}
The constructor makes the point clear: it is precisely the process of building a red-black tree from the list of TreeNodes.
ForwardingNode is a genuine helper class that only lives while ConcurrentHashMap is resizing. It is merely a marker node that points to nextTable and provides a find method. It also extends Node, with hash -1 and key, value, and next all null. It is defined as follows:
static final class ForwardingNode<K,V> extends Node<K,V> {
final Node<K,V>[] nextTable;
ForwardingNode(Node<K,V>[] tab) {
super(MOVED, null, null, null);
this.nextTable = tab;
}
Node<K,V> find(int h, Object k) {
// loop to avoid arbitrarily deep recursion on forwarding nodes
outer: for (Node<K,V>[] tab = nextTable;;) {
Node<K,V> e; int n;
if (k == null || tab == null || (n = tab.length) == 0 ||
(e = tabAt(tab, (n - 1) & h)) == null)
return null;
for (;;) {
int eh; K ek;
if ((eh = e.hash) == h &&
((ek = e.key) == k || (ek != null && k.equals(ek))))
return e;
if (eh < 0) {
if (e instanceof ForwardingNode) {
tab = ((ForwardingNode<K,V>)e).nextTable;
continue outer;
}
else
return e.find(h, k);
}
if ((e = e.next) == null)
return null;
}
}
}
}
ConcurrentHashMap provides a series of constructors for creating ConcurrentHashMap instances:
public ConcurrentHashMap() {
}
public ConcurrentHashMap(int initialCapacity) {
if (initialCapacity < 0)
throw new IllegalArgumentException();
int cap = ((initialCapacity >= (MAXIMUM_CAPACITY >>> 1)) ?
MAXIMUM_CAPACITY :
tableSizeFor(initialCapacity + (initialCapacity >>> 1) + 1));
this.sizeCtl = cap;
}
public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
this.sizeCtl = DEFAULT_CAPACITY;
putAll(m);
}
public ConcurrentHashMap(int initialCapacity, float loadFactor) {
this(initialCapacity, loadFactor, 1);
}
public ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel) {
if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
throw new IllegalArgumentException();
if (initialCapacity < concurrencyLevel) // Use at least as many bins
initialCapacity = concurrencyLevel; // as estimated threads
long size = (long)(1.0 + (long)initialCapacity / loadFactor);
int cap = (size >= (long)MAXIMUM_CAPACITY) ?
MAXIMUM_CAPACITY : tableSizeFor((int)size);
this.sizeCtl = cap;
}
The initialization of ConcurrentHashMap is mainly performed by initTable(). As the constructors above show, the constructor itself does very little beyond recording some parameters; the real initialization happens lazily at the first insert, for example during put, merge, compute, computeIfAbsent, or computeIfPresent. The method is defined as follows:
private final Node<K,V>[] initTable() {
Node<K,V>[] tab; int sc;
while ((tab = table) == null || tab.length == 0) {
// sizeCtl < 0 means another thread is initializing, so this thread must back off
if ((sc = sizeCtl) < 0)
Thread.yield();
// If this thread wins the right to initialize, CAS sizeCtl to -1 to mark initialization in progress
else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
// Perform the initialization
try {
if ((tab = table) == null || tab.length == 0) {
int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
@SuppressWarnings("unchecked")
Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
table = tab = nt;
// Size threshold for the next resize
sc = n - (n >>> 2); // equivalent to 0.75 * n, used as the resize threshold
}
} finally {
sizeCtl = sc;
}
break;
}
}
return tab;
}
The key to initTable() is sizeCtl. It defaults to 0; if a capacity was passed to the constructor, it holds a power of two. If the value is < 0, another thread is already initializing and the current thread must back off. A thread that wins the initialization right first CASes sizeCtl to -1 to keep other threads out, and finally sets sizeCtl to 0.75 * n, the resize threshold.
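The same idea can be shown in a stripped-down sketch. The class below is not the JDK code: it uses an AtomicInteger in place of sizeCtl and Unsafe, just to illustrate how exactly one thread wins initialization via CAS while the others yield until the array becomes visible:
import java.util.concurrent.atomic.AtomicInteger;

// Simplified sketch of the initTable() idea, not the JDK implementation.
class LazyTable {
    private volatile Object[] table;
    private final AtomicInteger ctl = new AtomicInteger(0); // stands in for sizeCtl

    Object[] initTable() {
        Object[] tab;
        while ((tab = table) == null) {
            int c = ctl.get();
            if (c < 0) {
                Thread.yield();                    // another thread is initializing
            } else if (ctl.compareAndSet(c, -1)) { // win the right to initialize
                try {
                    if ((tab = table) == null) {
                        int n = (c > 0) ? c : 16;  // requested capacity or the default
                        tab = new Object[n];
                        table = tab;
                        c = n - (n >>> 2);         // 0.75 * n, the next resize threshold
                    }
                } finally {
                    ctl.set(c);
                }
                break;
            }
        }
        return tab;
    }
}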
put and get are the most frequently used operations on ConcurrentHashMap. Its put is not fundamentally different from HashMap's: the core idea is still to compute the bucket from the hash; if the bucket is empty, insert directly, otherwise insert into the list or the tree at that bucket. However, ConcurrentHashMap has to deal with multiple threads, which makes things considerably more involved. Let's look at the source first and then walk through it step by step:
public V put(K key, V value) {
return putVal(key, value, false);
}
final V putVal(K key, V value, boolean onlyIfAbsent) {
// Neither key nor value may be null
if (key == null || value == null) throw new NullPointerException();
// Compute the hash
int hash = spread(key.hashCode());
int binCount = 0;
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh;
// table is null: perform initialization
if (tab == null || (n = tab.length) == 0)
tab = initTable();
// If bucket i is empty, insert directly with CAS; no lock is needed
else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
if (casTabAt(tab, i, null,
new Node<K,V>(hash, key, value, null)))
break; // no lock when adding to empty bin
}
// Another thread is resizing: help with the transfer first
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
else {
V oldVal = null;
// Lock the bucket's head node (the head of the list of entries sharing this bucket); this has some performance cost
synchronized (f) {
if (tabAt(tab, i) == f) {
// fh >= 0 means this bucket is a list; append the node at the tail
if (fh >= 0) {
binCount = 1;
for (Node<K,V> e = f;; ++binCount) {
K ek;
// Same hash and same key: replace the value
if (e.hash == hash &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
oldVal = e.val;
//putIfAbsent()
if (!onlyIfAbsent)
e.val = value;
break;
}
Node<K,V> pred = e;
// Reached the tail of the list: insert directly
if ((e = e.next) == null) {
pred.next = new Node<K,V>(hash, key,
value, null);
break;
}
}
}
// Tree bin: insert following the tree insertion procedure
else if (f instanceof TreeBin) {
Node<K,V> p;
binCount = 2;
if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
value)) != null) {
oldVal = p.val;
if (!onlyIfAbsent)
p.val = value;
}
}
}
}
if (binCount != 0) {
// If the list length has reached the threshold (8), convert the list into a tree
if (binCount >= TREEIFY_THRESHOLD)
treeifyBin(tab, i);
if (oldVal != null)
return oldVal;
break;
}
}
}
//size + 1
addCount(1L, binCount);
return null;
}
Based on the source above, the overall put flow is as follows:
Null check: neither key nor value is allowed to be null in ConcurrentHashMap.
Compute the hash, using the following method:
static final int spread(int h) {
return (h ^ (h >>> 16)) & HASH_BITS;
}
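To make the hash step concrete, here is a small standalone sketch (the table size of 16 and the key are assumptions for the example) showing how spread() mixes in the high bits and how the bucket index is then taken with (n - 1) & hash, exactly as in putVal and get:
// Illustrative only: how spread() and the index mask interact.
public class SpreadDemo {
    static final int HASH_BITS = 0x7fffffff; // same mask value used by ConcurrentHashMap

    static int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
    }

    public static void main(String[] args) {
        int n = 16;                     // table length (always a power of two)
        String key = "example";
        int h = spread(key.hashCode());
        int index = (n - 1) & h;        // bucket index, as computed in putVal/get
        System.out.println("hash=" + h + ", bucket=" + index);
    }
}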
Traverse table and insert the node. The steps are: initialize the table if it is null; if the target bucket is empty, insert with CAS; if the head node's hash is MOVED, help with the ongoing resize; otherwise lock the head node and insert into the list or the tree, and treeify the bin if the list length reaches 8.
Call addCount, increasing the size of the ConcurrentHashMap by 1.
At this point the whole put operation is complete.
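A short usage example of the behaviour just described (the printed value follows from the code above):
import java.util.concurrent.ConcurrentHashMap;

public class PutDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);                  // plain put: inserts or overwrites
        map.putIfAbsent("a", 2);          // onlyIfAbsent = true: existing value is kept
        System.out.println(map.get("a")); // prints 1
        // map.put(null, 1);              // would throw NullPointerException: null keys/values are rejected
    }
}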
ConcurrentHashMap的get操做仍是挺簡單的,無非就是經過hash來找key相同的節點而已,固然須要區分鏈表和樹形兩種狀況。
public V get(Object key) {
Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
// Compute the hash
int h = spread(key.hashCode());
if ((tab = table) != null && (n = tab.length) > 0 &&
(e = tabAt(tab, (n - 1) & h)) != null) {
// The first node's key matches the given key and is not null: return its value directly
if ((eh = e.hash) == h) {
if ((ek = e.key) == key || (ek != null && key.equals(ek)))
return e.val;
}
// Negative hash: a tree bin or forwarding node; delegate to its find()
else if (eh < 0)
return (p = e.find(h, key)) != null ? p.val : null;
// Otherwise a list: traverse it
while ((e = e.next) != null) {
if (e.hash == h &&
((ek = e.key) == key || (ek != null && key.equals(ek))))
return e.val;
}
}
return null;
}
The overall logic of get is clear: compute the hash and locate the bucket; if the first node matches, return its value; if its hash is negative, delegate to the node's find method (tree bin or forwarding node); otherwise traverse the list.
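For completeness, a tiny read-side example (getOrDefault is a standard Map method that ConcurrentHashMap also supports):
import java.util.concurrent.ConcurrentHashMap;

public class GetDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("present", 42);
        System.out.println(map.get("present"));             // 42
        System.out.println(map.get("missing"));             // null: key not found
        System.out.println(map.getOrDefault("missing", 0)); // 0: fallback value
    }
}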
The size() method of ConcurrentHashMap returns an approximate value, because other threads may be inserting or removing entries while the count is being taken.
To count the size more effectively, ConcurrentHashMap maintains two helper fields, baseCount and counterCells, plus the helper inner class CounterCell.
@sun.misc.Contended static final class CounterCell {
volatile long value;
CounterCell(long x) { value = x; }
}
// Number of elements in the ConcurrentHashMap, though not necessarily the exact current count; updated lock-free via CAS
private transient volatile long baseCount;
private transient volatile CounterCell[] counterCells;
The key here is to understand the definition of CounterCell shown above.
size() is defined as follows:
public int size() {
long n = sumCount();
return ((n < 0L) ? 0 :
(n > (long)Integer.MAX_VALUE) ? Integer.MAX_VALUE :
(int)n);
}
Internally it calls sumCount():
final long sumCount() {
CounterCell[] as = counterCells; CounterCell a;
long sum = baseCount;
if (as != null) {
for (int i = 0; i < as.length; ++i) {
// Traverse and sum all the counter cells
if ((a = as[i]) != null)
sum += a.value;
}
}
return sum;
}
sumCount() is simply the process of iterating over counterCells and accumulating the sum. Since put clearly affects size(), let's look at how much trouble ConcurrentHashMap goes to for this imprecise size().
At the end of put(), addCount() is called. The method does two things: it updates baseCount, and it checks whether a resize is needed. Here we only look at the baseCount update:
private final void addCount(long x, int check) {
CounterCell[] as; long b, s;
// s = b + x: performs the baseCount increment
if ((as = counterCells) != null ||
!U.compareAndSwapLong(this, BASECOUNT, b = baseCount, s = b + x)) {
CounterCell a; long v; int m;
boolean uncontended = true;
if (as == null || (m = as.length - 1) < 0 ||
(a = as[ThreadLocalRandom.getProbe() & m]) == null ||
!(uncontended =
U.compareAndSwapLong(a, CELLVALUE, v = a.value, v + x))) {
// Executed when the CAS fails under multi-threaded contention
fullAddCount(x, uncontended);
return;
}
if (check <= 1)
return;
s = sumCount();
}
// Check whether to resize (omitted here)
}
Here x == 1. If counterCells == null, the code attempts U.compareAndSwapLong(this, BASECOUNT, b = baseCount, s = b + x); under heavy contention this CAS can fail, and when it does, fullAddCount() is eventually invoked.
To cope with CAS failures on baseCount under high contention while avoiding endless retries, JDK 8 introduced the Striped64 class, on which LongAdder and DoubleAdder are built; CounterCell follows the same design. If counterCells != null and uncontended = U.compareAndSwapLong(a, CELLVALUE, v = a.value, v + x) fails as well, fullAddCount() is likewise called, and finally sumCount() recomputes s.
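The counting idea is the same one exposed publicly by LongAdder. The short demo below (the iteration count is arbitrary) shows the striped-counter pattern that CounterCell mirrors internally:
import java.util.concurrent.atomic.LongAdder;

public class LongAdderDemo {
    public static void main(String[] args) throws InterruptedException {
        LongAdder counter = new LongAdder();
        Runnable task = () -> { for (int i = 0; i < 100_000; i++) counter.increment(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.sum()); // 200000: the sum of the base value plus all cells
    }
}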
In fact, JDK 1.8 discourages size() in favor of mappingCount(), whose definition is essentially the same as size():
public long mappingCount() {
long n = sumCount();
return (n < 0L) ? 0L : n; // ignore transient negative values
}
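A minimal comparison of the two methods:
import java.util.concurrent.ConcurrentHashMap;

public class CountDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
        map.put("k", "v");
        long mappings = map.mappingCount(); // preferred in JDK 8+: returns a long
        int size = map.size();              // capped at Integer.MAX_VALUE
        System.out.println(mappings + " " + size); // 1 1
    }
}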
When the number of elements in the table reaches the threshold (sizeCtl), a resize is needed. At the end of put, addCount(long x, int check) is called; it does two things: 1. update baseCount; 2. check whether a resize is required. As follows:
private final void addCount(long x, int check) {
CounterCell[] as; long b, s;
// Update baseCount (the part shown earlier)
// check >= 0: a resize may be required
if (check >= 0) {
Node<K,V>[] tab, nt; int n, sc;
while (s >= (long)(sc = sizeCtl) && (tab = table) != null &&
(n = tab.length) < MAXIMUM_CAPACITY) {
int rs = resizeStamp(n);
if (sc < 0) {
if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
transferIndex <= 0)
break;
if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
transfer(tab, nt);
}
// The current thread is the only one, i.e. the first to start the resize; nextTable is still null at this point
else if (U.compareAndSwapInt(this, SIZECTL, sc,
(rs << RESIZE_STAMP_SHIFT) + 2))
transfer(tab, null);
s = sumCount();
}
}
}
transfer() is the core method of ConcurrentHashMap's resizing. Because ConcurrentHashMap supports multi-threaded resizing without a global lock, the implementation is somewhat involved. The whole resize consists of two steps: building a nextTable with twice the capacity, and then copying the entries of table into nextTable, bucket by bucket.
Let's look at the source first and then analyze it step by step:
private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
int n = tab.length, stride;
// If the per-core share is below 16, force the stride to 16
if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
stride = MIN_TRANSFER_STRIDE; // subdivide range
if (nextTab == null) { // initiating
try {
@SuppressWarnings("unchecked")
Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1]; // Build the nextTable, with twice the original capacity
nextTab = nt;
} catch (Throwable ex) { // try to cope with OOME
sizeCtl = Integer.MAX_VALUE;
return;
}
nextTable = nextTab;
transferIndex = n;
}
int nextn = nextTab.length;
// Marker node used as a placeholder (fwd has hash MOVED/-1 and fwd.nextTable = nextTab)
ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
// advance == true means the current bucket has been handled and we can move to the next one
boolean advance = true;
boolean finishing = false; // to ensure sweep before committing nextTab
for (int i = 0, bound = 0;;) {
Node<K,V> f; int fh;
// Controls --i, walking the buckets of the old table backwards
while (advance) {
int nextIndex, nextBound;
if (--i >= bound || finishing)
advance = false;
else if ((nextIndex = transferIndex) <= 0) {
i = -1;
advance = false;
}
// Claim a range of buckets by CASing transferIndex
else if (U.compareAndSwapInt
(this, TRANSFERINDEX, nextIndex,
nextBound = (nextIndex > stride ?
nextIndex - stride : 0))) {
bound = nextBound;
i = nextIndex - 1;
advance = false;
}
}
if (i < 0 || i >= n || i + n >= nextn) {
int sc;
// All buckets have been copied
if (finishing) {
nextTable = null;
table = nextTab; // table now points to the new array
sizeCtl = (n << 1) - (n >>> 1); // new threshold is 1.5 * n, i.e. 0.75 of the doubled capacity
return; // exit the loop
}
// CAS sizeCtl down by one: this thread has finished its share of the resize
if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
return;
finishing = advance = true;
i = n; // recheck before commit
}
}
// The bucket is empty: place the ForwardingNode marker there
else if ((f = tabAt(tab, i)) == null)
advance = casTabAt(tab, i, null, fwd);
// f.hash == MOVED (-1) means this is a ForwardingNode: the bucket has already been processed
// This is the key to coordinating concurrent resizing
else if ((fh = f.hash) == MOVED)
advance = true; // already processed
else {
// Lock the head node
synchronized (f) {
// Copy the nodes of this bucket
if (tabAt(tab, i) == f) {
Node<K,V> ln, hn;
// fh >= 0: a list bucket
if (fh >= 0) {
// Split the list into two lists: ln for nodes staying at index i, hn for nodes moving to index i + n
int runBit = fh & n;
Node<K,V> lastRun = f;
for (Node<K,V> p = f.next; p != null; p = p.next) {
int b = p.hash & n;
if (b != runBit) {
runBit = b;
lastRun = p;
}
}
if (runBit == 0) {
ln = lastRun;
hn = null;
}
else {
hn = lastRun;
ln = null;
}
for (Node<K,V> p = f; p != lastRun; p = p.next) {
int ph = p.hash; K pk = p.key; V pv = p.val;
if ((ph & n) == 0)
ln = new Node<K,V>(ph, pk, pv, ln);
else
hn = new Node<K,V>(ph, pk, pv, hn);
}
// Place the low list at index i of nextTable
setTabAt(nextTab, i, ln);
// Place the high list at index i + n of nextTable
setTabAt(nextTab, i + n, hn);
// Place a ForwardingNode at index i of the old table to mark the bucket as processed
setTabAt(tab, i, fwd);
// advance = true allows --i to run, moving on to the next bucket
advance = true;
}
// If the bucket is a TreeBin, handle it as a red-black tree; the logic mirrors the list case above
else if (f instanceof TreeBin) {
TreeBin<K,V> t = (TreeBin<K,V>)f;
TreeNode<K,V> lo = null, loTail = null;
TreeNode<K,V> hi = null, hiTail = null;
int lc = 0, hc = 0;
for (Node<K,V> e = t.first; e != null; e = e.next) {
int h = e.hash;
TreeNode<K,V> p = new TreeNode<K,V>
(h, e.key, e.val, null, null);
if ((h & n) == 0) {
if ((p.prev = loTail) == null)
lo = p;
else
loTail.next = p;
loTail = p;
++lc;
}
else {
if ((p.prev = hiTail) == null)
hi = p;
else
hiTail.next = p;
hiTail = p;
++hc;
}
}
// If a resulting tree has <= 6 nodes after the split, convert it back to a list
ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) :
(hc != 0) ? new TreeBin<K,V>(lo) : t;
hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) :
(lc != 0) ? new TreeBin<K,V>(hi) : t;
setTabAt(nextTab, i, ln);
setTabAt(nextTab, i + n, hn);
setTabAt(tab, i, fwd);
advance = true;
}
}
}
}
}
}
The source above is long and a bit involved. Setting multithreading aside for a moment, from a single thread's point of view the flow is: compute the stride, create nextTable with twice the capacity, and walk the buckets of the old table from the end; for each non-empty bucket, split its nodes into two lists (or trees) according to hash & n, place them at index i and i + n of nextTable, and put a ForwardingNode at index i of the old table; once every bucket has been processed, point table at nextTable and set sizeCtl to 1.5 * n. The sketch below demonstrates the hash & n split.
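The following standalone sketch (with made-up hash values that all share bucket 5 in a table of length 16) demonstrates the (hash & n) test used above to decide whether a node stays at index i or moves to index i + n:
// Demonstrates the (hash & n) split used during transfer; values are illustrative.
public class SplitDemo {
    public static void main(String[] args) {
        int n = 16;                      // old table length
        int[] hashes = {5, 21, 37, 53};  // all map to bucket 5 when the length is 16
        for (int h : hashes) {
            int oldIndex = h & (n - 1);
            int newIndex = ((h & n) == 0) ? oldIndex : oldIndex + n;
            System.out.println("hash=" + h + " old bucket=" + oldIndex + " new bucket=" + newIndex);
        }
    }
}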
In a multithreaded environment, ConcurrentHashMap relies on two things for correctness: ForwardingNode and synchronized. When a thread reaches a bucket whose head is a ForwardingNode, it simply moves on; otherwise it locks the head node to keep other threads out, and once the bucket is done it installs a ForwardingNode so other threads can see that the bucket has been handled. Threads interleave in this way, which is both efficient and safe.
(The original article includes a diagram of the resize process at this point.)
During a put, if f.hash == -1 is detected, a resize is in progress and the current thread will help with it.
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
helpTransfer() is the method for helping with a resize. By the time it is called, nextTable must already have been created, so its job is simply to take part in the copying. As follows:
final Node<K,V>[] helpTransfer(Node<K,V>[] tab, Node<K,V> f) {
Node<K,V>[] nextTab; int sc;
if (tab != null && (f instanceof ForwardingNode) &&
(nextTab = ((ForwardingNode<K,V>)f).nextTable) != null) {
int rs = resizeStamp(tab.length);
while (nextTab == nextTable && table == tab &&
(sc = sizeCtl) < 0) {
if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
sc == rs + MAX_RESIZERS || transferIndex <= 0)
break;
if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1)) {
transfer(tab, nextTab);
break;
}
}
return nextTab;
}
return table;
}
During a put, if the number of elements in a list bucket exceeds TREEIFY_THRESHOLD (8 by default), the list is converted into a red-black tree to improve lookup efficiency. As follows:
if (binCount >= TREEIFY_THRESHOLD)
treeifyBin(tab, i);
treeifyBin is the method that converts the list into a red-black tree.
private final void treeifyBin(Node<K,V>[] tab, int index) {
Node<K,V> b; int n, sc;
if (tab != null) {
if ((n = tab.length) < MIN_TREEIFY_CAPACITY) // if table.length < 64, grow the table instead and return
tryPresize(n << 1);
else if ((b = tabAt(tab, index)) != null && b.hash >= 0) {
synchronized (b) {
if (tabAt(tab, index) == b) {
TreeNode<K,V> hd = null, tl = null;
// Wrap every Node into a TreeNode and put them all into a TreeBin
for (Node<K,V> e = b; e != null; e = e.next) {
TreeNode<K,V> p =
new TreeNode<K,V>(e.hash, e.key, e.val,
null, null); // Only wraps the node here; the tree links (parent/left/right) are set up later by the TreeBin constructor
if ((p.prev = tl) == null)
hd = p;
else
tl.next = p;
tl = p;
}
// Replace the original Node at this index with the new TreeBin
setTabAt(tab, index, new TreeBin<K,V>(hd));
}
}
}
}
}
As the source shows, building the red-black tree happens inside a synchronized block. Once inside, the process is: walk the list at the bucket, wrap each Node into a TreeNode linked through next/prev, then construct a TreeBin from the head (which builds the actual tree) and install it at the bucket in place of the original list.