Last week a colleague wrote some test code for ConcurrentHashMap and told me it ran out of memory after putting only two or three elements into the map. I looked over his code and the JVM flags he ran with, found it very odd, and decided to dig in myself. First, the code:
```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapTest {

    public static void main(String[] args) {
        System.out.println("Before allocate map, free memory is " + Runtime.getRuntime().freeMemory() / (1024 * 1024) + "M");
        Map<String, String> map = new ConcurrentHashMap<String, String>(2000000000);
        System.out.println("After allocate map, free memory is " + Runtime.getRuntime().freeMemory() / (1024 * 1024) + "M");

        int i = 0;
        try {
            while (i < 1000000) {
                System.out.println("Before put the " + (i + 1) + " element, free memory is " + Runtime.getRuntime().freeMemory() / (1024 * 1024) + "M");
                map.put(String.valueOf(i), String.valueOf(i));
                System.out.println("After put the " + (i + 1) + " element, free memory is " + Runtime.getRuntime().freeMemory() / (1024 * 1024) + "M");
                i++;
            }
        } catch (Throwable t) {
            // OutOfMemoryError is an Error, not an Exception, so catch Throwable
            t.printStackTrace();
        } finally {
            System.out.println("map size is " + map.size());
        }
    }

}
```
Run it with the JVM flags -Xms512m -Xmx512m, and the output is:
```
Before allocate map, free memory is 483M
After allocate map, free memory is 227M
Before put 0 element, free memory is 227M
java.lang.OutOfMemoryError: Java heap space
	at java.util.concurrent.ConcurrentHashMap.ensureSegment(ConcurrentHashMap.java:746)
	at java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:1129)
	at com.test.MapTest.main(MapTest.java:21)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
map size is 0
```
The original code had none of these log statements, and at first it was baffling why putting a single element into the map would throw an OutOfMemoryError.

So I added the logging above and found that creating the map already consumed over 200M, and that over 200M was still free right before the put, yet the put itself threw the OutOfMemoryError. That made it even stranger: it is understandable that initializing the map takes some space, but why would putting one tiny element trigger an OutOfMemoryError?
Step 1: change Map<String, String> map = new ConcurrentHashMap<String, String>(2000000000); in the code above so that it no longer passes the huge initial capacity; this time the program ran normally. (This did not find the root cause, but it does suggest two rules for using ConcurrentHashMap: 1. prefer not to specify an initial capacity; 2. if you do need one, pick a reasonable value.)
Step 2: run again with the JVM flags -Xms512m -Xmx512m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=D:\gc.hprof. The last two flags make the JVM write a heap snapshot when the program runs out of memory, so that we can analyze the heap. Opening the resulting gc.hprof in Eclipse Memory Analyzer showed that ConcurrentHashMap's HashEntry objects occupied most of the heap (notes on the Eclipse Memory Analyzer analysis itself to be added). At this point we know which objects take up the space, but not yet why.
Step 3: read the ConcurrentHashMap source. First, its structure: the map is made up of multiple Segments (each Segment holds its own lock, which is how ConcurrentHashMap achieves thread safety; that is not the focus here), and each Segment contains a HashEntry array. That HashEntry array is where the problem lies.
Step 4: look at ConcurrentHashMap's constructor. Line 27 of the listing below shows that it eagerly allocates Segment[0]'s HashEntry array with length cap, and here cap works out to 67108864.
How cap is computed (you can verify this by stepping through the constructor in a debugger):

1) initialCapacity is 2,000,000,000, but MAXIMUM_CAPACITY (the largest capacity ConcurrentHashMap supports) is 1 << 30, i.e. 2^30 = 1,073,741,824; since initialCapacity exceeds MAXIMUM_CAPACITY, it is clamped to 1,073,741,824.

2) c = initialCapacity / ssize = 1,073,741,824 / 16 = 67,108,864 (ssize is 16 with the default concurrencyLevel of 16).

3) cap is the smallest power of two greater than or equal to c, which is 67,108,864 itself (2^26).
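The three steps above can be reproduced with a small standalone sketch. The constants are copied from the JDK 7 source; `segmentCap` is a helper name of my own, not a JDK method:

```java
public class CapCalc {
    static final int MAXIMUM_CAPACITY = 1 << 30;      // from JDK 7 ConcurrentHashMap
    static final int MIN_SEGMENT_TABLE_CAPACITY = 2;  // from JDK 7 ConcurrentHashMap

    // Mirrors the sizing arithmetic in the JDK 7 constructor.
    static int segmentCap(int initialCapacity, int concurrencyLevel) {
        int ssize = 1;
        while (ssize < concurrencyLevel)
            ssize <<= 1;                              // power of two >= concurrencyLevel
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;       // clamp 2,000,000,000 -> 1,073,741,824
        int c = initialCapacity / ssize;
        if (c * ssize < initialCapacity)
            ++c;
        int cap = MIN_SEGMENT_TABLE_CAPACITY;
        while (cap < c)
            cap <<= 1;                                // smallest power of two >= c
        return cap;
    }

    public static void main(String[] args) {
        // The default concurrencyLevel is 16.
        System.out.println(segmentCap(2000000000, 16));  // prints 67108864
    }
}
```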
```java
 1 public ConcurrentHashMap(int initialCapacity,
 2                          float loadFactor, int concurrencyLevel) {
 3     if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
 4         throw new IllegalArgumentException();
 5     if (concurrencyLevel > MAX_SEGMENTS)
 6         concurrencyLevel = MAX_SEGMENTS;
 7     // Find power-of-two sizes best matching arguments
 8     int sshift = 0;
 9     int ssize = 1;
10     while (ssize < concurrencyLevel) {
11         ++sshift;
12         ssize <<= 1;
13     }
14     this.segmentShift = 32 - sshift;
15     this.segmentMask = ssize - 1;
16     if (initialCapacity > MAXIMUM_CAPACITY)
17         initialCapacity = MAXIMUM_CAPACITY;
18     int c = initialCapacity / ssize;
19     if (c * ssize < initialCapacity)
20         ++c;
21     int cap = MIN_SEGMENT_TABLE_CAPACITY;
22     while (cap < c)
23         cap <<= 1;
24     // create segments and segments[0]
25     Segment<K,V> s0 =
26         new Segment<K,V>(loadFactor, (int)(cap * loadFactor),
27                          (HashEntry<K,V>[])new HashEntry[cap]);
28     Segment<K,V>[] ss = (Segment<K,V>[])new Segment[ssize];
29     UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
30     this.segments = ss;
31 }
```
Step 5: look at ConcurrentHashMap's put method. When an element is put:

1) It computes the key's hash and checks whether the segment that key maps to has already been initialized; if so, the put proceeds on that segment.

2) If not, ensureSegment() is called, and ensureSegment() allocates a new HashEntry array whose length equals the length of the first initialized Segment's table, as lines 18 and 19 of the listing show.
```java
 1 public V put(K key, V value) {
 2     Segment<K,V> s;
 3     if (value == null)
 4         throw new NullPointerException();
 5     int hash = hash(key);
 6     int j = (hash >>> segmentShift) & segmentMask;
 7     if ((s = (Segment<K,V>)UNSAFE.getObject          // nonvolatile; recheck
 8          (segments, (j << SSHIFT) + SBASE)) == null) //  in ensureSegment
 9         s = ensureSegment(j);
10     return s.put(key, hash, value, false);
11 }
12
13 private Segment<K,V> ensureSegment(int k) {
14     final Segment<K,V>[] ss = this.segments;
15     long u = (k << SSHIFT) + SBASE; // raw offset
16     Segment<K,V> seg;
17     if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) {
18         Segment<K,V> proto = ss[0]; // use segment 0 as prototype
19         int cap = proto.table.length;
20         float lf = proto.loadFactor;
21         int threshold = (int)(cap * lf);
22         HashEntry<K,V>[] tab = (HashEntry<K,V>[])new HashEntry[cap];
23         if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
24             == null) { // recheck
25             Segment<K,V> s = new Segment<K,V>(lf, threshold, tab);
26             while ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
27                    == null) {
28                 if (UNSAFE.compareAndSwapObject(ss, u, null, seg = s))
29                     break;
30             }
31         }
32     }
33     return seg;
34 }
```
Step 6: now the cause can be pinned down. Putting one element creates another HashEntry array of length 67108864, and that array alone occupies 67108864 * 4 bytes = 256M, which matches the test output above. The constructor had already allocated one such 256M array for segments[0], so when the first put maps to a different segment, ensureSegment tries to allocate a second one and the 512M heap is exhausted. Many readers may wonder why an array of references takes this much space: allocating the array creates, on the heap, an array of 67108864 HashEntry references (each initially referring to null), and every reference occupies 4 bytes (see http://www.importnew.com/18878.html).
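The arithmetic works out as follows (assuming 4-byte references, i.e. compressed oops, which HotSpot uses by default on heaps under 32 GB):

```java
public class FootprintCalc {
    public static void main(String[] args) {
        long refs = 67108864L;  // length of each segment's HashEntry[]
        long bytesPerRef = 4;   // assumption: compressed oops
        long oneTableMb = refs * bytesPerRef / (1024 * 1024);
        System.out.println(oneTableMb + "M per table");  // prints 256M per table
        // segments[0]'s table from the constructor, plus the table
        // ensureSegment allocates on the first put into another segment:
        System.out.println(2 * oneTableMb + "M total");  // prints 512M total
    }
}
```

Two tables already equal the entire -Xmx512m heap, which is why the very first put fails.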
1. When you create a ConcurrentHashMap with an initial capacity, make it a sensible one. (That said, in a small test of my own, specifying an initial capacity showed no measurable performance gain, so evaluate it for your own workload.)
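As a sketch of what "a sensible value" means here: size the map around the number of entries you actually expect. The 1,000,000 below matches the loop in the test program at the top; it is my illustrative choice, not a general rule:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SaneInit {
    public static void main(String[] args) {
        // Capacity in the same ballpark as the expected entry count,
        // instead of an Integer.MAX_VALUE-scale number.
        Map<String, String> map = new ConcurrentHashMap<String, String>(1_000_000);
        for (int i = 0; i < 1_000_000; i++) {
            map.put(String.valueOf(i), String.valueOf(i));
        }
        System.out.println(map.size());  // prints 1000000
    }
}
```

With this capacity each segment's table stays small (65,536 slots with the default concurrencyLevel of 16), so the program runs comfortably inside the same 512M heap.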
This is my first blog post, and it no doubt has many shortcomings. If you spot any mistakes, please point them out. Thank you!