ConcurrentHashMap Out-of-Memory Problem

Foreword

  Last week a colleague wrote some ConcurrentHashMap test code and told me the map ran out of memory after only 32 elements had been put into it. I took a quick look at his code and the JVM parameters he was running with, found it rather odd, and started digging into it myself. First, the code:

public class MapTest {

    public static void main(String[] args) {
        System.out.println("Before allocate map, free memory is " + Runtime.getRuntime().freeMemory()/(1024*1024) + "M");
        Map<String, String> map = new ConcurrentHashMap<String, String>(2000000000);
        System.out.println("After allocate map, free memory is " + Runtime.getRuntime().freeMemory()/(1024*1024) + "M");

        int i = 0;
        try {
            while (i < 1000000) {
                System.out.println("Before put the " + (i + 1) + " element, free memory is " + Runtime.getRuntime().freeMemory()/(1024*1024) + "M");
                map.put(String.valueOf(i), String.valueOf(i));
                System.out.println("After put the " + (i + 1) + " element, free memory is " + Runtime.getRuntime().freeMemory()/(1024*1024) + "M");
                i++;
            }
        } catch (Exception e) {
            e.printStackTrace();
        } catch (Throwable t) {
            t.printStackTrace();
        } finally {
            System.out.println("map size is " + map.size());
        }
    }

}

Run it with the JVM options -Xms512m -Xmx512m, and the output is:

Before allocate map, free memory is 120M
After allocate map, free memory is 121M
Before put the 1 element, free memory is 121M
After put the 1 element, free memory is 121M
Before put the 2 element, free memory is 121M
After put the 2 element, free memory is 122M
Before put the 3 element, free memory is 122M
After put the 3 element, free memory is 122M
Before put the 4 element, free memory is 122M
After put the 4 element, free memory is 122M
Before put the 5 element, free memory is 122M
After put the 5 element, free memory is 114M
Before put the 6 element, free memory is 114M
java.lang.OutOfMemoryError: Java heap space
map size is 5
    at java.util.concurrent.ConcurrentHashMap.ensureSegment(Unknown Source)
    at java.util.concurrent.ConcurrentHashMap.put(Unknown Source)
    at com.j.u.c.tj.MapTest.main(MapTest.java:17)

  The original code did not have any of these log statements, and at the time it seemed very strange: why would an OutOfMemoryError be thrown after putting just one element into the map?

  So I added the log statements above and found that a large chunk of the 512 MB heap was already gone by the time the map had been created, that there was still over a hundred MB free right before each put, and that it was the put itself that threw the OutOfMemoryError. That made it even stranger: it is understandable that initializing the map takes some space, but why would putting one tiny element into it blow up the heap?

 

Troubleshooting

 1. Step one: change Map<String, String> map = new ConcurrentHashMap<String, String>(2000000000); in the code above to Map<String, String> map = new ConcurrentHashMap<String, String>();. With that change the program runs normally. (This does not explain the root cause, but it does suggest two rules of thumb for ConcurrentHashMap: first, prefer not to specify an initial capacity at all; second, if you do need one, pick a value that matches your expected size.)

   2. Step two: run the original code again with the JVM options -Xms1024m -Xmx1024m. The same failure still occurs.

   3. Step three: read the ConcurrentHashMap source. First, a quick look at its structure: a ConcurrentHashMap is made up of multiple Segments (each Segment has its own lock, which is how ConcurrentHashMap provides thread safety, but that is not the focus here), and each Segment holds an array of HashEntry objects. The problem lies in this HashEntry array, as the sketch below illustrates.
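
To make the structure concrete, here is a minimal sketch of the JDK 7 layout described above. It is not the real JDK source: the class names mirror the JDK's internal ones, but the hashing, locking and resizing logic are all omitted.

public class SegmentedMapSketch {

    // One bucket entry: a key/value pair plus a link to the next entry in the same bucket.
    static class HashEntry<K, V> {
        final K key;
        volatile V value;
        volatile HashEntry<K, V> next;
        HashEntry(K key, V value, HashEntry<K, V> next) {
            this.key = key;
            this.value = value;
            this.next = next;
        }
    }

    // One segment: a small hash table guarded by its own lock (the lock is omitted here).
    static class Segment<K, V> {
        final HashEntry<K, V>[] table;
        @SuppressWarnings("unchecked")
        Segment(int cap) {
            table = (HashEntry<K, V>[]) new HashEntry[cap];
        }
    }

    public static void main(String[] args) {
        int ssize = 16; // number of segments; the default concurrencyLevel is 16
        int cap = 2;    // per-segment table length; MIN_SEGMENT_TABLE_CAPACITY is 2
        @SuppressWarnings("unchecked")
        Segment<String, String>[] segments = (Segment<String, String>[]) new Segment[ssize];
        segments[0] = new Segment<String, String>(cap); // JDK 7 eagerly creates only segments[0]
        System.out.println("segments: " + segments.length
                + ", segments[0] table length: " + segments[0].table.length);
    }
}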

 

 

4. Step four: look at the ConcurrentHashMap constructor. It eagerly initializes the HashEntry array of segments[0], and the length of that array is cap, which works out to 67,108,864 here.

    How cap is computed (you can step through the constructor in a debugger to confirm this):

      1) initialCapacity is 2,000,000,000, while MAXIMUM_CAPACITY (the largest capacity ConcurrentHashMap supports) is 1 << 30, i.e. 2^30 = 1,073,741,824. Since initialCapacity is larger than MAXIMUM_CAPACITY, it is clamped to initialCapacity = 1,073,741,824.

      2) c is computed as initialCapacity / ssize = 1,073,741,824 / 16 = 67,108,864 (ssize is 16 with the default concurrencyLevel).

      3) cap is the first power of two greater than or equal to c, which is again 67,108,864.

public ConcurrentHashMap(int initialCapacity,
                             float loadFactor, int concurrencyLevel) {
        if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
            throw new IllegalArgumentException();
        if (concurrencyLevel > MAX_SEGMENTS)
            concurrencyLevel = MAX_SEGMENTS;
        // Find power-of-two sizes best matching arguments
        int sshift = 0;
        int ssize = 1;
        while (ssize < concurrencyLevel) {
            ++sshift;
            ssize <<= 1;
        }
        this.segmentShift = 32 - sshift;
        this.segmentMask = ssize - 1;
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        int c = initialCapacity / ssize;
        if (c * ssize < initialCapacity)
            ++c;
        int cap = MIN_SEGMENT_TABLE_CAPACITY;
        while (cap < c)
            cap <<= 1;
        // create segments and segments[0]
        Segment<K,V> s0 =
            new Segment<K,V>(loadFactor, (int)(cap * loadFactor),
                             (HashEntry<K,V>[])new HashEntry[cap]);
        Segment<K,V>[] ss = (Segment<K,V>[])new Segment[ssize];
        UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
        this.segments = ss;
    }
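
As a sanity check, the following small snippet reproduces the calculation above for the parameters used in this article (initialCapacity = 2,000,000,000 with the default concurrencyLevel of 16); it is just the constructor's arithmetic pulled out on its own, with the constant values written in directly.

public class CapCalc {
    public static void main(String[] args) {
        int initialCapacity = 2_000_000_000;
        int concurrencyLevel = 16;            // JDK 7 default
        final int MAXIMUM_CAPACITY = 1 << 30; // 1,073,741,824

        // Round the segment count up to a power of two (16 stays 16).
        int ssize = 1;
        while (ssize < concurrencyLevel)
            ssize <<= 1;

        // Clamp the requested capacity to MAXIMUM_CAPACITY.
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;

        // Per-segment capacity, rounded up to a power of two (minimum 2).
        int c = initialCapacity / ssize;
        if (c * ssize < initialCapacity)
            ++c;
        int cap = 2; // MIN_SEGMENT_TABLE_CAPACITY
        while (cap < c)
            cap <<= 1;

        System.out.println("ssize = " + ssize + ", c = " + c + ", cap = " + cap);
        // Prints: ssize = 16, c = 67108864, cap = 67108864
    }
}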

 5. Step five: look at ConcurrentHashMap's put method. When an element is put:

    1) It computes the hash of the key and checks whether the Segment that key maps to has already been created; if it has, the normal put proceeds.

    2) If it has not, ensureSegment() is called, and ensureSegment() allocates a new HashEntry array whose length is the same as that of the first Segment's HashEntry array (it uses segments[0] as the prototype).

public V put(K key, V value) {
        Segment<K,V> s;
        if (value == null)
            throw new NullPointerException();
        int hash = hash(key);
        int j = (hash >>> segmentShift) & segmentMask;
        if ((s = (Segment<K,V>)UNSAFE.getObject          // nonvolatile; recheck
             (segments, (j << SSHIFT) + SBASE)) == null) //  in ensureSegment
            s = ensureSegment(j);
        return s.put(key, hash, value, false);
    }

    private Segment<K,V> ensureSegment(int k) {
        final Segment<K,V>[] ss = this.segments;
        long u = (k << SSHIFT) + SBASE; // raw offset
        Segment<K,V> seg;
        if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) {
            Segment<K,V> proto = ss[0]; // use segment 0 as prototype
            int cap = proto.table.length;
            float lf = proto.loadFactor;
            int threshold = (int)(cap * lf);
            HashEntry<K,V>[] tab = (HashEntry<K,V>[])new HashEntry[cap];
            if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
                == null) { // recheck
                Segment<K,V> s = new Segment<K,V>(lf, threshold, tab);
                while ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
                       == null) {
                    if (UNSAFE.compareAndSwapObject(ss, u, null, seg = s))
                        break;
                }
            }
        }
        return seg;
    }

 

  6. At this point the cause can be pinned down: the put created a HashEntry array of length 67,108,864, and that array alone occupies 67,108,864 * 4 bytes = 256 MB, which matches the test results above. Why does an array of references take so much space? Consider what array initialization actually does: it creates, on the heap, an array of HashEntry references of length 67,108,864, and each reference (all of them still pointing to null) occupies 4 bytes.
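
A quick back-of-the-envelope check of that number, assuming 4-byte references (compressed oops, the HotSpot default for heaps of this size; with plain 8-byte references the array would be twice as large):

public class SegmentTableFootprint {
    public static void main(String[] args) {
        long cap = 67_108_864L;          // per-segment HashEntry[] length computed above
        long refSize = 4;                // bytes per reference, assuming compressed oops
        long tableBytes = cap * refSize; // the references alone, all still null

        System.out.println("One segment table: " + tableBytes / (1024 * 1024) + " MB");
        // Prints: One segment table: 256 MB

        // segments[0] already holds one such table from the constructor; the failing
        // put (see the ensureSegment frame in the stack trace) lands in a segment that
        // does not exist yet, so it tries to allocate a second 256 MB table; together
        // that is more than a -Xmx512m heap can hold.
        System.out.println("Two segment tables: " + (2 * tableBytes) / (1024 * 1024) + " MB");
    }
}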

 

Summary

  1. Give ConcurrentHashMap a sensible initial capacity when you construct it (that said, in a small test of my own, specifying an initial capacity did not produce a noticeable performance gain). A sketch of the safer options is shown below.
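
As a takeaway, here is a sketch of the two safer ways to construct the map from the troubleshooting above, assuming the test only ever needs about 1,000,000 entries (the loop bound in the original test):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapInitSketch {
    public static void main(String[] args) {
        // Option 1: let the map size itself (the default initial capacity is 16).
        Map<String, String> lazy = new ConcurrentHashMap<String, String>();

        // Option 2: if the expected size is known, pass that instead of an
        // arbitrarily huge number; 1,000,000 matches the test's loop bound.
        Map<String, String> sized = new ConcurrentHashMap<String, String>(1_000_000);

        lazy.put("k", "v");
        sized.put("k", "v");
        System.out.println(lazy.size() + " " + sized.size());
    }
}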
