Recently we kept receiving alerts that CPU usage in production had crossed the threshold, which clearly meant something was wrong. After some digging, when I finally tracked down the culprit, I realized this had been a rather interesting, "atypical" CPU problem, so I'm writing it down here.
Why call it atypical? In my experience, a typical CPU spike comes from an infinite loop in business code, or from some slow RPC blocking a pile of threads, and so on. This time, though, the CPU problem was caused by GC. Interesting.
Let's go through the troubleshooting process.
The first step, obviously, is to find out which threads are using the most CPU, which we can do with top:
top -Hp $pid -b -n 1|sed -n "7,17p"
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
94349 app 20 0 11.2g 5.0g 12m S 15.0 32.1 215:03.69 java
94357 app 20 0 11.2g 5.0g 12m S 15.0 32.1 88:22.39 java
94352 app 20 0 11.2g 5.0g 12m S 13.1 32.1 215:05.71 java
94350 app 20 0 11.2g 5.0g 12m S 11.2 32.1 215:04.86 java
94351 app 20 0 11.2g 5.0g 12m S 11.2 32.1 215:04.99 java
94935 app 20 0 11.2g 5.0g 12m S 11.2 32.1 63:11.75 java
94926 app 20 0 11.2g 5.0g 12m S 9.4 32.1 63:10.58 java
94927 app 20 0 11.2g 5.0g 12m S 5.6 32.1 63:06.89 java
94932 app 20 0 11.2g 5.0g 12m S 5.6 32.1 63:12.65 java
94939 app 20 0 11.2g 5.0g 12m S 5.6 32.1 63:01.75 java
Here $pid is the process ID of our Java process, and sed -n "7,17p" keeps lines 7 through 17: the first several lines of top's batch output are header information, so lines 7 through 17 give us the column header plus the 10 threads of this process that consume the most CPU.
The PID in the first column is the ID of the corresponding JVM thread; all we have to do is find that thread in a jstack dump and we'll know who is up to no good.
Note, however, that the PID shown by top is decimal, while the thread IDs in jstack are hexadecimal, so one more step is needed: convert those PIDs to hex. Here I only converted the 3 threads that consume the most CPU:
[app@linux-v-l-02:/app/tmp/]$printf '%x\n' 94349
1708d
[app@linux-v-l-02:/app/tmp/]$printf '%x\n' 94357
17095
[app@linux-v-l-02:/app/tmp/]$printf '%x\n' 94352
17090
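The printf trick is all you need, but if you happen to have jshell or a scratch class open, the same conversion is trivial from Java as well; this is just an equivalent sketch of the step:

// Decimal OS thread IDs from top, converted to the hexadecimal nid format that jstack prints.
System.out.println(Long.toHexString(94349)); // 1708d
System.out.println(Long.toHexString(94357)); // 17095
System.out.println(Long.toHexString(94352)); // 17090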
Now that we know the thread IDs that are burning CPU, the next step is to find out what those threads actually are.
First, dump all the threads in the JVM with jstack:
[app@linux-v-l-02:/app/tmp/]jstack -l $pid >>/tmp/jstack.txt
One thing worth mentioning: the threads inside the JVM keep changing, and so do the threads shown by top, so if jstack and top are run separately the thread IDs may well not match up. It's best to put the top command and the jstack command into one script and run them together, which is exactly what I did.
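An alternative, if coordinating top and jstack proves awkward, is to sample thread CPU usage from inside the JVM with ThreadMXBean. The sketch below is only a rough illustration of that idea (the class name and the top-10 limit are mine, not from this investigation), and it has one big limitation: it only sees Java threads, so the native GC worker threads that turned out to be the culprits here would not show up; for those you still need top plus jstack.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.Arrays;
import java.util.Comparator;

public class TopJavaThreads {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        if (mx.isThreadCpuTimeSupported()) {
            mx.setThreadCpuTimeEnabled(true);
        }
        // Dump all Java threads and sort them by accumulated CPU time, highest first.
        ThreadInfo[] infos = mx.dumpAllThreads(false, false);
        Arrays.sort(infos, Comparator.comparingLong(
                (ThreadInfo t) -> mx.getThreadCpuTime(t.getThreadId())).reversed());
        for (int i = 0; i < Math.min(10, infos.length); i++) {
            ThreadInfo t = infos[i];
            System.out.printf("%-50s cpu=%d ms%n",
                    t.getThreadName(),
                    mx.getThreadCpuTime(t.getThreadId()) / 1_000_000);
        }
    }
}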
Next, let's find out what 1708d, 17095, and 17090 actually are:
[app@linux-v-l-02:/app/tmp/]$egrep "1708d|17095|17090" jstack.txt --color
"Gang worker#0 (Parallel GC Threads)" os_prio=0 tid=0x00007f4d4c023000 nid=0x1708d runnable
"Gang worker#3 (Parallel GC Threads)" os_prio=0 tid=0x00007f4d4c028800 nid=0x17090 runnable
"G1 Concurrent Refinement Thread#0" os_prio=0 tid=0x00007f4d4c032000 nid=0x17095 runnable
The nid field above is the thread ID in hexadecimal. From the jstack output we can see that, surprisingly, the threads burning the most CPU are all GC threads.
We do monitor the JVM's full GCs, and since this application switched to G1 a full GC has only happened about once a week, so we had always assumed the heap was perfectly healthy. But everything now pointed to the JVM genuinely having a problem.
We have always had GC logging enabled, and sure enough, opening the log revealed a huge number of GC pauses, such as the following:
2019-08-12T20:12:23.002+0800: 501598.612: [GC pause (G1 Humongous Allocation) (young) (initial-mark), 0.0907586 secs]
[Parallel Time: 84.5 ms, GC Workers: 4]
[GC Worker Start (ms): Min: 501598615.0, Avg: 501598615.0, Max: 501598615.0, Diff: 0.1]
[Ext Root Scanning (ms): Min: 4.9, Avg: 5.0, Max: 5.0, Diff: 0.2, Sum: 19.8]
[Update RS (ms): Min: 76.6, Avg: 76.7, Max: 76.7, Diff: 0.1, Sum: 306.7]
[Processed Buffers: Min: 945, Avg: 967.0, Max: 1007, Diff: 62, Sum: 3868]
[Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
[Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
[Object Copy (ms): Min: 2.4, Avg: 2.5, Max: 2.6, Diff: 0.2, Sum: 9.8]
[Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
[Termination Attempts: Min: 1, Avg: 1.0, Max: 1, Diff: 0, Sum: 4]
[GC Worker Other (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.3]
[GC Worker Total (ms): Min: 84.2, Avg: 84.2, Max: 84.2, Diff: 0.1, Sum: 336.7]
[GC Worker End (ms): Min: 501598699.2, Avg: 501598699.2, Max: 501598699.2, Diff: 0.0]
[Code Root Fixup: 0.2 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.1 ms]
[Other: 5.9 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 1.3 ms]
[Ref Enq: 0.1 ms]
[Redirty Cards: 0.1 ms]
[Humongous Register: 0.1 ms]
[Humongous Reclaim: 0.7 ms]
[Free CSet: 0.2 ms]
[Eden: 230.0M(1968.0M)->0.0B(1970.0M) Survivors: 8192.0K->8192.0K Heap: 1693.6M(4096.0M)->1082.1M(4096.0M)]
[Times: user=0.34 sys=0.00, real=0.10 secs]
2019-08-12T20:12:23.094+0800: 501598.703: [GC concurrent-root-region-scan-start]
2019-08-12T20:12:23.101+0800: 501598.711: [GC concurrent-root-region-scan-end, 0.0076353 secs]
2019-08-12T20:12:23.101+0800: 501598.711: [GC concurrent-mark-start]
2019-08-12T20:12:23.634+0800: 501599.243: [GC concurrent-mark-end, 0.5323465 secs]
2019-08-12T20:12:23.639+0800: 501599.249: [GC remark 2019-08-12T20:12:23.639+0800: 501599.249: [Finalize Marking, 0.0019652 secs] 2019-08-12T20:12:23.641+0800: 501599.251: [GC ref-proc, 0.0027393 secs] 2019-08-12T20:12:23.644+0800: 501599.254: [Unloading, 0.0307159 secs], 0.0366784 secs]
[Times: user=0.13 sys=0.00, real=0.04 secs]
2019-08-12T20:12:23.682+0800: 501599.291: [GC cleanup 1245M->1226M(4096M), 0.0041762 secs]
[Times: user=0.02 sys=0.00, real=0.01 secs]
2019-08-12T20:12:23.687+0800: 501599.296: [GC concurrent-cleanup-start]
2019-08-12T20:12:23.687+0800: 501599.296: [GC concurrent-cleanup-end, 0.0000487 secs]
2019-08-12T20:12:30.022+0800: 501605.632: [GC pause (G1 Humongous Allocation) (young) (to-space exhausted), 0.3849037 secs]
[Parallel Time: 165.7 ms, GC Workers: 4]
[GC Worker Start (ms): Min: 501605635.2, Avg: 501605635.2, Max: 501605635.3, Diff: 0.1]
[Ext Root Scanning (ms): Min: 3.5, Avg: 3.8, Max: 4.4, Diff: 0.9, Sum: 15.2]
[Update RS (ms): Min: 135.5, Avg: 135.8, Max: 136.0, Diff: 0.5, Sum: 543.3]
[Processed Buffers: Min: 1641, Avg: 1702.2, Max: 1772, Diff: 131, Sum: 6809]
[Scan RS (ms): Min: 1.5, Avg: 1.6, Max: 1.6, Diff: 0.0, Sum: 6.2]
[Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
[Object Copy (ms): Min: 24.1, Avg: 24.4, Max: 24.6, Diff: 0.4, Sum: 97.4]
[Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.1]
[Termination Attempts: Min: 1, Avg: 1.0, Max: 1, Diff: 0, Sum: 4]
[GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
[GC Worker Total (ms): Min: 165.6, Avg: 165.6, Max: 165.6, Diff: 0.0, Sum: 662.4]
[GC Worker End (ms): Min: 501605800.8, Avg: 501605800.9, Max: 501605800.9, Diff: 0.0]
[Code Root Fixup: 0.2 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.3 ms]
[Other: 218.7 ms]
[Evacuation Failure: 210.1 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 1.5 ms]
[Ref Enq: 0.1 ms]
[Redirty Cards: 0.3 ms]
[Humongous Register: 0.2 ms]
[Humongous Reclaim: 2.2 ms]
[Free CSet: 0.2 ms]
[Eden: 666.0M(1970.0M)->0.0B(204.0M) Survivors: 8192.0K->0.0B Heap: 2909.5M(4096.0M)->1712.4M(4096.0M)]
[Times: user=1.44 sys=0.00, real=0.39 secs]
2019-08-12T20:12:32.225+0800: 501607.834: [GC pause (G1 Evacuation Pause) (mixed), 0.0800708 secs]
[Parallel Time: 74.8 ms, GC Workers: 4]
[GC Worker Start (ms): Min: 501607835.5, Avg: 501607835.6, Max: 501607835.6, Diff: 0.1]
[Ext Root Scanning (ms): Min: 3.7, Avg: 4.0, Max: 4.4, Diff: 0.6, Sum: 16.2]
[Update RS (ms): Min: 67.8, Avg: 68.0, Max: 68.1, Diff: 0.3, Sum: 272.0]
[Processed Buffers: Min: 863, Avg: 899.8, Max: 938, Diff: 75, Sum: 3599]
One annoying thing about G1 logs is that they are extremely verbose and easy to get lost in. To make the discussion easier to follow, I stripped out the uninteresting parts and condensed the log above into the following three snippets:
2019-08-12T20:12:23.002+0800: 501598.612: [GC pause (G1 Humongous Allocation) (young) (initial-mark), 0.0907586 secs]
[Parallel Time: 84.5 ms, GC Workers: 4]
[GC Worker Start (ms): Min: 501598615.0, Avg: 501598615.0, Max: 501598615.0, Diff: 0.1]
......
[Eden: 230.0M(1968.0M)->0.0B(1970.0M) Survivors: 8192.0K->8192.0K Heap: 1693.6M(4096.0M)->1082.1M(4096.0M)]
[Times: user=0.34 sys=0.00, real=0.10 secs]
2019-08-12T20:12:23.094+0800: 501598.703: [GC concurrent-root-region-scan-start]
2019-08-12T20:12:23.101+0800: 501598.711: [GC concurrent-root-region-scan-end, 0.0076353 secs]
2019-08-12T20:12:23.101+0800: 501598.711: [GC concurrent-mark-start]
2019-08-12T20:12:23.634+0800: 501599.243: [GC concurrent-mark-end, 0.5323465 secs]
2019-08-12T20:12:23.639+0800: 501599.249: [GC remark 2019-08-12T20:12:23.639+0800: 501599.249: [Finalize Marking, 0.0019652 secs] 2019-08-12T20:12:23.641+0800: 501599.251: [GC ref-proc, 0.0027393 secs] 2019-08-12T20:12:23.644+0800: 501599.254: [Unloading, 0.0307159 secs], 0.0366784 secs]
[Times: user=0.13 sys=0.00, real=0.04 secs]
2019-08-12T20:12:23.682+0800: 501599.291: [GC cleanup 1245M->1226M(4096M), 0.0041762 secs]
[Times: user=0.02 sys=0.00, real=0.01 secs]
2019-08-12T20:12:23.687+0800: 501599.296: [GC concurrent-cleanup-start]
2019-08-12T20:12:23.687+0800: 501599.296: [GC concurrent-cleanup-end, 0.0000487 secs]
2019-08-12T20:12:30.022+0800: 501605.632: [GC pause (G1 Humongous Allocation) (young) (to-space exhausted), 0.3849037 secs]
[Parallel Time: 165.7 ms, GC Workers: 4]
[GC Worker Start (ms): Min: 501605635.2, Avg: 501605635.2, Max: 501605635.3, Diff: 0.1]
......
[Eden: 666.0M(1970.0M)->0.0B(204.0M) Survivors: 8192.0K->0.0B Heap: 2909.5M(4096.0M)->1712.4M(4096.0M)]
[Times: user=1.44 sys=0.00, real=0.39 secs]
2019-08-12T20:12:32.225+0800: 501607.834: [GC pause (G1 Evacuation Pause) (mixed), 0.0800708 secs]
[Parallel Time: 74.8 ms, GC Workers: 4]
[GC Worker Start (ms): Min: 501607835.5, Avg: 501607835.6, Max: 501607835.6, Diff: 0.1]
......
This condensed log is much easier to read, and it already reveals at least 3 problems:
1. Every one of these pauses is triggered by "G1 Humongous Allocation", meaning the application keeps asking for objects bigger than half a G1 region.
2. The second pause ends with "to-space exhausted" and a 210 ms Evacuation Failure: there were not enough free regions to copy surviving objects into, one of the most expensive situations G1 can get into.
3. Within about ten seconds we see a complete young (initial-mark) collection, a concurrent marking cycle, and then a mixed collection, so the whole marking machinery is being kicked off over and over.
On top of that there is an even more serious problem that this excerpt cannot show: logs like these were being produced extremely frequently, at peak times roughly once every 2 seconds.
From the GC logs alone we can already conclude, roughly, that the application keeps new-ing humongous objects.
So what counts as a humongous object?
Usually it's something like a large local List. jmap -histo shows which classes occupy the most heap memory and how many instances of each exist.
So let's start with jmap -histo $pid and see how objects are distributed in the heap:
 num     #instances         #bytes  class name
----------------------------------------------
   1:          1120     1032796420  [B
   2:        838370      105246813  [C
   3:        117631       55348463  [I
   4:        352652       31033457  java.lang.reflect.Method
   5:        665505       13978410  java.lang.String
   6:        198567       12368412  [Ljava.lang.Object;
   7:        268941        9467465  java.util.HashMap$Node
   8:        268941        8064567  java.util.TreeMap$Entry
   9:        268941        7845665  java.lang.reflect.Field
  10:        268941        7754771  [Ljava.util.HashMap$Node;
....
通常來講, 若是運氣好, 並且業務代碼有問題的, 一般能在jmap -histo裏面看到咱們業務涉及到的類名的.
可是很惋惜, 這裏沒有.
然而, 聰明的同窗可能一眼就看出了這個堆實際上是頗有問題的.
咱們看排行第一的[B (byte數組), 佔用了堆大小1032796420(1G左右), 而instances卻只有1120多個, 簡單地一相除, 竟然每一個對象能有1M大小!
很明顯, 這就是咱們要找的大對象了, 可是隻知道是一些byte數組, 並不知道是什麼數組, 因此還須要進一步排查
Why does 1 MB already count as humongous? Our heap is only 4 GB, and G1 aims for at most 2048 regions, so each region is 2 MB. Whenever G1 is asked to allocate an object larger than half a region, it treats it as a humongous object and allocates it in dedicated humongous regions rather than the young generation; that is exactly the "G1 Humongous Allocation" we saw in the log.
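To make that arithmetic concrete, here is a tiny sketch of the threshold calculation. The 2048-region target is the G1 default; the heap size and the 1048600-byte allocation are simply this application's numbers:

public class HumongousThreshold {
    public static void main(String[] args) {
        long heapBytes = 4L * 1024 * 1024 * 1024;  // -Xmx4g
        long regionSize = heapBytes / 2048;         // G1 aims for ~2048 regions => 2 MB regions
        long threshold = regionSize / 2;            // anything larger than half a region is humongous

        int allocation = 1_048_600;                 // the byte[] size found in the heap dump
        System.out.println("region size         = " + regionSize);  // 2097152
        System.out.println("humongous threshold = " + threshold);   // 1048576
        System.out.println("1048600 humongous?    " + (allocation > threshold)); // true
    }
}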
The command jmap -dump:format=b,file=head.hprof $pid dumps the entire JVM heap. Looking at the dump directly on the command line rarely tells you much; download it and analyze it with a proper tool. There are plenty to choose from: jvisualvm, JProfiler, MAT, and so on.
Here I used jvisualvm: open jvisualvm ==> File ==> Load ==> select the head.hprof we just downloaded, then click "Classes" and sort by size, which produces the figure below.
You can see that byte arrays make up only 0.9% of the instances in the heap yet 30% of the memory, which again says that every single instance is a large object.
Next, double-click the "byte[]" row to see the details of these arrays. Many of the instances are 1048600 bytes, i.e. just about 1 MB. That still doesn't reveal what the arrays contain, so let's export one of them to a local file, as shown below:
After exporting it, open it in Sublime Text for a first look, as in the screenshot below.
You can see that the actual content of the array is only about 1 KB (count the non-zero bytes at the front); everything after that is meaningless zeros. We still can't tell which piece of code created the array, but we can already make a fairly confident guess at the cause: somewhere, code news a byte array of 1048600 bytes while the real payload is only about 1 KB, so all the unfilled slots keep their default value of 0.
Finally, to confirm the guess, we simply used
String str = new String(bytearr, "UTF-8");
System.out.println("str = [" + str + "]");
to print out the array contents. The output looks roughly like this (most of it omitted):
str = [p C0+org.apache.camel.impl.DefaultExchangeHolder�
exchangeIdinFaultFlagoutFaultFlag exceptioninBodyoutBody inHeaders
outHeaders
remoteOptRolefaceUrlversionfromIfromUserfailFaceUrl
.....
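Incidentally, the same check can be scripted instead of eyeballed in an editor. This is only a rough sketch; it assumes the array was exported from jvisualvm to a local file, and the bytearr.bin path is made up for the example:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class InspectExportedArray {
    public static void main(String[] args) throws Exception {
        // The byte[] instance exported from jvisualvm (hypothetical file name).
        byte[] bytearr = Files.readAllBytes(Paths.get("bytearr.bin"));

        // Find the last non-zero byte; everything after it is just the unused tail of the buffer.
        int last = bytearr.length - 1;
        while (last >= 0 && bytearr[last] == 0) {
            last--;
        }
        System.out.println("array length      = " + bytearr.length);
        System.out.println("effective content = " + (last + 1) + " bytes");

        // The same dump as above, limited to the meaningful prefix.
        System.out.println("str = [" + new String(bytearr, 0, last + 1, StandardCharsets.UTF_8) + "]");
    }
}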
Searching the codebase for those keywords finally turned up the real culprit:
data = DefaultExchangeHolder.marshal(exchange, false);
baos = new ByteArrayOutputStream(1048600); // here is the culprit
hessianOut = new Hessian2Output(baos);
hessianOut.startMessage();
hessianOut.writeObject(data);
hessianOut.completeMessage();
hessianOut.flush();
exchangeData = baos.toByteArray();
And here is ByteArrayOutputStream's constructor:
public ByteArrayOutputStream(int size) {
    if (size < 0) {
        throw new IllegalArgumentException("Negative initial size: "
                                           + size);
    }
    buf = new byte[size];
}
So, before every Hessian serialization, the code was creating a 1 MB byte array, which is where the flood of humongous objects came from. The array is only used as a buffer, and "The buffer automatically grows as data is written to it" (as the JDK javadoc puts it), so there was never any need to pre-size it that large.
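As for the fix, since the buffer grows on demand (roughly doubling whenever it runs out of room), it is enough to let it start small or size it to the typical payload. The sketch below shows one way the serialization could look: it keeps the original Camel and Hessian calls, but the class name, the method signature, and the 1 KB initial size are my own assumptions based on the ~1 KB payloads we saw in the dump, not the actual patch:

import java.io.ByteArrayOutputStream;

import org.apache.camel.Exchange;
import org.apache.camel.impl.DefaultExchangeHolder;

import com.caucho.hessian.io.Hessian2Output;

public class ExchangeSerializer {

    // Typical payload in the heap dump was around 1 KB, so a small initial buffer is plenty;
    // ByteArrayOutputStream will grow it automatically if a larger exchange comes along.
    private static final int INITIAL_BUFFER_SIZE = 1024;

    public static byte[] serialize(Exchange exchange) throws Exception {
        DefaultExchangeHolder data = DefaultExchangeHolder.marshal(exchange, false);

        ByteArrayOutputStream baos = new ByteArrayOutputStream(INITIAL_BUFFER_SIZE);
        Hessian2Output hessianOut = new Hessian2Output(baos);
        hessianOut.startMessage();
        hessianOut.writeObject(data);
        hessianOut.completeMessage();
        hessianOut.flush();
        return baos.toByteArray();
    }
}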