Recently the project showed a steady, regular increase in memory usage while running.
The first suspect was the local cache. To cut down on round trips to Redis, hot data that changes infrequently, and whose staleness within one cache window is acceptable, is read from Redis into memory as a second-level cache; the in-memory TTL ranges from 30s to 10min depending on the data volume. Shortening these in-memory TTLs appropriately recovered part of the memory.
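To make the second-level cache idea concrete, here is a minimal sketch, not the project's actual code: the `loader` function stands in for whatever call reads the value from Redis, and the TTL would be the 30s–10min window mentioned above.

```go
package cache

import (
	"sync"
	"time"
)

// entry is one locally cached value together with its expiry time.
type entry struct {
	value    string
	expireAt time.Time
}

// LocalCache is a minimal in-memory second-level cache in front of Redis.
// loader is a placeholder for the Redis read (hypothetical, for illustration).
type LocalCache struct {
	mu     sync.RWMutex
	ttl    time.Duration
	data   map[string]entry
	loader func(key string) (string, error)
}

func New(ttl time.Duration, loader func(string) (string, error)) *LocalCache {
	return &LocalCache{ttl: ttl, data: make(map[string]entry), loader: loader}
}

// Get returns the locally cached value while it is still fresh;
// otherwise it reloads from Redis and caches the result for ttl.
func (c *LocalCache) Get(key string) (string, error) {
	c.mu.RLock()
	e, ok := c.data[key]
	c.mu.RUnlock()
	if ok && time.Now().Before(e.expireAt) {
		return e.value, nil
	}

	v, err := c.loader(key)
	if err != nil {
		return "", err
	}
	c.mu.Lock()
	c.data[key] = entry{value: v, expireAt: time.Now().Add(c.ttl)}
	c.mu.Unlock()
	return v, nil
}
```

The shorter the TTL, the fewer entries sit in memory at any moment, which is exactly the trade-off tuned above.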
Still, this cache accounts for only a small share of the usage; the rest has to be analyzed separately.
For that we use net/http/pprof. Its handlers can be registered on custom HTTP routes; the standard paths and handlers are listed below, followed by a small registration sketch.
path | handler |
---|---|
/debug/pprof | pprof.Index |
/debug/pprof/cmdline | pprof.Cmdline |
/debug/pprof/profile | pprof.Profile |
/debug/pprof/symbol | pprof.Symbol |
/debug/pprof/trace | pprof.Trace |
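A minimal sketch of registering these handlers on a custom http.ServeMux might look like this (the :1601 port matches the go tool pprof command used below; adapt the mux or router to your own setup):

```go
package main

import (
	"log"
	"net/http"
	"net/http/pprof"
)

func main() {
	mux := http.NewServeMux()

	// Business routes would be registered here as usual.

	// Register the pprof handlers on the custom mux, mirroring the table above.
	// pprof.Index also serves the named profiles under /debug/pprof/ (heap, goroutine, ...).
	mux.HandleFunc("/debug/pprof/", pprof.Index)
	mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
	mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
	mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
	mux.HandleFunc("/debug/pprof/trace", pprof.Trace)

	log.Fatal(http.ListenAndServe(":1601", mux))
}
```

If the service uses http.DefaultServeMux, a blank import (`import _ "net/http/pprof"`) registers the same routes automatically; the explicit registration above is only needed for a custom mux.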
On the machine running the service, install graphviz (pprof uses it to render the generated profile data):

yum install graphviz

Once it is installed, start the service and run:

go tool pprof ./mm-go localhost:1601/debug/pprof/heap

Then type top10 at the (pprof) prompt to list the 10 functions using the most memory. The output looks like this:
Fetching profile from http://localhost:8080/debug/pprof/heap
Saved profile in /root/pprof/pprof.mm-go.localhost:1601.alloc_objects.alloc_space.inuse_objects.inuse_space.006.pb.gz
Entering interactive mode (type "help" for commands)
(pprof) top10
12369.14kB of 12369.14kB total ( 100%)
Dropped 69 nodes (cum <= 61.84kB)
Showing top 10 nodes out of 23 (cum >= 512.02kB)
      flat  flat%   sum%        cum   cum%
10320.21kB 83.44% 83.44% 10320.21kB 83.44%  mm.com/priceServer.Worker.Start.func1.1
...
The detailed numbers are laid out in the table below. For each function (func) you can see how much memory it allocates itself (flat), that amount as a share of the total (flat%), the running total (sum%), and the memory allocated by the function together with everything it calls (cum, cum%).
flat | flat% | sum% | cum | cum% | func |
---|---|---|---|---|---|
10320.21kB | 83.44% | 83.44% | 10320.21kB | 83.44% | mm.com/priceServer.Worker.Start.func1.1 |
1024.41kB | 8.28% | 91.72% | 1024.41kB | 8.28% | runtime.malg |
512.50kB | 4.14% | 95.86% | 512.50kB | 4.14% | runtime.allocm |
512.02kB | 4.14% | 100% | 512.02kB | 4.14% | runtime.rawstringtmp |
0 | 0% | 100% | 512.02kB | 4.14% | encoding/json.(*decodeState).literal |
0 | 0% | 100% | 512.02kB | 4.14% | encoding/json.(*decodeState).literalStore |
0 | 0% | 100% | 512.02kB | 4.14% | encoding/json.(*decodeState).object |
0 | 0% | 100% | 512.02kB | 4.14% | encoding/json.(*decodeState).unmarshal |
0 | 0% | 100% | 512.02kB | 4.14% | encoding/json.(*decodeState).value |
0 | 0% | 100% | 512.02kB | 4.14% | encoding/json.Unmarshal |
With these results in hand, look into the functions holding the most memory and optimize them specifically.