Investigating an abnormal master departure in ElasticSearch (repost — exactly the situation I ran into myself)

Reposted from http://simonlei.iteye.com/blog/1669992

The cluster had four nodes in total, with es4 as the master. At some point es1 suddenly dropped out of the cluster. The investigation proceeded as follows:

 

[es@bigdata1 logs]$ date; ssh bd4 date

2012-09-03 Mon 09:41:26 CST

2012-09-03 Mon 09:41:00 CST

 

es4's clock runs 26 seconds behind es1's, so the log timestamps below have been normalized to es1's time.
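To compare the two logs, every raw es4 timestamp needs 26 seconds added before it lines up with es1's clock. A minimal sketch of the conversion (the raw timestamp below is hypothetical, chosen only for illustration):

```python
from datetime import datetime, timedelta

skew = timedelta(seconds=26)  # es4's clock runs 26s behind es1's

# Hypothetical raw timestamp as it would appear in es4's log
es4_raw = datetime(2012, 9, 1, 4, 43, 26)

# Normalize to es1's clock before comparing the two logs
es1_equivalent = es4_raw + skew
print(es1_equivalent)  # → 2012-09-01 04:43:52
```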

 

In es4's log:

 

Log excerpt:
[2012-09-01 04:43:52,174][INFO ][cluster.service ] [es4] removed {[es1][VMFIvWAhRQuYoCcPIeedIQ][inet[/192.168.2.11:9300]],}, reason: zen-disco-node_failed([es1][VMFIvWAhRQuYoCcPIeedIQ][inet[/192.168.2.11:9300]]), reason failed to ping, tried [3] times, each with maximum [30s] timeout

 

 

By this point, es4 had already retried 3 times at 30s each, which means es1 had stopped responding 90 seconds earlier, i.e. at around 04:42:22. What was happening on es1 during that window?
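The 90-second figure falls straight out of the fault-detection parameters the log message itself reports (3 retries, 30s timeout each). Working backwards from the removal timestamp:

```python
from datetime import datetime, timedelta

removed_at = datetime(2012, 9, 1, 4, 43, 52)  # when es4 logged the removal of es1

retries, ping_timeout_s = 3, 30               # values reported in the log line
detection_window = timedelta(seconds=retries * ping_timeout_s)

# es1 must have stopped responding roughly one full detection window earlier
failure_began = removed_at - detection_window
print(failure_began.time())  # → 04:42:22
```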

 

 

Log excerpt:
[2012-09-01 04:41:07,849][INFO ][monitor.jvm ] [es1] [gc][PS Scavenge][145683][1813] duration [5.9s], collections [1]/[6.5s], total [5.9s]/[46.1m], memory [21.8gb]->[15.7gb]/[23.3gb], all_pools {[Code Cache] [12.3mb]->[12.3mb]/[48mb]}{[PS Eden Space] [6.1gb]->[2.5mb]/[6.5gb]}{[PS Survivor Space] [289.5mb]->[214.6mb]/[820.4mb]}{[PS Old Gen] [15.3gb]->[15.5gb]/[16.2gb]}{[PS Perm Gen] [33.1mb]->[33.1mb]/[82mb]}
[2012-09-01 04:41:31,398][INFO ][monitor.jvm ] [es1] [gc][PS Scavenge][145701][1814] duration [5.8s], collections [1]/[6.5s], total [5.8s]/[46.2m], memory [22gb]->[15.8gb]/[23.3gb], all_pools {[Code Cache] [12.3mb]->[12.3mb]/[48mb]}{[PS Eden Space] [6.2gb]->[932.2kb]/[6.5gb]}{[PS Survivor Space] [214.6mb]->[259.8mb]/[815.8mb]}{[PS Old Gen] [15.5gb]->[15.6gb]/[16.2gb]}{[PS Perm Gen] [33.1mb]->[33.1mb]/[82mb]}
[2012-09-01 04:43:59,621][INFO ][monitor.jvm ] [es1] [gc][PS Scavenge][145745][1817] duration [8.9s], collections [1]/[1.6m], total [8.9s]/[46.5m], memory [22.8gb]->[2.1gb]/[23.3gb], all_pools {[Code Cache] [12.3mb]->[12.3mb]/[48mb]}{[PS Eden Space] [6.5gb]->[53.4kb]/[6.7gb]}{[PS Survivor Space] [318.4mb]->[0b]/[737.2mb]}{[PS Old Gen] [15.9gb]->[2.1gb]/[16.2gb]}{[PS Perm Gen] [33.1mb]->[33.1mb]/[82mb]}
[2012-09-01 04:43:59,621][WARN ][monitor.jvm ] [es1] [gc][PS MarkSweep][145745][14] duration [1.4m], collections [1]/[1.6m], total [1.4m]/[4.9m], memory [22.8gb]->[2.1gb]/[23.3gb], all_pools {[Code Cache] [12.3mb]->[12.3mb]/[48mb]}{[PS Eden Space] [6.5gb]->[53.4kb]/[6.7gb]}{[PS Survivor Space] [318.4mb]->[0b]/[737.2mb]}{[PS Old Gen] [15.9gb]->[2.1gb]/[16.2gb]}{[PS Perm Gen] [33.1mb]->[33.1mb]/[82mb]}
[2012-09-01 04:44:00,795][WARN ][transport.netty ] [es1] Exception caught on netty layer [[id: 0x3b3b2835, /192.168.2.14:55393 => /192.168.2.11:9300]]
java.io.IOException: 斷開的管道 (Broken pipe)
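When sifting through monitor.jvm output like the above, it helps to extract the collector name and pause duration mechanically rather than by eye. A rough sketch (the regex is my own, tailored to the log format shown here):

```python
import re

line = ("[2012-09-01 04:43:59,621][WARN ][monitor.jvm ] [es1] "
        "[gc][PS MarkSweep][145745][14] duration [1.4m], "
        "collections [1]/[1.6m], total [1.4m]/[4.9m]")

# Pull the collector name and the pause duration out of a monitor.jvm line
m = re.search(r"\[gc\]\[(?P<collector>[^\]]+)\].*?duration \[(?P<duration>[^\]]+)\]",
              line)
if m:
    print(m.group("collector"), m.group("duration"))  # → PS MarkSweep 1.4m
```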

 

 

 

During this window, es1 went through GC pauses of 5.9s, 5.8s, 8.9s, and 1.4 minutes. The last one in particular, at 1.4 minutes, is close to the full 90s detection window. That GC pause almost certainly left es1 unresponsive, which is why it was kicked out of the cluster.

 

 

Interestingly, once es1 noticed that master es4 was gone, it elected es3 as the new master, but es3 immediately failed as well. The logs:

 

Log excerpt:
[2012-09-01 04:44:03,756][INFO ][discovery.zen ] [es1] master_left [[es4][uAc5D7jkT_esrUTEBMmATQ][inet[/192.168.2.14:9300]]], reason [do not exists on master, act as master failure]
[2012-09-01 04:44:04,631][INFO ][cluster.service ] [es1] master {new [es3][F4mB4HYkTfu9umD8CSkcnw][inet[/192.168.2.13:9300]], previous [es4][uAc5D7jkT_esrUTEBMmATQ][inet[/192.168.2.14:9300]]}, removed {[es4][uAc5D7jkT_esrUTEBMmATQ][inet[/192.168.2.14:9300]],}, reason: zen-disco-master_failed ([es4][uAc5D7jkT_esrUTEBMmATQ][inet[/192.168.2.14:9300]])
[2012-09-01 04:44:06,782][INFO ][discovery.zen ] [es1] master_left [[es3][F4mB4HYkTfu9umD8CSkcnw][inet[/192.168.2.13:9300]]], reason [no longer master]
[2012-09-01 04:44:06,782][INFO ][cluster.service ] [es1] master {new [es1][VMFIvWAhRQuYoCcPIeedIQ][inet[/192.168.2.11:9300]], previous [es3][F4mB4HYkTfu9umD8CSkcnw][inet[/192.168.2.13:9300]]}, removed {[es3][F4mB4HYkTfu9umD8CSkcnw][inet[/192.168.2.13:9300]],}, reason: zen-disco-master_failed ([es3][F4mB4HYkTfu9umD8CSkcnw][inet[/192.168.2.13:9300]])

 

 

Let's see what was happening on es3 at the time:

[es@bigdata1 logs]$ date; ssh bd3 date

2012-09-03 Mon 09:51:14 CST

2012-09-03 Mon 09:51:11 CST

 

Their clocks differ by only 3 seconds. es3's log reads:

 

 

Log excerpt:
[2012-09-01 04:43:45,985][INFO ][cluster.service ] [es3] removed {[es1][VMFIvWAhRQuYoCcPIeedIQ][inet[/192.168.2.11:9300]],}, reason: zen-disco-receive(from master [[es4][uAc5D7jkT_esrUTEBMmATQ][inet[/192.168.2.14:9300]]])
[2012-09-01 04:44:12,631][DEBUG][action.admin.indices.status] [es3] [xabindex3][2], node[hLoNg9d0T1GYz1tqUSJwUA], [R], s[INITIALIZING]: Failed to execute [org.elasticsearch.action.admin.indices.status.IndicesStatusRequest@50d8a381]
org.elasticsearch.transport.RemoteTransportException: [es2][inet[/192.168.2.12:9300]][indices/status/s]

 

 

Oddly, nothing at all happened on es3 around 04:44:06. It seems es3 simply ignored es1, so es1 gave up on es3 as well and made itself the master of its own standalone cluster.
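es1 promoting itself to master is the classic split-brain symptom. One guard worth noting (not part of this post's fix, but a standard zen discovery setting) is requiring a quorum of master-eligible nodes before any node may act as master; with 4 master-eligible nodes the quorum is 3, so an isolated es1 could never have elected itself:

```yaml
# elasticsearch.yml (a sketch; assumes all 4 nodes are master-eligible)
discovery.zen.minimum_master_nodes: 3
```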

 

 

Now that the cause has been found, how do we fix it? Two ideas:

1. Shorten GC pauses as much as possible, even if that means running more collections, as long as no single pause is too long.

2. Tune the zen discovery configuration: increase both the fault-detection timeout and the number of retries.

The first option is the more involved one; the second is simpler, so let's start there and try raising the retry count to 6.
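In elasticsearch.yml that change would look something like the following (setting names from the zen fault-detection module; the retry value is the one proposed above):

```yaml
# elasticsearch.yml: zen fault detection tuning (a sketch)
discovery.zen.fd.ping_timeout: 30s   # per-ping timeout (the default)
discovery.zen.fd.ping_retries: 6     # raised from the default of 3
```

With 6 retries at 30s each, a node now has up to roughly 3 minutes to recover from a long pause before it is removed, comfortably beyond the 1.4-minute GC seen here.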
