When Logstash parses a large volume of logs and writes them to Elasticsearch, ES may stop responding to requests, constrained by the number of back-end nodes, the amount of data, and disk performance.
Problem description:
[2018-04-12T17:02:16,861][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://x.x.x.x:9200/][Manticore::SocketTimeout] Read timed out {:url=>http://x.x.x.x:9200/, :error_message=>"Elasticsearch Unreachable: [http://x.x.x.x:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool
Solution:
1) Run Logstash separately from the ES cluster, since both can use a lot of CPU resources. Do not deploy Logstash and Elasticsearch on the same machine; Logstash parsing consumes a lot of CPU.
2) Run more than one node in the ES cluster, so that Logstash can fail over to the other ES nodes when one becomes unreachable. Add more Elasticsearch data nodes (see the first config sketch after this list).
3) Put Kafka in front of Elasticsearch as the log buffer queue and control the consumption rate, so that Elasticsearch can write at a stable, sustainable pace (see the second sketch after this list).
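For point 2, a minimal sketch of a Logstash elasticsearch output that lists several data nodes, so the output's connection pool can fall back to surviving hosts when one is marked dead. The host names and index pattern are placeholders, not values from the original post:

    output {
      elasticsearch {
        # Hypothetical data-node addresses; list every data node so the
        # connection pool has alternatives when one host is marked dead.
        hosts => ["es-data-1:9200", "es-data-2:9200", "es-data-3:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
      }
    }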
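For point 3, a rough sketch of buffering through Kafka, assuming a shipper pipeline that produces to a hypothetical topic "app-logs" and an indexer pipeline that consumes it and writes to ES. The broker addresses, topic, group id, and thread count are illustrative only:

    # Shipper side: publish parsed events to Kafka instead of ES directly.
    output {
      kafka {
        bootstrap_servers => "kafka-1:9092,kafka-2:9092"
        topic_id => "app-logs"
        codec => json
      }
    }

    # Indexer side: consume from Kafka at a controlled rate and write to ES.
    input {
      kafka {
        bootstrap_servers => "kafka-1:9092,kafka-2:9092"
        topics => ["app-logs"]
        group_id => "logstash-indexer"
        consumer_threads => 2      # tune to what the ES cluster can absorb
        codec => json
      }
    }
    output {
      elasticsearch {
        hosts => ["es-data-1:9200", "es-data-2:9200", "es-data-3:9200"]
      }
    }

With Kafka in between, a slow or unresponsive ES node only delays consumption; the logs stay queued in Kafka instead of blocking the shippers or being dropped.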