When Elasticsearch executes queries, some of them consume a lot of resources and respond very slowly, so ES needs to monitor slow queries and identify those slow requests. ES requests fall into two main types, search and indexing, and ES provides a slow log for each of them.
The slow search log configuration records slow searches (both the query and fetch phases) and writes them to a dedicated log file. This configuration only takes effect for shards on the current node.
# vim /etc/elasticsearch/elasticsearch.yml
# fetch-phase slow log
index.search.slowlog.threshold.fetch.warn: 1s
index.search.slowlog.threshold.fetch.info: 200ms
index.search.slowlog.threshold.fetch.debug: 60ms
index.search.slowlog.threshold.fetch.trace: 50ms
# query-phase slow log
index.search.slowlog.threshold.query.warn: 1s
index.search.slowlog.threshold.query.debug: 500ms
The log levels (warn, info, debug, trace) let you control which entries get recorded. Not every level has to be logged; the benefit of multiple levels is that you can reduce the log volume by level and focus only on the entries the business cares about. Logging is done at the shard level, meaning it covers the execution of a request on a specific shard rather than the whole search request that may be broadcast to multiple shards; the advantage of shard-level logging is that it can be correlated with the actual execution on a particular machine.
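These thresholds can also be read back through the settings API to confirm they are in effect. A minimal sketch with curl, assuming an index named my_index and ES listening on localhost:9200 (both are placeholders for your own environment):

# list the search slow log thresholds currently applied to the index
curl -XGET 'http://localhost:9200/my_index/_settings/index.search.slowlog.*?pretty'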
The indexing slow log works much like the search slow log. Its log file ends with _index_indexing_slowlog.log, and its levels and thresholds are also configured in elasticsearch.yml.
# vim /etc/elasticsearch/elasticsearch.yml
index.indexing.slowlog.threshold.index.warn: 10s
index.indexing.slowlog.threshold.index.info: 5s
index.indexing.slowlog.threshold.index.debug: 2s
index.indexing.slowlog.threshold.index.trace: 500ms
index.indexing.slowlog.level: info
index.indexing.slowlog.source: 1000
By default, ES records the first 1000 characters of _source in the slow log. This can be changed with index.indexing.slowlog.source: setting it to false or 0 skips logging the source entirely, while setting it to true logs the whole source no matter how large it is.
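Since this is an index-level setting, the recorded source length can also be changed at runtime instead of editing elasticsearch.yml. A minimal sketch with curl; my_index and the value 4000 are only illustrative:

# record up to 4000 characters of _source in the indexing slow log
curl -XPUT 'http://localhost:9200/my_index/_settings' -H 'Content-Type: application/json' -d '
{
  "index.indexing.slowlog.source": 4000
}'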
After the search and indexing slow logs are enabled in elasticsearch.yml, they also need to be configured in logging.yml, as follows:
# vim /etc/elasticsearch/logging.yml
logger:
  index.search.slowlog: TRACE, index_search_slow_log_file
  index.indexing.slowlog: TRACE, index_indexing_slow_log_file

additivity:
  index.search.slowlog: true
  index.indexing.slowlog: true
  deprecation: false

appender:
  index_search_slow_log_file:
    type: dailyRollingFile                                         # log type, one file per day
    file: ${path.logs}/${cluster.name}_index_search_slowlog.log    # file naming pattern
    datePattern: "'.'yyyy-MM-dd"                                   # suffix of the daily rollover
    layout:
      type: pattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"         # prefix format of each log line

  index_indexing_slow_log_file:
    type: dailyRollingFile
    file: ${path.logs}/${cluster.name}_index_indexing_slowlog.log
    datePattern: "'.'yyyy-MM-dd"
    layout:
      type: pattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
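logging.yml is only read at startup, so the node has to be restarted before these appenders take effect; afterwards the two slow log files show up under the ES log directory. A minimal sketch, assuming an RPM/DEB install managed by systemd and the default log path /var/log/elasticsearch (adjust to your environment):

systemctl restart elasticsearch
# entries only appear once a request actually crosses one of the thresholds
tail -f /var/log/elasticsearch/*_index_search_slowlog.log \
        /var/log/elasticsearch/*_index_indexing_slowlog.log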
These are index-level settings, which means they can be applied to an individual index:
PUT /my_index/_settings
{
    "index.search.slowlog.threshold.query.warn": "10s",     # log a WARN entry for queries slower than 10 seconds
    "index.search.slowlog.threshold.fetch.debug": "500ms",  # log a DEBUG entry for fetches slower than 500 milliseconds
    "index.indexing.slowlog.threshold.index.info": "5s"     # log an INFO entry for indexing slower than 5 seconds
}
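The same settings call can also be sent with curl and aimed at several indices at once, for example via the _all pseudo-index or an index wildcard, which is convenient for applying a uniform threshold. A minimal sketch; the thresholds shown are only illustrative:

# apply slow log thresholds to every index in the cluster
curl -XPUT 'http://localhost:9200/_all/_settings' -H 'Content-Type: application/json' -d '
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.indexing.slowlog.threshold.index.info": "5s"
}'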
Here is a Logstash configuration file; you only need to change the slow log path and the ES cluster information in it to use it in your own ELK environment.
input {
  file {
    start_position => "beginning"
    path => ["fill in the path to your ES slow log"]
    sincedb_path => "./slowlogdb"
  }
}

filter {
  ruby {
    code => "
      # split the slow log line on ', ' into fields
      temp = event['message'].split(', ')

      # first field: [time][loglevel][slowtype][indexname]...
      t1 = temp[0]
      common_attr = t1.split(']')
      event['time']      = common_attr[0].split('[')[1]
      event['loglevel']  = common_attr[1].split('[')[1]
      event['slowtype']  = common_attr[2].split('[')[1]
      event['indexname'] = common_attr[3].split('[')[1]

      # second field: took_millis[...]
      t2 = temp[1]
      time_attr = t2.split('[')
      event['took_millis'] = time_attr[1].split(']')[0]

      t3 = temp[2]
      t4 = temp[3]
      t5 = temp[4]

      # sixth field: total_shards[...]
      t6 = temp[5]
      shards_attr = t6.split('[')
      event['total_shards'] = shards_attr[1].split(']')[0]

      t7 = temp[6]
      t8 = temp[7]
      event['search_type']  = t5
      event['message']      = t7
      event['extra_source'] = t8
    "
  }

  mutate {
    convert => ["took_millis", "integer"]    # cast took_millis to integer
  }
  mutate {
    convert => ["total_shards", "integer"]   # cast total_shards to integer
  }
}

output {
  elasticsearch {
    index => "es-slowlog-%{+YYYY-MM}"
    hosts => [fill in your ES cluster host list]
    flush_size => 3000
  }
}
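Before hooking this up permanently it is worth letting Logstash validate and run the file by hand. A minimal sketch, assuming the configuration above is saved as es-slowlog.conf and the logstash binary is on the PATH (on newer Logstash versions the test flag is --config.test_and_exit):

# syntax-check the pipeline configuration
logstash -f es-slowlog.conf -t
# run it in the foreground and watch events being parsed
logstash -f es-slowlog.conf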
The meaning of each ES slow log field, as seen in Logstash debug output:
{
    # the slow query statement
    "message" => "source[{\"fields\":[\"_parent\",\"_source\"],\"query\":{\"bool\":{\"must\":[],\"must_not\":[],\"should\":[{\"match_all\":{}}]}},\"from\":0,\"size\":50,\"sort\":[],\"aggs\":{},\"version\":true}]",
    "@version" => "1",
    "@timestamp" => "2018-03-15T12:20:40.091Z",
    # path of the slow log file
    "path" => "/root/test.log",
    # host that produced the slow query
    "host" => "c7-node1.fblinux.com",
    # time the slow query was logged
    "time" => "2018-03-15 11:26:30,318",
    # slow query log level
    "loglevel" => "INFO ",
    # slow query type
    "slowtype" => "index.search.slowlog.query",
    # index name
    "indexname" => "test-2018-03",
    # query duration, in milliseconds
    "took_millis" => 64,
    # total number of shards
    "total_shards" => 1188,
    "search_type" => "search_type[QUERY_THEN_FETCH]",
    "extra_source" => "extra_source[],"
}