Logstash: MongoDB Logs

1. Logstash 6.5.3 + MongoDB

Configuring collection of MongoDB logs:

First, deploy Filebeat on the MongoDB server to collect the logs and ship them to Logstash for processing; Logstash then forwards them to ES.

filebeat-conf:

```yaml
- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /data/log/mongod.log
  tags: ['db']

output.logstash:
  hosts: ["10.1.1.12:5044"]
```

Only the main configuration is shown here.
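For orientation, a fuller skeleton around the fragment above might look like the following. This is a sketch assuming Filebeat 6.x with the older prospector-style syntax used above (newer 6.3+ releases prefer `filebeat.inputs:` with `type: log`); adjust to your installed version.

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /data/log/mongod.log
    tags: ['db']

output.logstash:
  hosts: ["10.1.1.12:5044"]
```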


logstash-conf:

```
input {
  beats {
    port => "5044"
  }
}

filter {
  if 'db' in [tags] {
    grok {
      match => ["message","%{TIMESTAMP_ISO8601:timestamp}\s+%{MONGO3_SEVERITY:severity}\s+%{MONGO3_COMPONENT:component}%{SPACE}(?:\[%{DATA:context}\])?\s+%{GREEDYDATA:body}"]
      remove_field => [ "message" ]
      remove_field => [ "beats_input_codec_plain_applied" ]
    }
    if [body] =~ "ms$" {
      grok {
        match => ["body","command\s+%{DATA:collection}\s+command\:\s+%{DATA:action}\s+.*\s+query\:\s+%{DATA:query}\s+planSummary+.*\s+%{NUMBER:spend_time:int}ms$"]
      }
    }
    if [body] =~ "aggregate" {
      grok {
        match => ["body","command\s+%{DATA:collection}\s+command\:\s+%{DATA:action}\s+.*\s+pipeline\:\s+%{DATA:pipeline}\s+keyUpdates+.*\s+%{NUMBER:spend_time:int}ms"]
      }
    }
    if [body] =~ "find" {
      grok {
        match => ["body","command\s+%{DATA:collection}\s+command\:\s+%{DATA:action}\s+.*\s+filter\:\s+%{DATA:filter}\s+planSummary+.*\s+%{NUMBER:spend_time:int}ms"]
      }
    }
    date {
      match => [ "timestamp", "ISO8601" ]
      remove_field => [ "timestamp" ]
    }
  }
}

output {
  if 'db' in [tags] {
    elasticsearch {
      hosts => "192.4.7.16:9200"
      index => "logstash-mongodb-slow-%{+YYYY-MM-dd}"
    }
  }
}
```
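To see what the top-level grok pattern extracts, here is a rough Python sketch of the same split, run against a hypothetical MongoDB 3.x log line (the sample line and the simplified regex are illustrative, not the exact grok definitions):

```python
import re

# Hypothetical MongoDB 3.x slow-command log line (illustrative only).
line = ('2019-01-08T10:32:15.123+0800 I COMMAND  [conn123] '
        'command test.users command: find { filter: { name: "x" } } '
        'planSummary: COLLSCAN keysExamined:0 docsExamined:5000 120ms')

# Simplified equivalent of:
# %{TIMESTAMP_ISO8601:timestamp} %{MONGO3_SEVERITY:severity}
# %{MONGO3_COMPONENT:component} [%{DATA:context}] %{GREEDYDATA:body}
pattern = re.compile(
    r'(?P<timestamp>\S+)\s+'
    r'(?P<severity>[DIWEF])\s+'
    r'(?P<component>\w+)\s+'
    r'(?:\[(?P<context>[^\]]*)\])?\s+'
    r'(?P<body>.*)'
)

fields = pattern.match(line).groupdict()
print(fields['severity'])    # I
print(fields['component'])   # COMMAND
print(fields['context'])     # conn123

# The slow-query sub-patterns then pull the duration off the end of "body",
# like %{NUMBER:spend_time:int}ms does in the pipeline:
spend = re.search(r'(\d+)ms$', fields['body'])
print(int(spend.group(1)))   # 120
```

Splitting the line into a `body` field first, then re-grokking `body` per command type, keeps each sub-pattern short and lets the `ms$` / `aggregate` / `find` conditionals skip lines they cannot match.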


The grok patterns should be tested first; Kibana 6.3 and later ship a Grok Debugger:
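As a sketch, the debugger takes a sample line and a pattern and shows the extracted fields. The pattern below is the top-level one from the pipeline; the sample line is illustrative:

Sample Data:

```
2019-01-08T10:32:15.123+0800 I COMMAND  [conn123] command test.users command: find { filter: { name: "x" } } planSummary: COLLSCAN 120ms
```

Grok Pattern:

```
%{TIMESTAMP_ISO8601:timestamp}\s+%{MONGO3_SEVERITY:severity}\s+%{MONGO3_COMPONENT:component}%{SPACE}(?:\[%{DATA:context}\])?\s+%{GREEDYDATA:body}
```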



Test results:
