Collecting logs from multiple directories — my configuration:
```yaml
- type: log
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /opt/nginx-1.14.0/logs/stars/star.access.log  # location of the log file to read
  tags: ["nginx-access"]  # use tags to distinguish the different logs
- type: log
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /srv/www/runtime/logs/app.log
  tags: ["app"]  # use tags to distinguish the different logs
```
When collecting logs from your own application, entries are often multi-line stack traces, so you need multi-line matching. Filebeat provides the multiline options to merge the lines of one entry into a single event.
The multiline options boil down to three main parameters:
```yaml
multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
multiline.negate: true
multiline.match: after
```

The configuration above means: any line that does not start with a date is appended to the end of the previous line (the regex is rough, but it does the job).
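To make the `negate: true` + `match: after` behavior concrete, here is a minimal Python sketch (not Filebeat itself, just an illustration of the merging logic) using the same pattern, assuming events start with a `YYYY-MM-DD` timestamp:

```python
import re

# Same regex as multiline.pattern in the Filebeat config above
PATTERN = re.compile(r'^[0-9]{4}-[0-9]{2}-[0-9]{2}')

def merge_multiline(lines):
    """Mimic negate: true + match: after — lines that do NOT match
    the pattern are appended to the previous event."""
    events = []
    for line in lines:
        if PATTERN.match(line) or not events:
            events.append(line)           # a new event starts here
        else:
            events[-1] += "\n" + line     # continuation of a stack trace
    return events

# Four physical lines from a hypothetical app log
lines = [
    "2023-01-05 10:00:00 [error] something failed",
    "Traceback (most recent call last):",
    '  File "app.py", line 12, in main',
    "2023-01-05 10:00:01 [info] recovered",
]
events = merge_multiline(lines)  # collapses into 2 logical events
```

The trace lines end up glued to the event that opened them, which is exactly what you want to see as a single document in Elasticsearch.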
So, putting it together — collecting logs from different directories plus multi-line matching — the full configuration is:
```yaml
- type: log
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /opt/nginx-1.14.0/logs/stars/star.access.log
  tags: ["nginx-access"]
- type: log
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /srv/www/runtime/logs/app.log
  tags: ["app"]
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
```
The key point is distinguishing the different logs — done here with `tags` (a custom `type` under `fields` would work the same way).
Restart the Filebeat service: `systemctl restart filebeat`
Reference links:
https://www.cnblogs.com/zhjh2...
https://blog.csdn.net/m0_3788...
Logstash configuration file — apply a different filter to each kind of log:
```
# Logstash tells the logs apart via the tags field
input {
  beats {
    host => '0.0.0.0'
    port => 5401
  }
}

filter {
  if "nginx-access" in [tags] {   # match nginx-access logs
    grok {                        # the grok plugin parses the line, essentially via regex
      match => { "message" => "%{IPORHOST:remote_ip} - %{IPORHOST:host} - \[%{HTTPDATE:access_time}\] \"%{WORD:http_method} %{DATA:url} HTTP/%{NUMBER:http_version}\" - %{DATA:request_body} - %{INT:http_status} %{INT:body_bytes_sent} \"%{DATA:refer}\" \"%{DATA:user_agent}\" \"%{DATA:x_forwarded_for}\" \"%{DATA:upstream_addr}\" \"response_location:%{DATA:response_location}\"" }
    }
  } else if "app" in [tags] {
    grok {
      match => { "message" => "%{DATESTAMP:log_time} \[%{IP:remote_ip}\]\[%{INT:uid}\]\[%{DATA:session_id}\]\[%{WORD:log_level}\]\[%{DATA:category}\] %{GREEDYDATA:message_text}" }
    }
  }
}

output {
  if "nginx-access" in [tags] {
    elasticsearch {
      hosts => ["http://xxx.xxx.xxx.xx:9200"]
      index => "star_nginx_access_index_pattern-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "!@#j3C"
    }
  } else if "app" in [tags] {
    elasticsearch {
      hosts => ["http://xxx.xxx.xxx.xx:9200"]
      index => "star_app_index_pattern-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "!@#j3C"
    }
  }
}
```
Most of the work is getting the grok regular expressions to match, so that each log line is parsed into structured fields and shows up nicely in Kibana.
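Grok patterns are ultimately just named regular expressions. As an illustrative sketch (Python `re`, not Logstash — the `%{DATESTAMP}`, `%{IP}` etc. sub-patterns are approximated, and the sample line is made up), the app-log pattern above corresponds roughly to:

```python
import re

# Rough Python equivalent of the app-log grok pattern;
# each grok %{TYPE:name} becomes a (?P<name>...) named group
APP_LOG = re.compile(
    r'(?P<log_time>\d{2,4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s*'
    r'\[(?P<remote_ip>[\d.]+)\]'
    r'\[(?P<uid>\d+)\]'
    r'\[(?P<session_id>[^\]]*)\]'
    r'\[(?P<log_level>\w+)\]'
    r'\[(?P<category>[^\]]*)\] '
    r'(?P<message_text>.*)'
)

line = "2023-01-05 10:00:00 [10.0.0.1][42][abc123][error][app\\db] query timed out"
m = APP_LOG.match(line)
fields = m.groupdict()  # dict of field name -> extracted value
```

Seeing the pattern as named groups makes it easier to debug why a grok expression fails to match a given line.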
Restart Logstash: `systemctl restart logstash`
Online grok debugger for testing patterns: http://grokdebug.herokuapp.com/