Notes on Commonly Used ELK Configurations

1. Configuring filebeat to collect logs from multiple directories

My own configuration for collecting logs from several directories:

- type: log

  enabled: true
  
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /opt/nginx-1.14.0/logs/stars/star.access.log  # path of the log file to read
  tags: ["nginx-access"]    # use a tag to tell the different logs apart

- type: log

  enabled: true
  
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /srv/www/runtime/logs/app.log
  tags: ["app"]    #使用tag來區分不一樣的日誌

2. Merging multi-line log entries into a single event with filebeat

When collecting application logs such as my project's, the entries are often multi-line stack traces, so multi-line matching has to be configured. filebeat provides the multiline options to merge such multi-line entries into a single event.

The multiline options come down to three main parameters:

multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
multiline.negate: true
multiline.match: after

The configuration above means: any line that does not start with a timestamp is appended to the end of the previous line (the regex is rough, but it does the job).
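
For example (a made-up app log, purely for illustration), the first four lines below would be merged into one event, because only the first starts with a yyyy-MM-dd timestamp; the last line matches the pattern again and therefore starts a new event:

2020-06-23 19:00:22 [error] Database exception: connection refused
Stack trace:
#0 /srv/www/controllers/SiteController.php(42): query()
#1 {main}
2020-06-23 19:00:23 [info] request served normally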

So, to collect logs from different directories and also handle multi-line entries, the final configuration is:

- type: log

  enabled: true
  
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /opt/nginx-1.14.0/logs/stars/star.access.log
  tags: ["nginx-access"]

- type: log

  enabled: true
  
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /srv/www/runtime/logs/app.log
  tags: ["app"] 
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after

The key point is to mark each input (here with tags; a custom field works just as well, see the sketch below) so that the different log streams can be told apart downstream.
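
If a field feels cleaner than a tag, filebeat can also attach custom key/value pairs to an input via fields, and Logstash can branch on them. A rough sketch, assuming a field name log_type that I made up for illustration (it is not part of the configuration above):

# filebeat.yml (fragment): attach a custom field instead of a tag
- type: log
  enabled: true
  paths:
    - /srv/www/runtime/logs/app.log
  fields:
    log_type: app

# Logstash filter (fragment): branch on that field
filter {
  if [fields][log_type] == "app" {
    grok {
      match => { "message" => "%{GREEDYDATA:message_text}" }    # use the real app grok here
    }
  }
}

By default filebeat nests custom fields under fields, hence the [fields][log_type] reference; with fields_under_root: true in the input it would simply be [log_type].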

Restart the filebeat service: systemctl restart filebeat

Reference links:

https://www.cnblogs.com/zhjh2...

https://blog.csdn.net/m0_3788...

3. Configuring Logstash: grok match, output, and so on

The Logstash configuration file applies a different filter to each kind of log:

# Logstash branches on the tags that filebeat attached to each event
input {
        beats {
                host => '0.0.0.0'
                port => 5401 
        }
}

filter {
  if "nginx-access" in [tags]{   #對nginx-access進行匹配
    grok {  #grok插件對日誌進行匹配,主要是正則表達式
          match => { "message" => "%{IPORHOST:remote_ip} - %{IPORHOST:host} - \[%{HTTPDATE:access_time}\] \"%{WORD:http_method} %{DATA:url} HTTP/%{NUMBER:http_version}\" - %{DATA:request_body} - %{INT:http_status} %{INT:body_bytes_sent} \"%{DATA:refer}\" \"%{DATA:user_agnet}\" \"%{DATA:x_forwarded_for}\" \"%{DATA:upstream_addr}\" \"response_location:%{DATA:response_location}\"" }
       }
  }
  else if "app" in [tags]{ 
    grok {
      match => {
        "message" => "%{DATESTAMP:log_time} \[%{IP:remote_ip}\]\[%{INT:uid}\]\[%{DATA:session_id}\]\[%{WORD:log_level}\]\[%{DATA:category}\] %{GREEDYDATA:message_text}"
      }
    }
  }
}

output{
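  # route each log stream to its own daily Elasticsearch index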
  if "nginx-access" in [tags]{
    elasticsearch {
      hosts => ["http://xxx.xxx.xxx.xx:9200"]
      index => "star_nginx_access_index_pattern-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "!@#j3C"
    }
  }
  else if "app" in [tags]{
    elasticsearch {
      hosts => ["http://xxx.xxx.xxx.xx:9200"]
      index => "star_app_index_pattern-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "!@#j3C"
    }
  }
}

The bulk of the work is in the grok regular expressions: they parse each log line into structured fields so that the entries can be searched and displayed in Kibana.
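
Before restarting, it is worth checking that the pipeline file at least parses. Logstash has a flag for exactly this; the paths below are assumptions, so adjust them to wherever Logstash and the pipeline file actually live:

/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/beats.conf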

Restart Logstash: systemctl restart logstash

4. Checking whether a grok pattern is correct

Validate grok patterns with the online debugger: http://grokdebug.herokuapp.com/


5. Setting up the index patterns in Kibana

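In Kibana, open the index pattern management page (Management → Index Patterns in older versions, Stack Management → Index Patterns in newer ones) and create one pattern per index family written by the Logstash output above, typically with @timestamp as the time field:

star_nginx_access_index_pattern-*
star_app_index_pattern-*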

6. Notes

  1. Back up the existing configuration before changing it.
  2. Remember to restart the service after changing the configuration.
  3. When filebeat collects multi-line logs, they can be merged into a single event (see section 2).