ELK Distributed Logging in Practice

I. Overview of the ELK Distributed Logging Setup

This hands-on guide is based on ELK 5.5.2. The distributed logging components are installed, deployed, and configured as laid out below.

When ELK needs to monitor application logs, the Filebeat log shipper is installed on each server where the application is deployed. Filebeat is configured to collect the local log files and publish the log messages to a Kafka cluster.

We can then deploy an intermediate log server running the Logstash collector. Logstash consumes the log messages directly from Kafka and pushes the log data into Elasticsearch, and the logs are finally visualized through Kibana.
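The original architecture diagram is not reproduced here; the data flow it depicts is:

application log files
  → Filebeat (on each application server)
  → Kafka cluster (192.168.20.21-23:9092)
  → Logstash (intermediate log server)
  → Elasticsearch cluster (192.168.20.21-23:9200)
  → Kibana (http://localhost:5601)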

II. Elasticsearch Configuration

1. For Elasticsearch and Kibana installation and configuration, see my other post:
https://www.cnblogs.com/woodylau/p/9474848.html
2. Before Logstash can create log indices, automatic index creation must be enabled (once Elasticsearch and Kibana from step 1 are running, open the Kibana Dev Tools page and execute the following):
PUT /_cluster/settings
{
  "persistent": {
    "action": { "auto_create_index": "true" }
  }
}

III. Filebeat Installation and Configuration

1. Download Filebeat 5.5.2:
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.5.2-linux-x86_64.tar.gz
2. Extract filebeat-5.5.2-linux-x86_64.tar.gz into the /tools/elk/ directory:
tar -zxvf filebeat-5.5.2-linux-x86_64.tar.gz -C /tools/elk/
cd /tools/elk/
mv filebeat-5.5.2-linux-x86_64 filebeat-5.5.2
3. Edit the filebeat.yml file:
cd /tools/elk/filebeat-5.5.2
vi filebeat.yml
4. Replace the existing contents of filebeat.yml with the following:
filebeat.prospectors:
- input_type: log
  paths:
    # application info log
    - /data/applog/app.info.log
  encoding: utf-8
  document_type: app-info
  # extra field, used later by logstash to build per-type indices
  fields:
    type: app-info
  # must be true so logstash can read the extra field at the top level
  fields_under_root: true
  scan_frequency: 10s
  harvester_buffer_size: 16384
  max_bytes: 10485760
  tail_files: true

- input_type: log
  paths:
    # application error log
    - /data/applog/app.error.log
  encoding: utf-8
  document_type: app-error
  fields:
    type: app-error
  fields_under_root: true
  scan_frequency: 10s
  harvester_buffer_size: 16384
  max_bytes: 10485760
  tail_files: true

# filebeat ships the collected log data to the kafka cluster
output.kafka:
  enabled: true
  hosts: ["192.168.20.21:9092","192.168.20.22:9092","192.168.20.23:9092"]
  topic: elk-%{[type]}
  worker: 2
  max_retries: 3
  bulk_max_size: 2048
  timeout: 30s
  broker_timeout: 10s
  channel_buffer_size: 256
  keep_alive: 60
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1
  client_id: beats
  partition.hash:
    reachable_only: true
logging.to_files: true
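Because fields_under_root is true, the extra type field sits at the top level of each event, which is what lets the elk-%{[type]} topic setting above resolve to elk-app-info and elk-app-error. A message published to Kafka looks roughly like the sketch below (exact metadata fields vary by Filebeat version; the hostname and log line are made up):

{
  "@timestamp": "2018-08-15T10:00:00.000Z",
  "beat": { "hostname": "app-server-1", "version": "5.5.2" },
  "source": "/data/applog/app.info.log",
  "message": "traceId=abc123 order created",
  "type": "app-info"
}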
5. Start the Filebeat log shipper:
cd /tools/elk/filebeat-5.5.2
# start filebeat in the background, using the config edited above
nohup ./filebeat -c ./filebeat.yml &
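To confirm that events are actually reaching Kafka, one sanity check is to consume a few messages from one of the generated topics (this assumes a Kafka 0.10+ installation under /tools/kafka; adjust the path to your own):

/tools/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server 192.168.20.21:9092 \
  --topic elk-app-info \
  --from-beginning --max-messages 5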

IV. Logstash Installation and Configuration

1. Download Logstash 5.5.2:
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.5.2.tar.gz
2. Extract logstash-5.5.2.tar.gz into the /tools/elk/ directory:
tar -zxvf logstash-5.5.2.tar.gz -C /tools/elk/
cd /tools/elk/logstash-5.5.2
3. Install the x-pack monitoring plugin (optional; however, if Elasticsearch has this plugin installed, Logstash must install it as well):
bin/logstash-plugin install x-pack
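To verify that the plugin landed, list the installed plugins and filter for it:

bin/logstash-plugin list | grep x-pack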
4. Create the logstash_kafka.conf file:
cd /tools/elk/logstash-5.5.2/config
vi logstash_kafka.conf
5. Configure logstash_kafka.conf:
input {
  kafka {
    codec => "json"
    topics_pattern => "elk-.*"
    bootstrap_servers => "192.168.20.21:9092,192.168.20.22:9092,192.168.20.23:9092"
    auto_offset_reset => "latest"
    group_id => "logstash-g1"
  }
}

filter {
  # drop non-business messages that carry no traceId
  if ([message] =~ "traceId=null") {
    drop {}
  }
}

output {
  elasticsearch {
    # logstash writes the log data to elasticsearch
    hosts => ["192.168.20.21:9200","192.168.20.22:9200","192.168.20.23:9200"]
    # "type" is the extra field set in the filebeat config
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    document_type => "%{type}"
    flush_size => 20000
    idle_flush_time => 10
    sniffing => true
    template_overwrite => false
    # user/password are required when elasticsearch has the x-pack plugin installed
    user => "elastic"
    password => "elastic"
  }
}
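Before starting Logstash for real, it is worth validating the pipeline syntax; Logstash 5.x supports a test-and-exit flag:

cd /tools/elk/logstash-5.5.2
bin/logstash -f config/logstash_kafka.conf --config.test_and_exit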
6. Start the Logstash collector:
cd /tools/elk/logstash-5.5.2
# start logstash in the background
nohup /tools/elk/logstash-5.5.2/bin/logstash -f /tools/elk/logstash-5.5.2/config/logstash_kafka.conf &
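Once log lines start flowing, Elasticsearch should create one index per type per day, matching the index pattern configured above. A quick check from the command line (credentials as configured in logstash_kafka.conf):

curl -u elastic:elastic 'http://192.168.20.21:9200/_cat/indices/logstash-*?v'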

V. Checking the Final ELK Setup

1. Open Kibana at http://localhost:5601 and click the Discover menu. Configure the index pattern by entering logstash-*, then click the blue Create button; you can then browse the application logs collected through Logstash.
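As an additional check outside the UI, the newest documents can be queried directly; this is merely an illustrative query, not part of the original setup:

curl -u elastic:elastic 'http://192.168.20.21:9200/logstash-*/_search?size=3&sort=@timestamp:desc&pretty'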

 
