OS: CentOS 7.4
Filebeat: 6.3.2
Logstash: 6.3.2
Elasticsearch: 6.3.2
Kibana: 6.3.2
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.3.2-x86_64.rpm
yum localinstall filebeat-6.3.2-x86_64.rpm
Here, nginx logs are used as the example for the demonstration.
Configuration file: /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log                        # input type is log
  paths:                                 # log paths
    - /usr/local/nginx/logs/*.access.log
  document_type: ngx-access-log          # log type, referenced later by the Logstash filter
- input_type: log
  paths:
    - /usr/local/nginx/logs/*.error.log
  document_type: ngx-error-log
output.logstash:                         # output to Logstash (other outputs, such as elasticsearch, also work)
  hosts: ["10.1.4.171:1007"]
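Before enabling the service, the configuration and the connection to Logstash can be sanity-checked with Filebeat's test subcommands (the output check will only pass once Logstash is actually listening on 10.1.4.171:1007):

filebeat test config -c /etc/filebeat/filebeat.yml    # validate the configuration syntax
filebeat test output -c /etc/filebeat/filebeat.yml    # check connectivity to the configured Logstash output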
systemctl enable filebeat
systemctl start filebeat
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.3.2.rpm
yum localinstall logstash-6.3.2.rpm
Logstash needs a custom pipeline configuration; the directory for custom configuration files is /etc/logstash/conf.d.
Create a new configuration file named filebeat.conf there.
/etc/logstash/conf.d/filebeat.conf
input {                                  # input comes from Beats
  beats {
    port => "1007"                       # listen on port 1007 (a custom port)
  }
}
filter {
  if [type] == "ngx-access-log" {        # only process events whose type is ngx-access-log; the type is set in the Filebeat config
    grok {
      patterns_dir => "/usr/local/logstash/patterns"
      match => {                         # split the incoming message field into multiple readable fields
        "message" => "%{IPV4:remote_addr}\|%{IPV4:FormaxRealIP}\|%{POSINT:server_port}\|%{GREEDYDATA:scheme}\|%{IPORHOST:http_host}\|%{HTTPDATE:time_local}\|%{HTTPMETHOD:request_method}\|%{URIPATHPARAM:request_uri}\|%{GREEDYDATA:server_protocol}\|%{NUMBER:status}\|%{NUMBER:body_bytes_sent}\|%{GREEDYDATA:http_referer}\|%{GREEDYDATA:user_agent}\|%{GREEDYDATA:http_x_forwarded_for}\|%{HOSTPORT:upstream_addr}\|%{BASE16FLOAT:upstream_response_time}\|%{BASE16FLOAT:request_time}\|%{GREEDYDATA:cookie_formax_preview}"
      }
      remove_field => ["message"]        # message has already been split into fields, so it can be removed
    }
    date {
      match => [ "time_local", "dd/MMM/yyyy:HH:mm:ss Z" ]   # replace @timestamp with the time from the nginx log
      remove_field => ["time_local"]     # remove the nginx log time field
    }
    mutate {
      rename => { "http_host" => "host" }   # replace the host field with http_host from the nginx log
    }
  }
}
output {
  elasticsearch {                        # output to Elasticsearch
    hosts => ["127.0.0.1:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"   # index name pattern
  }
}
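The grok expression above only matches a pipe-delimited access log, not nginx's default combined format. A log_format roughly like the one below, defined in the http block of nginx.conf, would produce matching lines; this is an assumption reconstructed from the grok field names (the real-IP and cookie variables in particular are guesses, and the "elk" format name and log file name are hypothetical), so adjust it to the log_format actually used by your nginx:

# pipe-delimited access log matching the grok field order (assumed, not from the original article)
log_format elk '$remote_addr|$http_x_real_ip|$server_port|$scheme|$host|$time_local|'
               '$request_method|$request_uri|$server_protocol|$status|$body_bytes_sent|'
               '$http_referer|$http_user_agent|$http_x_forwarded_for|$upstream_addr|'
               '$upstream_response_time|$request_time|$cookie_formax_preview';
access_log /usr/local/nginx/logs/www.access.log elk;   # must match the *.access.log glob in filebeat.yml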
systemctl enable logstash
systemctl start logstash
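After any change to the pipeline file, the syntax can be checked before restarting the service (paths assume the RPM layout):

/usr/share/logstash/bin/logstash --path.settings /etc/logstash -t -f /etc/logstash/conf.d/filebeat.conf   # -t tests the config and exits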
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.3.2.rpm
yum localinstall elasticsearch-6.3.2.rpm
/etc/elasticsearch/elasticsearch.yml
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
# elasticsearch-head needs the following settings
http.cors.enabled: true
http.cors.allow-origin: "*"
systemctl enable elasticsearch
systemctl start elasticsearch
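Once Elasticsearch is up, two quick checks confirm that it is reachable and, after Filebeat and Logstash start shipping events, that the indices are being created:

curl http://127.0.0.1:9200/                 # cluster name and version info
curl http://127.0.0.1:9200/_cat/indices?v   # list indices; logstash-ngx-access-log-* should appear once events flow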
elasticsearch-head connects to Elasticsearch and provides a front-end management page.
git clone git://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
npm run start
open http://localhost:9100/
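To confirm that the http.cors.* settings from elasticsearch.yml are active (otherwise the head page loads but cannot connect to Elasticsearch), a request carrying an Origin header should come back with an Access-Control-Allow-Origin response header:

curl -s -I -H "Origin: http://localhost:9100" http://127.0.0.1:9200/ | grep -i access-control   # expect: access-control-allow-origin: *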
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.3.2-x86_64.rpm
yum localinstall kibana-6.3.2-x86_64.rpm
The default configuration is fine.
nohup /usr/share/kibana/bin/kibana &> /usr/share/kibana/logs/kibana.stdout &
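With the default configuration Kibana listens only on localhost port 5601, which is why the nginx reverse proxy below is used to expose it. A quick check that it is up:

ss -lntp | grep 5601            # Kibana should be listening on 5601
curl -I http://127.0.0.1:5601   # expect an HTTP response once startup finishes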
Install nginx
yum install nginx
Configuration
/etc/nginx/conf.d/kibana.conf
server {
    listen       80;
    server_name  test.kibana.com;
    root         html;

    access_log  /var/log/nginx/test.kibana.com.access.log  main;
    error_log   /var/log/nginx/test.kibana.com.error.log;

    proxy_next_upstream       http_502 http_504 error timeout invalid_header;
    proxy_connect_timeout     10;
    proxy_read_timeout        30;
    proxy_send_timeout        180;
    proxy_ignore_client_abort on;
    proxy_set_header          X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_buffering           off;
    proxy_set_header          Host $host;

    location /monitor {
        default_type text/plain;
        return 200 "OK";
    }

    location /echoip {
        default_type text/plain;
        return 200 $http_x_forwarded_for,$remote_addr;
    }

    location / {
        expires off;
        if ($server_port = "80") {
            proxy_pass http://127.0.0.1:5601;
        }
        proxy_pass https://127.0.0.1:5601;
    }
}
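Before starting, the configuration can be validated:

nginx -t    # checks /etc/nginx/nginx.conf and everything it includes, including kibana.conf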
Start
systemctl enable nginx
systemctl start nginx
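If test.kibana.com does not resolve yet, the proxy can still be exercised by forcing the Host header (the /monitor location in the config above simply returns OK):

curl -H "Host: test.kibana.com" http://127.0.0.1/monitor    # expect: OK
curl -H "Host: test.kibana.com" http://127.0.0.1/           # should return the Kibana front page via the proxy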
This article is only a brief walkthrough of installing and configuring an ELK + Filebeat log analysis system, together with a simple nginx log processing pipeline. For a more thorough study of the ELK stack, see the "ELKstack 中文指南" (ELK Stack Chinese Guide); although the book is written for ELK 5, it is still useful for ELK 6.