ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana. A fourth component, Filebeat, has since been added: a lightweight log collection agent that uses few resources and is well suited to gathering logs on individual servers and shipping them to Logstash. The official documentation also recommends it.
Elasticsearch is an open-source distributed search engine that provides three core functions: collecting, analyzing, and storing data. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing.
Logstash is mainly a tool for collecting, analyzing, and filtering logs, and it supports a large number of data input methods. It typically works in a client/server architecture: the client is installed on each host whose logs need to be collected, while the server filters and transforms the logs received from all nodes and forwards them to Elasticsearch.
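To make that pipeline model concrete before the full configurations later in this walkthrough, Logstash can run a one-off pipeline from the command line. This is just an illustrative smoke test; the install path assumes the rpm layout used below:

```
# Run an inline pipeline: read events from stdin and print them back as
# structured output. The -e flag passes the pipeline definition directly
# on the command line instead of reading a config file.
/usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout { codec => rubydebug } }'
```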
Kibana is also an open-source, free tool. It provides a log-analysis-friendly web interface for Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.
Filebeat belongs to the Beats family. Beats currently includes four tools:

- Packetbeat (collects network traffic data)
- Topbeat (collects system-, process-, and filesystem-level CPU and memory usage data)
- Filebeat (collects log file data)
- Winlogbeat (collects Windows event log data)
Official documentation:

- Filebeat: https://www.elastic.co/cn/products/beats/filebeat
- Logstash: https://www.elastic.co/cn/products/logstash
- Kibana: https://www.elastic.co/cn/products/kibana
- Elasticsearch: https://www.elastic.co/cn/products/elasticsearch
- Elasticsearch Chinese community: https://elasticsearch.cn/
In a typical log-analysis scenario, you can get the information you want by running grep and awk directly against the log files. At larger scale, however, this approach becomes inefficient and raises new problems: how to archive a huge volume of logs, what to do when text search gets too slow, and how to query across multiple dimensions. This calls for centralized log management, with the logs from every server collected and aggregated. The common solution is to build a centralized logging system that collects, manages, and provides access to the logs of all nodes in one place.
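As a concrete illustration of that ad-hoc approach (the paths, field position, and request ID below are made-up examples):

```
# Count requests per HTTP status code in an access log; assumes the
# status code is the 9th whitespace-separated field, as in the common
# combined log format.
awk '{ counts[$9]++ } END { for (c in counts) print c, counts[c] }' /var/log/nginx/access.log

# Hunt for a specific request across rotated log files
grep -h 'req-42f7' /var/log/myapp/app.log*
```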
Large systems are usually deployed as distributed architectures, with different service modules running on different servers. When a problem occurs, you usually have to use the key information the problem exposes to locate the specific server and service module, so building a centralized logging system can greatly speed up troubleshooting.
A complete centralized logging system needs to offer the following main capabilities:

- Collection: gather log data from many kinds of sources
- Transport: ship log data reliably to the central system
- Storage: store the log data durably
- Analysis: support analysis through a UI
- Alerting: provide error reporting and monitoring mechanisms
ELK provides a complete solution built entirely from open-source components that work together seamlessly, efficiently covering a wide range of use cases. It is currently one of the mainstream choices for logging systems.
| Hostname | OS | IP address | Services |
|---|---|---|---|
| es | CentOS 7.4 | 192.168.96.85 | elasticsearch 6.4.0, kibana 6.4.0, rsyslog |
| nginx | CentOS 7.4 | 192.168.96.60 | elasticsearch 6.4.0, logstash 6.4.0 |
| httpd | CentOS 7.4 | 192.168.96.86 | elasticsearch 6.4.0, filebeat 6.4 |
| client | Windows 10 | 192.168.96.2 | web browser |
The firewall and SELinux are disabled on all of the servers above:

```
setenforce 0
systemctl stop firewalld
```
Three different log-collection methods are used here; the official documentation recommends Filebeat because it is lightweight and efficient.
```
vim /etc/hosts

192.168.96.85 es
192.168.96.86 httpd
192.168.96.60 nginx
```
```
# Import the GPG key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

# Create the ES repository
vim /etc/yum.repos.d/elasticsearch.repo

[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

# Install the ES package
yum -y install elasticsearch
```
```
vim /etc/elasticsearch/elasticsearch.yml

cluster.name: es-server
node.name: master
node.master: true
node.data: true
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.96.85", "192.168.96.86", "192.168.96.60"]
```
```
scp /etc/elasticsearch/elasticsearch.yml httpd:/etc/elasticsearch/
scp /etc/elasticsearch/elasticsearch.yml nginx:/etc/elasticsearch/
```
```
# On the httpd server:
vim /etc/elasticsearch/elasticsearch.yml

node.name: httpd
node.master: false

# On the nginx server:
vim /etc/elasticsearch/elasticsearch.yml

node.name: nginx
node.master: false
```
```
# On the es server (start the master first, then the other ES nodes):
systemctl enable elasticsearch.service
systemctl start elasticsearch.service

# On the nginx server:
systemctl enable elasticsearch.service
systemctl start elasticsearch.service

# On the httpd server:
systemctl enable elasticsearch.service
systemctl start elasticsearch.service
```
Check the cluster status in a browser:

```
http://192.168.96.85:9200/_cluster/health?pretty
http://192.168.96.85:9200/_cluster/state?pretty
```
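The same checks can be scripted with curl; in a healthy three-node cluster, `status` should read `green` and `number_of_nodes` should be 3 (a quick sketch, runnable from any of the nodes):

```
# Cluster health: look for "status" : "green" and "number_of_nodes" : 3
curl -s 'http://192.168.96.85:9200/_cluster/health?pretty'

# List the nodes; the elected master is flagged with * in the master column
curl -s 'http://192.168.96.85:9200/_cat/nodes?v'
```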
At this point, the ES cluster is fully deployed.
```
yum -y install kibana
```
```
vim /etc/kibana/kibana.yml

server.port: 5601
server.host: "192.168.96.85"
elasticsearch.url: "http://192.168.96.85:9200"
logging.dest: /var/log/kibana.log
```
```
touch /var/log/kibana.log
chmod 777 /var/log/kibana.log
```
```
systemctl enable kibana
systemctl start kibana
```
```
[root@es ~]# netstat -tunlp | grep 5601
tcp        0      0 192.168.96.85:5601      0.0.0.0:*               LISTEN      2597/node
```
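As an optional sanity check (not part of the original steps), you can confirm that Kibana answers HTTP requests before configuring anything else:

```
# An HTTP response (e.g. 200 or 302) means the Kibana web UI is reachable
curl -I http://192.168.96.85:5601
```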
```
yum install logstash -y
```
```
vim /etc/rsyslog.conf

# Line 91: forward all logs to the Logstash listener (@@ means TCP)
*.* @@192.168.96.85:10514
```
```
systemctl restart rsyslog
```
```
vim /etc/logstash/conf.d/syslog.conf

input {
  syslog {
    type => "system-syslog"
    port => 10514
  }
}
output {
  elasticsearch {
    hosts => ["192.168.96.85:9200"]        # ES server IP address
    index => "system-syslog-%{+YYYY.MM}"   # index name
  }
}
```
```
/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf
```
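To verify the whole syslog pipeline end to end, you can emit a test message through rsyslog and check that the monthly index appears; the message text here is arbitrary:

```
# Write a test entry to the local syslog; rsyslog forwards it to Logstash
logger "elk-pipeline-test-message"

# A system-syslog-YYYY.MM index should now show up in the listing
curl '192.168.96.85:9200/_cat/indices?v'
```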
```
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.0-x86_64.rpm
rpm -ivh filebeat-6.4.0-x86_64.rpm
```
```
vim /etc/filebeat/filebeat.yml

# Comment out the following line:
#enabled: false

paths:
  - /var/log/messages            # path of the log file to collect

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.96.85:9200"]  # point at the ES server
```
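Before starting the service, Filebeat can check its own configuration and its connection to Elasticsearch; these subcommands ship with Filebeat 6.x:

```
# Check the configuration file for errors
filebeat test config -c /etc/filebeat/filebeat.yml

# Verify that the configured Elasticsearch output is reachable
filebeat test output -c /etc/filebeat/filebeat.yml
```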
```
systemctl enable filebeat
systemctl start filebeat
```
```
# Confirm the filebeat process is running
ps aux | grep filebeat

# Confirm that a filebeat index has been created in ES
curl '192.168.96.85:9200/_cat/indices?v'
```
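To see what Filebeat is actually indexing, pull a sample document; the index pattern `filebeat-6.4.0-*` is the 6.x default naming scheme, so adjust it if your version differs:

```
# Fetch one document from the filebeat index and inspect its fields
curl '192.168.96.85:9200/filebeat-6.4.0-*/_search?size=1&pretty'
```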
```
vim /etc/logstash/conf.d/nginx.conf

input {
  file {
    path => "/var/log/logstash/elk_access.log"
    start_position => "beginning"
    type => "nginx"
  }
}
filter {
  grok {
    match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}" }
  }
  geoip {
    source => "clientip"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["192.168.96.85:9200"]        # ES server IP
    index => "nginx-test-%{+YYYY.MM.dd}"
  }
}
```
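For reference, a line written by the `main2` log format defined further below would look roughly like this (all values invented):

```
www.test.com 192.168.96.2 - - [18/Sep/2018:10:12:33 +0800] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0" "192.168.96.85:5601" 0.005
```

The grok pattern above splits such a line into fields like `http_host`, `clientip`, `response`, and `request_time`, and the geoip filter then enriches the event with location data derived from `clientip`.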
```
/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
```
```
vim /etc/nginx/conf.d/elk.conf

server {
    listen 80;
    server_name www.test.com;

    location / {
        proxy_pass http://192.168.96.85:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    access_log /var/log/logstash/elk_access.log main2;
}
```
```
log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent "$http_referer" '
                 '"$http_user_agent" "$upstream_addr" $request_time';
```
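Note that nginx only allows `log_format` in the `http` context, so this directive belongs in `/etc/nginx/nginx.conf` rather than in the `server` block above; a sketch of the placement:

```
# /etc/nginx/nginx.conf (excerpt)
http {
    log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$upstream_addr" $request_time';

    include /etc/nginx/conf.d/*.conf;   # elk.conf can then reference main2
}
```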
```
systemctl enable logstash
systemctl start logstash
```
Restart the nginx service:

```
systemctl restart nginx
```
Start Logstash collecting the nginx logs:

```
/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf
```
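After hitting the proxy in a browser once or twice (so that elk_access.log receives some entries), the daily nginx index should appear in Elasticsearch:

```
# The nginx-test-YYYY.MM.dd index should show up in the listing
curl '192.168.96.85:9200/_cat/indices?v' | grep nginx-test
```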