Notes: Building an ELK Log Analysis Platform — Setting Up the Kibana and Logstash Servers
Date: 2018-03-03
This post is a follow-up to the previous one, "Building an ELK Log Analysis Platform (Part 1) — Introduction to ELK and Setting Up an Elasticsearch Cluster".
Since we already configured the yum repository in the previous post, there is no need to configure it again; we can install directly with yum. Run the following on the master node:
[root@master-node ~]# yum -y install kibana
If the yum installation is too slow, you can download the rpm package and install it directly:
[root@master-node ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.0.0-x86_64.rpm
[root@master-node ~]# rpm -ivh kibana-6.0.0-x86_64.rpm
After installation, configure Kibana:
[root@master-node ~]# vim /etc/kibana/kibana.yml  # add the following
server.port: 5601  # the port Kibana listens on
server.host: 192.168.77.128  # the listening IP
elasticsearch.url: "http://192.168.77.128:9200"  # the ES server's address; for a cluster, use the master node's IP
logging.dest: /var/log/kibana.log  # Kibana's log file path; otherwise logs go to messages by default
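Before starting Kibana, it is worth confirming that the address configured in elasticsearch.url actually responds; a minimal check, assuming the ES setup from the previous post:
[root@master-node ~]# curl http://192.168.77.128:9200
# A JSON banner containing the cluster name and version number means ES is reachable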
Create the log file:
[root@master-node ~]# touch /var/log/kibana.log; chmod 777 /var/log/kibana.log
Start the Kibana service, then check the process and listening port:
[root@master-node ~]# systemctl start kibana
[root@master-node ~]# ps aux |grep kibana
kibana     3083 36.8  2.9 1118668 112352 ?      Ssl  17:14   0:03 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
root       3095  0.0  0.0 112660    964 pts/0   S+   17:14   0:00 grep --color=auto kibana
[root@master-node ~]# netstat -lntp |grep 5601
tcp        0      0 192.168.77.128:5601     0.0.0.0:*               LISTEN      3083/node
[root@master-node ~]#
Note: since Kibana is developed with Node.js, the process name is node.
Then access it in a browser, e.g. http://192.168.77.128:5601/. Since we have not installed X-Pack, there is no username or password yet, so the page can be accessed directly:
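Kibana's status can also be checked from the command line via its status API; a quick sketch against the address configured above:
[root@master-node ~]# curl -s http://192.168.77.128:5601/api/status
# A JSON document describing the state of Kibana's plugins is returned when the service is healthy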
That completes the Kibana installation, which is quite simple. Next we install Logstash; without it, Kibana has nothing to work with.
Install Logstash on 192.168.77.130, but note that Logstash does not currently support JDK 9.
Install directly with yum:
[root@data-node1 ~]# yum install -y logstash
If the yum repository is too slow, download the rpm package and install it instead:
[root@data-node1 ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.0.0.rpm
[root@data-node1 ~]# rpm -ivh logstash-6.0.0.rpm
After installation, do not start the service yet; first configure Logstash to collect syslog messages:
[root@data-node1 ~]# vim /etc/logstash/conf.d/syslog.conf  # add the following
input {  # define the log source
  syslog {
    type => "system-syslog"  # define the type
    port => 10514  # define the listening port
  }
}
output {  # define the log output
  stdout {
    codec => rubydebug  # print the logs to the current terminal
  }
}
Check the configuration file for errors:
[root@data-node1 ~]# cd /usr/share/logstash/bin
[root@data-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK  # "Configuration OK" means the config file has no problems
[root@data-node1 /usr/share/logstash/bin]#
Command notes: --path.settings specifies the directory containing Logstash's own settings (logstash.yml), -f specifies the pipeline configuration file to check, and --config.test_and_exit validates the configuration and then exits instead of starting the pipeline.
Next, configure rsyslog to forward logs to the Logstash server's IP and the listening port defined above (@@ forwards over TCP; a single @ would use UDP):
[root@data-node1 ~]# vim /etc/rsyslog.conf
#### RULES ####
*.* @@192.168.77.130:10514
Restart rsyslog so the configuration takes effect:
[root@data-node1 ~]# systemctl restart rsyslog
Start Logstash with the configuration file specified:
[root@data-node1 ~]# cd /usr/share/logstash/bin
[root@data-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
# The terminal stays here, because the config file directs output to the current terminal
Open a new terminal and check whether port 10514 is being listened on:
[root@data-node1 ~]# netstat -lntp |grep 10514
tcp6       0      0 :::10514                :::*                    LISTEN      4312/java
[root@data-node1 ~]#
Then SSH into this machine from another host and see whether any log output appears:
[root@data-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
{
    "severity" => 6,
    "pid" => "4575",
    "program" => "sshd",
    "message" => "Accepted password for root from 192.168.77.128 port 58336 ssh2\n",
    "type" => "system-syslog",
    "priority" => 86,
    "logsource" => "data-node1",
    "@timestamp" => 2018-03-03T18:12:27.000Z,
    "@version" => "1",
    "host" => "192.168.77.130",
    "facility" => 10,
    "severity_label" => "Informational",
    "timestamp" => "Mar 4 02:12:27",
    "facility_label" => "security/authorization"
}
{
    "severity" => 6,
    "program" => "systemd",
    "message" => "Started Session 42 of user root.\n",
    "type" => "system-syslog",
    "priority" => 30,
    "logsource" => "data-node1",
    "@timestamp" => 2018-03-03T18:12:27.000Z,
    "@version" => "1",
    "host" => "192.168.77.130",
    "facility" => 3,
    "severity_label" => "Informational",
    "timestamp" => "Mar 4 02:12:27",
    "facility_label" => "system"
}
{
    "severity" => 6,
    "program" => "systemd-logind",
    "message" => "New session 42 of user root.\n",
    "type" => "system-syslog",
    "priority" => 38,
    "logsource" => "data-node1",
    "@timestamp" => 2018-03-03T18:12:27.000Z,
    "@version" => "1",
    "host" => "192.168.77.130",
    "facility" => 4,
    "severity_label" => "Informational",
    "timestamp" => "Mar 4 02:12:27",
    "facility_label" => "security/authorization"
}
{
    "severity" => 6,
    "pid" => "4575",
    "program" => "sshd",
    "message" => "pam_unix(sshd:session): session opened for user root by (uid=0)\n",
    "type" => "system-syslog",
    "priority" => 86,
    "logsource" => "data-node1",
    "@timestamp" => 2018-03-03T18:12:27.000Z,
    "@version" => "1",
    "host" => "192.168.77.130",
    "facility" => 10,
    "severity_label" => "Informational",
    "timestamp" => "Mar 4 02:12:27",
    "facility_label" => "security/authorization"
}
{
    "severity" => 6,
    "program" => "systemd",
    "message" => "Starting Session 42 of user root.\n",
    "type" => "system-syslog",
    "priority" => 30,
    "logsource" => "data-node1",
    "@timestamp" => 2018-03-03T18:12:27.000Z,
    "@version" => "1",
    "host" => "192.168.77.130",
    "facility" => 3,
    "severity_label" => "Informational",
    "timestamp" => "Mar 4 02:12:27",
    "facility_label" => "system"
}
{
    "severity" => 6,
    "pid" => "4575",
    "program" => "sshd",
    "message" => "Received disconnect from 192.168.77.128: 11: disconnected by user\n",
    "type" => "system-syslog",
    "priority" => 86,
    "logsource" => "data-node1",
    "@timestamp" => 2018-03-03T18:12:35.000Z,
    "@version" => "1",
    "host" => "192.168.77.130",
    "facility" => 10,
    "severity_label" => "Informational",
    "timestamp" => "Mar 4 02:12:35",
    "facility_label" => "security/authorization"
}
{
    "severity" => 6,
    "pid" => "4575",
    "program" => "sshd",
    "message" => "pam_unix(sshd:session): session closed for user root\n",
    "type" => "system-syslog",
    "priority" => 86,
    "logsource" => "data-node1",
    "@timestamp" => 2018-03-03T18:12:35.000Z,
    "@version" => "1",
    "host" => "192.168.77.130",
    "facility" => 10,
    "severity_label" => "Informational",
    "timestamp" => "Mar 4 02:12:35",
    "facility_label" => "security/authorization"
}
{
    "severity" => 6,
    "program" => "systemd-logind",
    "message" => "Removed session 42.\n",
    "type" => "system-syslog",
    "priority" => 38,
    "logsource" => "data-node1",
    "@timestamp" => 2018-03-03T18:12:35.000Z,
    "@version" => "1",
    "host" => "192.168.77.130",
    "facility" => 4,
    "severity_label" => "Informational",
    "timestamp" => "Mar 4 02:12:35",
    "facility_label" => "security/authorization"
}
As shown above, the collected logs are printed to the terminal in JSON format; the test succeeded.
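Instead of logging in over SSH, you can also generate a test entry with the standard logger command (the message text here is arbitrary):
[root@data-node1 ~]# logger "test message for logstash"
# rsyslog picks the message up and forwards it to port 10514, so it should appear in the Logstash terminal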
The configuration above was only for testing. In this step we change the configuration file again so that the collected logs are sent to the ES server instead of the current terminal:
[root@data-node1 ~]# vim /etc/logstash/conf.d/syslog.conf  # change to the following
input {
  syslog {
    type => "system-syslog"
    port => 10514
  }
}
output {
  elasticsearch {
    hosts => ["192.168.77.128:9200"]  # define the ES server's address
    index => "system-syslog-%{+YYYY.MM}"  # define the index name
  }
}
Again, check the configuration file for errors:
[root@data-node1 ~]# cd /usr/share/logstash/bin
[root@data-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
[root@data-node1 /usr/share/logstash/bin]#
Once it passes, start the Logstash service and check the process and listening ports:
[root@data-node1 ~]# systemctl start logstash
[root@data-node1 ~]# ps aux |grep logstash
logstash   5364  285 20.1 3757012 376260 ?      SNsl 04:36   0:34 /bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xms256m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb --path.settings /etc/logstash
root       5400  0.0  0.0 112652    964 pts/0   S+   04:36   0:00 grep --color=auto logstash
Troubleshooting:
In my case, after starting Logstash the process existed normally, but ports 9600 and 10514 were not being listened on. I checked the Logstash log for error messages, but found that nothing had been logged there at all, so I had to turn to the messages log instead, where the following error appeared:
This is a permissions problem; since the permissions are insufficient, simply fix the ownership:
[root@data-node1 ~]# chown logstash /var/log/logstash/logstash-plain.log
[root@data-node1 ~]# ll !$
ll /var/log/logstash/logstash-plain.log
-rw-r--r-- 1 logstash root 7597 Mar  4 04:35 /var/log/logstash/logstash-plain.log
[root@data-node1 ~]# systemctl restart logstash
After fixing the permissions and restarting the service, the ports were still not being listened on. The error recorded in logstash-plain.log was as follows:
As you can see, it is still a permissions problem. Because we previously started Logstash from the terminal as root, the files it created are all owned by root; again, fixing the ownership solves it:
[root@data-node1 ~]# ll /var/lib/logstash/
total 4
drwxr-xr-x 2 root root  6 Mar  4 01:50 dead_letter_queue
drwxr-xr-x 2 root root  6 Mar  4 01:50 queue
-rw-r--r-- 1 root root 36 Mar  4 01:58 uuid
[root@data-node1 ~]# chown -R logstash /var/lib/logstash/
[root@data-node1 ~]# systemctl restart logstash
This time there is no problem and the ports are being listened on normally, so the Logstash service has started successfully:
[root@data-node1 ~]# netstat -lntp |grep 9600
tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      9905/java
[root@data-node1 ~]# netstat -lntp |grep 10514
tcp6       0      0 :::10514                :::*                    LISTEN      9905/java
[root@data-node1 ~]#
However, notice that Logstash is listening on 127.0.0.1, the loopback address, which cannot be reached remotely, so we need to edit the configuration file and set the listening IP:
[root@data-node1 ~]# vim /etc/logstash/logstash.yml
http.host: "192.168.77.130"
[root@data-node1 ~]# systemctl restart logstash
[root@data-node1 ~]# netstat -lntp |grep 9600
tcp6       0      0 192.168.77.130:9600     :::*                    LISTEN      10091/java
[root@data-node1 ~]#
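Port 9600 serves Logstash's monitoring API, which gives another way to verify the service; a quick sketch against the address just configured:
[root@data-node1 ~]# curl '192.168.77.130:9600/?pretty'
# Returns basic node info (id, version, http_address) as JSON
[root@data-node1 ~]# curl '192.168.77.130:9600/_node/stats/events?pretty'
# Returns the pipeline's in/filtered/out event counters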
With the Logstash server set up, go back to the Kibana server to check the logs; the following command lists the index information:
[root@master-node ~]# curl '192.168.77.128:9200/_cat/indices?v'
health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana               6JfXc0gFSPOWq9gJI1ZX2g   1   1          1            0      6.9kb          3.4kb
green  open   system-syslog-2018.03 bUXmEDskTh6fjGD3JgyHcA   5   1         61            0    591.7kb        296.7kb
[root@master-node ~]#
As shown above, the system-syslog index defined in the Logstash configuration file has been created successfully, which proves the configuration is correct and Logstash is communicating with ES normally.
Get the details of a specific index:
[root@master-node ~]# curl -XGET '192.168.77.128:9200/system-syslog-2018.03?pretty'
{
  "system-syslog-2018.03" : {
    "aliases" : { },
    "mappings" : {
      "system-syslog" : {
        "properties" : {
          "@timestamp" : { "type" : "date" },
          "@version" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } },
          "facility" : { "type" : "long" },
          "facility_label" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } },
          "host" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } },
          "logsource" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } },
          "message" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } },
          "pid" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } },
          "priority" : { "type" : "long" },
          "program" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } },
          "severity" : { "type" : "long" },
          "severity_label" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } },
          "timestamp" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } },
          "type" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } }
        }
      }
    },
    "settings" : {
      "index" : {
        "creation_date" : "1520082481446",
        "number_of_shards" : "5",
        "number_of_replicas" : "1",
        "uuid" : "bUXmEDskTh6fjGD3JgyHcA",
        "version" : { "created" : "6020299" },
        "provided_name" : "system-syslog-2018.03"
      }
    }
  }
}
[root@master-node ~]#
If you need to delete an index later, the following command deletes the specified index:
curl -XDELETE 'localhost:9200/system-syslog-2018.03'
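Documents in an index can also be queried straight from the command line; a quick sketch, where the program:sshd query string is just an example:
curl '192.168.77.128:9200/system-syslog-2018.03/_search?q=program:sshd&size=2&pretty'
# Returns up to two matching documents together with the total hit count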
Once ES and Logstash are communicating normally, we can configure Kibana. Visit 192.168.77.128:5601 in a browser and set up the index pattern on the Kibana page:
We can also use a wildcard to match multiple indices at once:
After the pattern is configured, click "Discover":
If the following message appears on the "Discover" page, it means no log entries could be found:
This is usually a time-range problem; use the picker in the top-right corner to switch to today's logs:
Now the logs should display normally:
If it still does not work, try a few different time ranges; if none of them work, query the ES server directly from the browser and see whether it returns anything:
http://192.168.77.128:9200/system-syslog-2018.03/_search?q=*
Below is what a normal response looks like; if something is wrong, an error is returned instead:
If the ES server returns data normally but the "Discover" page still reports that no log entries can be found, take a different approach: go into the settings and delete the index pattern:
Then re-add the index pattern, but this time do not select @timestamp:
With this approach, however, you can only see the data; there is no histogram visualization:
The log data displayed here is in fact the contents of /var/log/messages, because the syslog stream Logstash collects is the same one rsyslog records in the messages file.
That is how to use Logstash to collect system logs, ship them to the ES server, and view them on the Kibana page.
Next we collect nginx access logs. As with syslog, the first step is to edit a configuration file; this step is done on the Logstash server:
[root@data-node1 ~]# vim /etc/logstash/conf.d/nginx.conf  # add the following
input {
  file {  # use a file as the input source
    path => "/tmp/elk_access.log"  # path of the file
    start_position => "beginning"  # where to start collecting from
    type => "nginx"  # log type, can be anything you like
  }
}
filter {  # configure filters
  grok {
    match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}" }  # define how the log line is parsed
  }
  geoip {
    source => "clientip"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["192.168.77.128:9200"]
    index => "nginx-test-%{+YYYY.MM.dd}"
  }
}
As before, after editing the configuration file, check it for errors:
[root@data-node1 ~]# cd /usr/share/logstash/bin
[root@data-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
[root@data-node1 /usr/share/logstash/bin]#
After the check passes, go to the directory containing your nginx virtual host configuration files and create a new virtual host configuration:
[root@data-node1 ~]# cd /usr/local/nginx/conf/vhost/
[root@data-node1 /usr/local/nginx/conf/vhost]# vim elk.conf
server {
    listen 80;
    server_name elk.test.com;

    location / {
        proxy_pass http://192.168.77.128:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    access_log /tmp/elk_access.log main2;
}
Then edit nginx's main configuration file, since the log format needs to be defined; below the log_format combined_realip line, add the following:
[root@data-node1 ~]# vim /usr/local/nginx/conf/nginx.conf
log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent "$http_referer" '
                 '"$http_user_agent" "$upstream_addr" $request_time';
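For reference, a main2 log line looks roughly like the following (a made-up example; all values are hypothetical), and this is the shape the grok pattern above parses. Note that with this format the grok field named xforwardedfor actually ends up holding $upstream_addr:
elk.test.com 192.168.77.100 - - [04/Mar/2018:10:01:02 +0800] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0" "192.168.77.128:5601" 0.012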
After editing the configuration files above, test the configuration for errors; if there are none, reload nginx:
[root@data-node1 ~]# /usr/local/nginx/sbin/nginx -t
nginx: [warn] conflicting server name "aaa.com" on 0.0.0.0:80, ignored
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@data-node1 ~]# /usr/local/nginx/sbin/nginx -s reload
[root@data-node1 ~]#
Since we want to access the elk.test.com domain we just configured from a browser on Windows, edit the Windows hosts file and add the following entry:
192.168.77.130 elk.test.com
Now the site can be accessed in the browser via this domain:
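If you would rather not touch the hosts file, requests can also be generated with an explicit Host header, which likewise lands in /tmp/elk_access.log; a quick sketch:
[root@data-node1 ~]# curl -s -o /dev/null -w "%{http_code}\n" -H "Host: elk.test.com" http://192.168.77.130/
# Prints the HTTP status code; each request is logged through the elk.test.com vhost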
After accessing it successfully, check the generated log file:
[root@data-node1 ~]# ls /tmp/elk_access.log
/tmp/elk_access.log
[root@data-node1 ~]# wc -l !$
wc -l /tmp/elk_access.log
45 /tmp/elk_access.log
[root@data-node1 ~]#
As shown above, nginx's access log has been generated.
Restart the Logstash service so the log index gets created:
systemctl restart logstash
After the restart, check on the ES server whether an index beginning with nginx-test has been created:
[root@master-node ~]# curl '192.168.77.128:9200/_cat/indices?v'
health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana               6JfXc0gFSPOWq9gJI1ZX2g   1   1          2            0     14.4kb          7.2kb
green  open   system-syslog-2018.03 bUXmEDskTh6fjGD3JgyHcA   5   1        902            0      1.1mb        608.9kb
green  open   nginx-test-2018.03.04 GdKYa6gBRke7mNgrh2PBUA   5   1         45            0      199kb         99.5kb
[root@master-node ~]#
As you can see, the nginx-test index has been created, so we can now configure that index pattern in Kibana:
Once configured, the nginx access log data can be viewed under "Discover":
As mentioned earlier, Beats is a newer addition to the ELK ecosystem: a family of lightweight log shippers. The collector we have been using so far is Logstash, but Logstash consumes considerably more resources than Beats, so the official recommendation is to use Beats for log collection. Beats is also extensible and supports custom builds.
Install Filebeat on 192.168.77.134; Filebeat is the member of the Beats family used for collecting log data:
[root@data-node2 ~]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.0.0-x86_64.rpm
[root@data-node2 ~]# rpm -ivh filebeat-6.0.0-x86_64.rpm
After installation, edit the configuration file:
[root@data-node2 ~]# vim /etc/filebeat/filebeat.yml  # add or change the following
filebeat.prospectors:
- type: log
  #enabled: false   # this line needs to be commented out
  paths:
    - /var/log/messages  # path of the log file to collect

#output.elasticsearch:  # comment these lines out for now
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

output.console:  # print the log data to the terminal
  enable: true
After configuring it, run the following command and see whether log data is printed to the terminal; if it is, Filebeat is collecting log data normally:
[root@data-node2 ~]# /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml
The configuration above was only to test whether Filebeat can collect log data. Next we modify the configuration file again so that Filebeat can be started as a service:
[root@data-node2 ~]# vim /etc/filebeat/filebeat.yml
#output.console:   # comment these two lines out
#  enable: true

output.elasticsearch:  # uncomment these two lines
  # Array of hosts to connect to.
  hosts: ["192.168.77.128:9200"]  # and fill in the ES server's address
Once the changes are made, start the Filebeat service:
[root@data-node2 ~]# systemctl start filebeat
[root@data-node2 ~]# ps axu |grep filebeat
root       3021  0.3  2.3 296360 11288 ?        Ssl  22:27   0:00 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat
root       3030  0.0  0.1 112660   960 pts/0    S+   22:27   0:00 grep --color=auto filebeat
After it starts successfully, check the indices on the ES server; a new index beginning with filebeat-6.0.0 has appeared, which means Filebeat and ES are communicating normally:
[root@master-node ~]# curl '192.168.77.128:9200/_cat/indices?v'
health status index                     uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   system-syslog-2018.03     bUXmEDskTh6fjGD3JgyHcA   5   1      73076            0     24.8mb         11.6mb
green  open   nginx-test-2018.03.04     GdKYa6gBRke7mNgrh2PBUA   5   1         91            0        1mb        544.8kb
green  open   .kibana                   6JfXc0gFSPOWq9gJI1ZX2g   1   1          3            0     26.9kb         13.4kb
green  open   filebeat-6.0.0-2018.03.04 MqQJMUNHS_OiVmO26NEWTw   3   1         66            0     64.5kb         39.1kb
[root@master-node ~]#
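To see exactly what Filebeat ships, you can pull a single document from the new index; a quick sketch:
[root@master-node ~]# curl '192.168.77.128:9200/filebeat-6.0.0-2018.03.04/_search?size=1&pretty'
# Returns one document, showing fields such as message, beat.hostname and source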
Once the ES server has the index, it can be configured in Kibana:
That is how to collect log data with Filebeat; as you can see, it is simpler to configure than Logstash and uses fewer resources.
Centralized log analysis platform - ELK Stack - security solution X-Pack:
http://www.jianshu.com/p/a49d93212eca
https://www.elastic.co/subscriptions
The evolution of the Elastic Stack:
Building a real-time log analysis system at LinkedIn on Kafka and Elasticsearch:
Using Redis as a log buffer in the Elastic Stack:
Building a massive-scale log analysis platform with ELK + Filebeat + Kafka + ZooKeeper:
On centralized log management for operations with ELK + ZooKeeper + Kafka: