All versions in this article are 5.2.0, since the company's Elasticsearch (es) environment runs 5.2.0.
Installing Elasticsearch was covered in a previous article, so it is not repeated here.
[Installing and configuring Elasticsearch and the head plugin](https://www.cnblogs.com/chaos-x/p/9446250.html)
Kibana is a tool that visualizes the data in your es, including real-time statistics and analysis, and requires essentially zero configuration.
Download the Kibana 5.2.0 Linux 64-bit tar.gz:
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.2.0-linux-x86_64.tar.gz
tar -zxvf kibana-5.2.0-linux-x86_64.tar.gz
vim kibana-5.2.0-linux-x86_64/config/kibana.yml
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 30000

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# The URL of the Elasticsearch instance to use for all your queries.
# The address and port of es
elasticsearch.url: "http://localhost:19200"
sh kibana-5.2.0-linux-x86_64/bin/kibana
Visit the Kibana server's address plus that port in a browser and you will see the Kibana page.
Logstash is a log collection tool: it can collect, parse, decode, filter, and output logs. Typically, Filebeat collects logs and ships them to Logstash, which processes them and stores them in es. Filebeat will be covered later.
https://artifacts.elastic.co/downloads/logstash/logstash-5.2.0.tar.gz
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.2.0.tar.gz
tar -zxvf logstash-5.2.0.tar.gz
sh logstash-5.2.0/bin/logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'
After it starts, type 'Hello World' and press Enter. Output like the following will appear:
{
       "message" => "Hello World",
      "@version" => "1",
    "@timestamp" => "2014-08-07T10:30:59.937Z",
          "host" => "raochenlindeMacBook-Air.local",
}
Logstash adds some extra information to each event. The most important is @timestamp, which marks when the event occurred. Because this field is involved in Logstash's internal pipeline, it must be a Joda object; if you try to rename a string field to @timestamp yourself, Logstash will raise an error. Therefore, use the filters/date plugin to manage this special field.
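For instance, to have @timestamp reflect the time recorded in the log line itself rather than the time Logstash read it, a date filter along these lines can be used (a sketch; the source field name `timestamp` is an assumption that matches the grok pattern used later in this article):

```
filter {
  date {
    # Parse the access-log time format, e.g. 18/Dec/2018:16:44:56 +0800, into @timestamp
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
```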
You can also write the configuration to a file and start Logstash with it.
test.yml
Create the file:
cd logstash-5.2.0/config/
vim test.yml
input {
  stdin {}
}
output {
  stdout {
    codec => rubydebug
  }
}
Start:
sh logstash-5.2.0/bin/logstash -f logstash-5.2.0/config/test.yml
This achieves the same effect as step 3.
nginx-log.yml
Create the file:
cd logstash-5.2.0/config/
vim nginx-log.yml
input {
  file {
    # Use a file as the input source
    path => "/usr/local/nginx/logs/access.log"  # path to the file
    start_position => "beginning"  # where to start collecting; here, from the beginning of the file
    type => "nginx"  # log type, can be customized
  }
}
filter {
  # Configure filters
  grok {
    # Define how the log lines are parsed
    match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}"}
  }
  geoip {
    source => "clientip"
  }
}
output {
  # Standard output, printed to the terminal
  stdout {
    codec => rubydebug
  }
  # Output to es
  elasticsearch {
    hosts => ["127.0.0.1:19200"]
    index => "nginx-log-%{+YYYY.MM.dd}"
  }
}
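To see roughly what the grok pattern above extracts, here is a small Python sketch. It is an illustration only: it uses Python's re module rather than the real grok engine, and the sample log line and the per-field regexes are simplified assumptions, not the exact grok definitions.

```python
import re

# Rough Python approximation of the grok pattern: extract the same named
# fields from an access-log line (simplified; not the real grok patterns).
LOG_PATTERN = re.compile(
    r'(?P<http_host>\S+) (?P<clientip>\S+) - (?P<remote_user>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<http_verb>\S+) (?P<http_request>\S+)(?: HTTP/(?P<http_version>[\d.]+))?" '
    r'(?P<response>\d+) (?P<bytes_read>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)" "(?P<xforwardedfor>[^"]*)" '
    r'(?P<request_time>[\d.]+)'
)

# A made-up sample line in the format the grok pattern expects
line = ('example.com 203.0.113.7 - - [18/Dec/2018:16:44:56 +0800] '
        '"GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0" "-" 0.003')

m = LOG_PATTERN.match(line)
fields = m.groupdict()
print(fields['clientip'])      # 203.0.113.7
print(fields['response'])      # 200
print(fields['request_time'])  # 0.003
```

Each named group becomes a separate field on the event, which is what later lets Kibana aggregate on clientip, response, and so on.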
sh logstash-5.2.0/bin/logstash -f logstash-5.2.0/config/nginx-log.yml --config.test_and_exit
vim /usr/local/nginx/conf/nginx.conf
http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';

    log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$upstream_addr" $request_time';
}
    #access_log  logs/host.access.log  main;
    # Add the following line; the log format main2 must be defined above, otherwise it cannot be referenced here
    access_log  logs/elk_access.log  main2;

    location / {
        root   html;
        index  index.html index.htm;
        # Add the following three lines to pass along the requested host, the client's remote address, and the addresses of each proxy layer
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
sh /usr/local/nginx/sbin/nginx -s reload
Check the configuration:
sh logstash-5.2.0/bin/logstash -f logstash-5.2.0/config/nginx-log.yml --config.test_and_exit
Start:
sh logstash-5.2.0/bin/logstash -f logstash-5.2.0/config/nginx-log.yml
When the terminal prints output like the following, it is working:
{
         "@timestamp" => 2018-12-18T08:44:56.361Z,
    "plugin_instance" => "vda",
               "read" => 467266,
             "plugin" => "disk",
               "host" => "172.24.121.18",
           "@version" => "1",
      "collectd_type" => "disk_ops",
               "type" => "collectd",
              "write" => 12204609
}
{
           "longterm" => 0.08,
         "@timestamp" => 2018-12-18T08:44:46.362Z,
             "plugin" => "load",
          "shortterm" => 0.06,
               "host" => "172.24.121.18",
           "@version" => "1",
      "collectd_type" => "load",
               "type" => "collectd",
            "midterm" => 0.04
}
Open the Kibana page, which looks like this:
In Kibana, before analyzing and visualizing data, you first need to create an Index Pattern.
When choosing the index, you can use the wildcard '*' to group all of the nginx-log access-log indices together.
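As a sketch of why the wildcard works: the elasticsearch output above writes to one index per day (nginx-log-%{+YYYY.MM.dd}), so an index pattern of nginx-log-* matches every daily index. Illustrated in Python (the dates are made up for the example):

```python
from datetime import date
from fnmatch import fnmatch

# What the daily index names produced by "nginx-log-%{+YYYY.MM.dd}" look like
def daily_index(d: date) -> str:
    return "nginx-log-" + d.strftime("%Y.%m.%d")

indices = [daily_index(date(2018, 12, n)) for n in (16, 17, 18)]
print(indices)
# ['nginx-log-2018.12.16', 'nginx-log-2018.12.17', 'nginx-log-2018.12.18']

# The Kibana index pattern 'nginx-log-*' matches all of them
assert all(fnmatch(name, "nginx-log-*") for name in indices)
```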
Time-field name specifies a date-type field, which is used for time-based statistics later on.
Then you can select the configured Index Pattern here.
The time on the x-axis is the date field selected when the Index Pattern was created.