ELK Part 3: Kibana Usage and Tomcat/Nginx Log Format Processing

Part 1: Installing Kibana

Kibana is mainly used to search the data stored in Elasticsearch and to present it visually; newer versions are built on Node.js.

1. Download URL:

https://www.elastic.co/downloads/kibana

2. Extract and install:

[root@node6 local]# tar xvf kibana-4.1.1-linux-x64.tar.gz 
[root@node6 local]# mv kibana-4.1.1-linux-x64 kibana
[root@node6 ~]# cd /usr/local/kibana/
[root@node6 kibana]# ls
bin config LICENSE.txt node plugins README.txt src

3. Edit the configuration file:

[root@node6 kibana]# cd config/
[root@node6 config]# ls
kibana.yml
[root@node6 config]# vim kibana.yml
elasticsearch_url: "http://192.168.10.206:9200"

4. Start it directly (in the foreground):

[root@node6 kibana]# bin/kibana 
{"name":"Kibana","hostname":"node6.a.com","pid":3942,"level":30,"msg":"No existing kibana index found","time":"2016-04-12T12:20:50.069Z","v":0}
{"name":"Kibana","hostname":"node6.a.com","pid":3942,"level":30,"msg":"Listening on 0.0.0.0:5601","time":"2016-04-12T12:20:50.096Z","v":0}

5. Verify that it started:

[root@node6 ~]# ps -ef | grep  kibana
root       3942   3745  3 20:20 pts/2    00:00:01 bin/../node/bin/node bin/../src/bin/kibana.js
root       3968   3947  0 20:21 pts/3    00:00:00 grep kibana
[root@node6 ~]# ss -tnl | grep 5601
LISTEN     0      128                       *:5601                     *:*   

6. Start it in the background:

[root@node6 kibana]# nohup  bin/kibana &
[1] 3975

7. Access test: Kibana listens on port 5601 by default.
http://192.168.10.206:5601

8. Configure an index pattern: the pattern name must match the index names generated by the logstash output; a minimal sketch of such an output is shown below.
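As a minimal sketch (the Elasticsearch address is the one used in kibana.yml above; the index prefix logstash-nginx-access is just an illustrative name), a logstash output like the following would be matched in Kibana by the index pattern "logstash-nginx-access-*":

output {
  elasticsearch {
    hosts => ["192.168.10.206:9200"]                  # assumed Elasticsearch address (same as in kibana.yml above)
    index => "logstash-nginx-access-%{+YYYY.MM.dd}"   # Kibana's index pattern "logstash-nginx-access-*" matches these daily indices
  }
}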

9. View the data: by default the most recent 500 documents are displayed.

10. Exact searches:

11. Advanced search syntax:

status:404 OR status:500   # matches documents whose status is either 404 or 500
status:301 AND status:200  # matches documents that contain both 301 and 200 at the same time
status:[200 TO 300]        # matches the given range of values

12. Save frequently used searches:


 

Part 2: Other commonly used modules

1. System log collection ---> syslog: configure a syslog input whose results are written to Elasticsearch. It listens on port 514; each server whose logs you want to collect points its syslog daemon at this host, and it is then ready to use (see the sketch below).
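A minimal sketch of such a collector, assuming logstash runs on the ELK server and Elasticsearch is at the address used earlier in this article:

input {
  syslog {
    type => "system-syslog"
    port => 514                               # ports below 1024 require logstash to run as root
  }
}
output {
  elasticsearch {
    hosts => ["192.168.10.206:9200"]          # assumed Elasticsearch address
    index => "system-syslog-%{+YYYY.MM.dd}"
  }
}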

2. Access logs: convert nginx logs into JSON format (see Part 4 below).

3. Error logs: use a codec plugin:

https://www.elastic.co/guide/en/logstash/1.5/codec-plugins.html
input {
  stdin {
    codec => multiline {        # multi-line logs, e.g. Java stack traces
      pattern => "^\s"          # alternative: pattern => ".*\t.*"
      what => "previous"        # lines matching the pattern (starting with whitespace) are merged into the previous line, so a multi-line entry becomes a single event
    }
  }
}

4. Application (runtime) logs: use codec => json. If the log is not JSON you have to match it with grok, which is comparatively tedious. If logs go missing, check logstash.log, and also verify that the log really is valid JSON (a minimal input sketch follows the link below):

JSON validation site: http://www.bejson.com/
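A minimal sketch of the codec => json case mentioned above; the path and type are hypothetical placeholders:

input {
  file {
    path  => "/var/log/app/app.log"   # hypothetical application log, one JSON object per line
    type  => "app-json"
    codec => "json"                   # each line is parsed into fields; lines that are not valid JSON are flagged with a parse-failure tag
  }
}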

5. Kibana time zone and time issues: Kibana automatically adds 8 hours to the displayed time based on the browser's time zone. When events are written through logstash this is handled automatically, but writing through a Python script or similar can produce time-offset problems.
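If you do have to ingest events whose timestamps were written in local time (for example by a script), one hedged workaround is a logstash date filter with an explicit timezone, so @timestamp is stored as UTC and Kibana's +8h display comes out right; the field name logtime and its pattern are assumptions:

filter {
  date {
    match    => [ "logtime", "yyyy-MM-dd HH:mm:ss" ]   # hypothetical field holding the local-time string
    timezone => "Asia/Shanghai"                        # interpret it as UTC+8 so @timestamp is stored as UTC
    target   => "@timestamp"
  }
}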

6. Show the geographic origin of IP addresses on a map (see Part 5 below):

https://www.elastic.co/guide/en/logstash/1.5/filter-plugins.html

7. Conditionals:

input {
  file {
    type => "apache"
    path => "/var/log/apache.log"
  }
  file {
    type => "tomcat"
    path => "/var/log/tomcat.log"
  }
}
output {
  if [type] == "apache" {            # if the type is apache, route it as follows
    redis {
      data_type => "list"
      key => "system-message-jack"
      host => "192.168.10.205"
      port => "6379"
      db => "0"
    }
  }
  if [type] == "tomcat" {            # if the type is tomcat, route it here instead
    redis {
      data_type => "list"
      key => "system-message-tomcat"
      host => "192.168.10.205"
      port => "6379"
      db => "1"                      # write to a different redis database
    }
  }
}

For nginx, it is best to set a buffer size for the access log, e.g. 64k.

Kibana needs the Elasticsearch keys (fields) added.

Search syntax: search key:value pairs directly, e.g. a:b, and combine them with AND, OR and NOT; ranges are written like [200 TO 299].

6.測試logstash配置文件語法是否正確:

8.1: Output when the configuration is correct:

[root@elk-server2 conf.d]# /etc/init.d/logstash configtest
Configuration OK

8.2: Output when there is a syntax error:

[root@elk-server2 tianqi]# /etc/init.d/logstash configtest
The given configuration is invalid. Reason: Expected one of #, {, } at line 17, column 53 (byte 355) after output {
    if  [type] == "nginx3"  {
        elasticsearch {
                hosts => ["192.168.0.251:9200"]
                index => "logstash-newsmart-nginx3-" {:level=>:fatal}  # the error message points at the exact location of the syntax error

 

Part 3: Tomcat logs

1. Tomcat's access log is not JSON by default, so when logstash parses it there are no keys and values. We can therefore define the Tomcat access-log format as JSON (in server.xml, in the AccessLogValve):

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log." suffix=".log"
       pattern='{"client":"%h",  "client user":"%l",   "authenticated":"%u",   "access time":"%t",     "method":"%r",   "status":"%s",  "send bytes":"%b",  "Query?string":"%q",  "partner":"%{Referer}i",  "Agent version":"%{User-Agent}i"}'/>

2. The resulting log entries look like this:

{"client":"180.95.129.206",  "client user":"-",   "authenticated":"-",   "access time":"[20/Apr/2016:03:47:40 +0000]",     "method":"GET /image/android_logo.png HTTP/1.1",   "status":"200",  "send bytes":"1915",  "Query string":"",  "partner":"http://mobile.weathercn.com/index.do?id=101160101&partner=1000001003",  "Agent version":"Mozilla/5.0 (Linux; U; Android 5.1.1; zh-cn; NX510J Build/LMY47V) AppleWebKit/537.36 (KHTML, like Gecko)Version/4.0 Chrome/37.0.0.0 MQQBrowser/6.6 Mobile Safari/537.36"}

3. Validate online that the format is legal JSON:

URL: http://www.bejson.com/ — paste a complete log line into the validation box and click validate.

 

Part 4: Nginx log format processing

1. Edit nginx.conf and define a custom log format:

[root@node5 ~]# vim  /etc/nginx/nginx.conf

2. Add the following:

    log_format logstash_json '{"@timestamp":"$time_iso8601",'
        '"host":"$server_addr",'
        '"clientip":"$remote_addr",'
        '"size":$body_bytes_sent,'
        '"responsetime":$request_time,'
        '"upstreamtime":"$upstream_response_time",'
        '"upstreamhost":"$upstream_addr",'
        '"http_host":"$host",'
        '"url":"$uri",'
        '"domain":"$host",'
        '"xff":"$http_x_forwarded_for",'
        '"referer":"$http_referer",'
        '"agent":"$http_user_agent",'
        '"status":"$status"}';

3. Edit the virtual-host configuration:

[root@node5 ~]# grep -v "#"  /etc/nginx/conf.d/locathost.conf  | grep -v "^$" 
server {
    listen       9009;  # listening port
    server_name  www.a.com;  # server name

    access_log  /var/log/nginx/json.access.log  logstash_json;  # write the access log to /var/log/nginx/json.access.log using the logstash_json format defined in the main nginx.conf
    include /etc/nginx/default.d/*.conf;
    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
    error_page  404              /404.html;
    location = /404.html {
        root   /usr/share/nginx/html;
    }
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}

4. Restart nginx and confirm that the log is now in JSON format:

[root@node5 ~]# tail /var/log/nginx/json.access.log 
{"@timestamp":"2016-04-12T22:15:19+08:00","host":"192.168.10.205","clientip":"192.168.10.205","size":3698,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.10.205","url":"/index.html","domain":"192.168.10.205","xff":"-","referer":"-","agent":"ApacheBench/2.3","status":"200"}
{"@timestamp":"2016-04-12T22:15:19+08:00","host":"192.168.10.205","clientip":"192.168.10.205","size":3698,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.10.205","url":"/index.html","domain":"192.168.10.205","xff":"-","referer":"-","agent":"ApacheBench/2.3","status":"200"}
{"@timestamp":"2016-04-12T22:15:19+08:00","host":"192.168.10.205","clientip":"192.168.10.205","size":3698,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.10.205","url":"/index.html","domain":"192.168.10.205","xff":"-","referer":"-","agent":"ApacheBench/2.3","status":"200"}
{"@timestamp":"2016-04-12T22:15:19+08:00","host":"192.168.10.205","clientip":"192.168.10.205","size":3698,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.10.205","url":"/index.html","domain":"192.168.10.205","xff":"-","referer":"-","agent":"ApacheBench/2.3","status":"200"}
{"@timestamp":"2016-04-12T22:15:19+08:00","host":"192.168.10.205","clientip":"192.168.10.205","size":3698,"responsetime":0.001,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.10.205","url":"/index.html","domain":"192.168.10.205","xff":"-","referer":"-","agent":"ApacheBench/2.3","status":"200"}
{"@timestamp":"2016-04-12T22:15:19+08:00","host":"192.168.10.205","clientip":"192.168.10.205","size":3698,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.10.205","url":"/index.html","domain":"192.168.10.205","xff":"-","referer":"-","agent":"ApacheBench/2.3","status":"200"}

5. Validate the log format online:

Validation URL: http://www.bejson.com/

 

Part 5: Map visualization

Display access-count statistics by IP origin on a map:

1. In the home directory on the Elasticsearch server, download a Filebeat index template:

cd ~
curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json  # this is the index template file

2. Load the template:

[root@elk-server1 ~]# curl -XPUT 'http://192.168.0.251:9200/_template/filebeat?pretty' -d@filebeat-index-template.json  # 192.168.0.251:9200 is the address Elasticsearch listens on
{
  "acknowledged" : true  #必定要返回true才表示成功
}

3. Download the GeoIP database file:

[root@elk-server1 ~]# cd /etc/logstash/
[root@elk-server1 logstash]# curl -O "http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz"
[root@elk-server1 logstash]# gunzip GeoLiteCity.dat.gz
[root@elk-server1 logstash]# ls
conf.d  GeoLiteCity.dat  # confirm the file exists

4. Configure logstash to use GeoIP:

[root@elk-server1 logstash]# vim /etc/logstash/conf.d/11-mobile-tomcat-access.conf  # logstash configuration files must end with .conf

input {
        redis {
                data_type => "list"
                key => "mobile-tomcat-access-log"
                host => "192.168.0.251"
                port => "6379"
                db => "0"
                codec  => "json"
        }
}

# The input section reads, from redis, the access logs that the client-side logstash has already parsed and submitted

filter {
        if [type] == "mobile-tomcat" {
        geoip {
                source => "client"  # "client" is the key name the client-side logstash assigns to the public IP when collecting logs; it must match the actual field name, because the corresponding IP address is looked up through it
                target => "geoip"
                database => "/etc/logstash/GeoLiteCity.dat"
                add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
                add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
        }
    mutate {
      convert => [ "[geoip][coordinates]", "float"]
        }
    }
}


output { 
        if [type] == "mobile-tomcat" {
        elasticsearch {
                hosts => ["192.168.0.251"]
                manage_template => true
                index => "logstash-mobile-tomcat-access-log-%{+YYYY.MM.dd}" # the index name must start with "logstash-", otherwise you will hit errors like the geoip type not being found when using the map
                flush_size => 2000
                idle_flush_time => 10
                }
        }
}

5. In the Kibana UI add the new index pattern, then go to Visualize ----> Tile map ----> From a new search ----> Select an index pattern ---> choose the index created above ----> Geo coordinates, and click the green run button:
