Building an ELK Log Analysis Platform, End to End

1. Environment

OS: CentOS 6.5

JDK: 1.8

Elasticsearch 5.2.2

Logstash 5.2.2

Kibana 5.2.2

2. Installation

Step 1: Install the JDK

Download the JDK: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

This environment uses the 64-bit tar.gz package; copy it to the /usr/local directory on the target server.

[root@server~]# cd /usr/local/ 
[root@localhost local]# tar -xzvf jdk-8u111-linux-x64.tar.gz

Configure environment variables

[root@localhost local]# vim /etc/profile

Append the following to the end of the file. (If the server needs multiple JDK versions, you can instead add these environment variables to the ELK startup scripts later, so that ELK does not affect other applications.)

JAVA_HOME=/usr/local/jdk1.8.0_111
JRE_HOME=/usr/local/jdk1.8.0_111/jre
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME
export JRE_HOME
export CLASSPATH
export PATH

ulimit -u 4096

[root@server local]# source /etc/profile
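
To confirm the JDK is now picked up from the new PATH, you can check the version:

[root@server local]# java -version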

Configure the limits parameters

[root@server local]# vim /etc/security/limits.conf
Add the following:

* soft nproc 65536
* hard nproc 65536
* soft nofile 65536
* hard nofile 65536
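
These limits only take effect for new login sessions; after logging in again (for example as the elasticsearch user), they can be checked with:

ulimit -n    # max open files
ulimit -u    # max user processes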

[root@server local]# vim /etc/sysctl.conf

vm.max_map_count=262144

[root@server local]# sysctl -p

Create a user to run ELK

[root@server local]# groupadd elasticsearch

[root@server local]# useradd -g elasticsearch elasticsearch

[root@server local]# mkdir /opt/elk

[root@server local]# chown elasticsearch. /opt/elk -R

Step 2: Install ELK

如下由elasticsearch用戶操做

Log in to the server as the elasticsearch user:

[root@server ~]# su  - elasticsearch

Download the ELK packages from https://www.elastic.co/downloads, upload them to the server, and extract each one with tar -xzvf <package-name>; an example follows the download links below.

Kibana 5.2.2 download: https://artifacts.elastic.co/downloads/kibana/kibana-5.2.2-linux-x86_64.tar.gz

Elasticsearch 5.2.2 download: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.2.2.tar.gz

Logstash 5.2.2 download: https://artifacts.elastic.co/downloads/logstash/logstash-5.2.2.tar.gz
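
For example, assuming the server has direct internet access, all three packages can be fetched and unpacked under /opt/elk:

cd /opt/elk
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.2.2.tar.gz
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.2.2.tar.gz
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.2.2-linux-x86_64.tar.gz
tar -xzvf elasticsearch-5.2.2.tar.gz
tar -xzvf logstash-5.2.2.tar.gz
tar -xzvf kibana-5.2.2-linux-x86_64.tar.gz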

Configure Elasticsearch (single-node ES)

 [root@server elk]# cd elasticsearch-5.2.2

[root@server elasticsearch-5.2.2]# vim config/elasticsearch.yml 

Add:

network.host: 192.168.1.124
http.port: 9200
bootstrap.system_call_filter: false
http.cors.enabled: true
http.cors.allow-origin: "*"
action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*,logstash*


By default Elasticsearch starts with a 2 GB heap; for a test environment, reduce it in the JVM options file:

 [root@server elasticsearch-5.2.2]# vim config/jvm.options 

-Xms2g  -->  -Xms256m
-Xmx2g  -->  -Xmx256m
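
Equivalently, assuming the stock jvm.options still contains the default -Xms2g and -Xmx2g lines, the same change can be scripted:

sed -i 's/^-Xms2g$/-Xms256m/;s/^-Xmx2g$/-Xmx256m/' config/jvm.options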

Start Elasticsearch

[root@server elasticsearch-5.2.2]# ./bin/elasticsearch -d
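
To confirm the node is up, query the HTTP port using the address and port set in elasticsearch.yml above:

curl http://192.168.1.124:9200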

 

Note: once Elasticsearch is running, if you want a web UI you can install the head plugin; see the following article:

http://www.cnblogs.com/valor-xh/p/6293485.html?utm_source=itdadao&utm_medium=referral

 

Install Logstash

[root@server logstash-5.2.2]# pwd
/opt/elk/logstash-5.2.2

[root@server logstash-5.2.2]# mkdir config.d  

[root@server config.d]# vim nginx_accss.conf

input {
    file {
        path => [ "/usr/local/nginx/logs/access.log" ]
        start_position => "beginning"
        ignore_older => 0
        type => "nginx-access"
    }
}

filter {
    if [type] == "nginx-access" {
    grok {
        match => [
            "message","%{IPORHOST:clientip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:timestamp}\] \"%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{NOTSPACE:http_x_forwarded_for}"
        ]
    }

   urldecode {
       all_fields => true
   }

   date {
        locale => "en"
        match => ["timestamp" , "dd/MMM/YYYY:HH:mm:ss Z"]

   }
    geoip {
        source => "clientip"
        target => "geoip"
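        # GeoLite2 City database: not shipped with Logstash, download it from MaxMind and place it at this path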
        database => "/opt/elk/logstash-5.2.2/data/GeoLite2-City.mmdb"
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
        convert => [ "[geoip][coordinates]", "float" ]
        convert => [ "response","integer" ]
        convert => [ "bytes","integer" ]
        replace => { "type" => "nginx_access" }
        remove_field => "message"
   }
   }


}
output {
    elasticsearch {
        hosts => ["192.168.1.124:9200"]
        index => "logstash-nginx-access-%{+YYYY.MM.dd}"
    }
    stdout {codec => rubydebug}
}

A brief explanation:

A Logstash configuration file is organized into three sections (a minimal runnable example follows this list):

input{}: collects the logs; it can read from files, from Redis or Kafka, or listen on a port so that the systems producing the logs write to Logstash directly.

filter{}: parses and filters the collected logs, and defines the fields that will be extracted and displayed.

output{}: sends the filtered logs to Elasticsearch, to files, to Redis, and so on.
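
As a minimal illustration, a pipeline that reads from stdin and prints parsed events back to the console can be run straight from the command line (the filter block is optional and omitted here):

/opt/elk/logstash-5.2.2/bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'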

 

Logstash configuration and notes: using an ELK stack to analyze Nginx logs and visualize the data.

 

Test the configuration file for errors:

/opt/elk/logstash-5.2.2/bin/logstash -t -f /opt/elk/logstash-5.2.2/config.d/nginx_accss.conf

Start Logstash:

nohup /opt/elk/logstash-5.2.2/bin/logstash -f /opt/elk/logstash-5.2.2/config.d/nginx_accss.conf &

Check whether it started successfully:

tail -f nohup.out 

When parsed events (the rubydebug output written by the stdout plugin) start appearing in nohup.out, Logstash has started successfully.

Install Kibana

Edit config/kibana.yml in the Kibana directory, point it at the Elasticsearch node, then save and exit.
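
A minimal sketch of config/kibana.yml for this single-node setup, assuming Kibana binds to the same address and keeps the default port 5601:

server.port: 5601
server.host: "192.168.1.124"
elasticsearch.url: "http://192.168.1.124:9200"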

Start Kibana:

nohup ./bin/kibana &

 

Open Kibana in a browser (http://192.168.1.124:5601 by default) and create an index pattern of logstash-nginx-access-*, matching the index name defined in the Logstash output section. The hit count displayed for that index pattern represents the number of log entries collected in real time.
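
To double-check from the Elasticsearch side that the daily index is being created and filled, list the indices:

curl 'http://192.168.1.124:9200/_cat/indices?v'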
