Building an ELK Platform (Part 2)

1. Purpose

This deployment document was written to guide the work of building a standard ELK platform on CentOS 6.8.

2. Definitions

Elasticsearch, Logstash, and Kibana working together with Redis.

3. Scope

Applies to operations and maintenance engineers building a standard ELK platform on CentOS 6.8.

4. Environment

Role                                                 OS              CPU/Memory   External IP    Internal IP
Elasticsearch                                        CentOS 6.8 x64  8 CPU/32 GB  -              192.168.0.15
Elasticsearch+filebeat                               CentOS 6.8 x64  8 CPU/32 GB  -              192.168.0.16
filebeat                                             CentOS 6.8 x64  8 CPU/16 GB  -              192.168.0.17
filebeat                                             CentOS 6.8 x64  8 CPU/16 GB  -              192.168.0.18
Elasticsearch+Redis+Kibana+filebeat+logstash_server  CentOS 6.8 x64  8 CPU/64 GB  115.182.45.39  192.168.0.19

Software versions (the same on every host):

Elasticsearch: elasticsearch-5.4.1.tar.gz
Logstash:      logstash-5.4.1.tar.gz
Kibana:        kibana-5.4.1-linux-x86_64.tar.gz
Redis:         redis-3.0.7
JDK:           jdk-8u144-linux-x64

5. Pre-installation preparation

Download the following packages:

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.4.1.tar.gz

wget https://artifacts.elastic.co/downloads/kibana/kibana-5.4.1-linux-x86_64.tar.gz

wget https://artifacts.elastic.co/downloads/logstash/logstash-5.4.1.tar.gz

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.4.1-linux-x86_64.tar.gz

wget http://download.redis.io/releases/redis-3.0.7.tar.gz

 

6. Installation

6.1 Install the Java environment

Install jdk-8u144-linux-x64 on each of the five servers:

#mkdir -p /usr/java

#tar -xzvf jdk-8u144-linux-x64.tar.gz -C /usr/java

#vim /etc/profile

Append the following:

export JAVA_HOME=/usr/java/jdk1.8.0_144

export JRE_HOME=${JAVA_HOME}/jre

export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib

export PATH=${JAVA_HOME}/bin:$PATH

# source /etc/profile

# java -version

java version "1.8.0_144"

Java(TM) SE Runtime Environment (build 1.8.0_144-b01)

Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

 

6.2 Install Logstash

6.2.1 Unpack the Logstash tarball

On 192.168.0.19, unpack logstash:

# tar -xzvf logstash-5.4.1.tar.gz -C /data/soft/

6.2.2 Create the conf directory

# mkdir -p /data/soft/logstash-5.4.1/conf/

 

6.3 Install Redis

6.3.1 Install Redis on 192.168.0.19

#yum install gcc gcc-c++ -y   # skip if already installed

#wget http://download.redis.io/releases/redis-3.0.7.tar.gz

#tar xf redis-3.0.7.tar.gz

#cd redis-3.0.7

#make

#mkdir -p /usr/local/redis/{conf,bin}

#cp ./*.conf /usr/local/redis/conf/

#cp runtest* /usr/local/redis/

#cd utils/

#cp mkrelease.sh /usr/local/redis/bin/

#cd ../src

#cp redis-benchmark redis-check-aof redis-check-dump redis-cli redis-sentinel redis-server redis-trib.rb /usr/local/redis/bin/

Create the Redis data and log directories:

#mkdir -pv /data/redis/db

#mkdir -pv /data/log/redis

 

6.3.2 Edit the Redis configuration

#cd /usr/local/redis/conf

#vi redis.conf

Change "dir ./" to "dir /data/redis/db/", then save and exit.

Redis performance tuning:

#vim /usr/local/redis/conf/redis.conf

maxmemory 6291456000    # cap Redis at roughly 6 GB (value is in bytes)
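Because Redis acts as the buffering queue here, reaching maxmemory under the default noeviction policy makes writes fail loudly rather than silently dropping log entries, which is usually what you want in a log pipeline. A quick way to keep an eye on memory usage (a minimal sketch, assuming the paths from the install above):

#/usr/local/redis/bin/redis-cli INFO memory | grep used_memory_human    # current usage

#/usr/local/redis/bin/redis-cli CONFIG GET maxmemory                    # confirm the cap is active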

6.3.3 Start Redis

#nohup /usr/local/redis/bin/redis-server /usr/local/redis/conf/redis.conf &

6.3.4 Verify the Redis process

#ps -ef | grep redis

root      4425  1149  0 16:21 pts/0    00:00:00 /usr/local/redis/bin/redis-server *:6379                          

root      4435  1149  0 16:22 pts/0    00:00:00 grep redis

#netstat -tunlp

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   

tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      1402/sshd           

tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      1103/master         

tcp        0      0 0.0.0.0:6379                0.0.0.0:*                   LISTEN      4425/redis-server * 

tcp        0      0 :::22                       :::*                        LISTEN      1402/sshd           

tcp        0      0 ::1:25                      :::*                        LISTEN      1103/master         

tcp        0      0 :::6379                     :::*                        LISTEN      4425/redis-server *
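Before wiring up filebeat, a quick functional check with redis-cli confirms the server answers (a minimal sketch using the binaries installed above):

#/usr/local/redis/bin/redis-cli ping                        # expect PONG

#/usr/local/redis/bin/redis-cli -n 7 llen tomcat-user-log   # length of the db 7 list; 0 until filebeat ships data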

 

6.4 Configure and run filebeat

6.4.1 Collect the /data/soft/tomcat-user/logs/catalina.out log on 192.168.0.16

# vim /newnet.bak/elk/filebeat-5.4.1-linux-x86_64/filebeat.yml   (edit as follows)

filebeat.prospectors:

- input_type: log
  paths:
    - /data/soft/tomcat-user/logs/catalina.out
  tags: ["tomcat-user"]

output.redis:
  # Array of hosts to connect to.
  hosts: ["192.168.0.19:6379"]
  key: "tomcat-user-log"
  db: 7
  timeout: 5

Note: the template.* settings found in the default filebeat.yml apply only to output.elasticsearch and have no effect with the redis output.

This configuration ships the logs produced under that path into Redis; the key is user-defined, to make the stream easy to identify.

 

6.4.2 Start filebeat

# nohup /newnet.bak/elk/filebeat-5.4.1-linux-x86_64/filebeat -c filebeat.yml &

# ps -ef | grep filebeat   # verify it started
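To go beyond the process check, you can validate the YAML and then confirm events are reaching Redis (a sketch; -configtest is the filebeat 5.x flag, and the redis-cli command is run on 192.168.0.19 using the Redis install from section 6.3):

# /newnet.bak/elk/filebeat-5.4.1-linux-x86_64/filebeat -configtest -c /newnet.bak/elk/filebeat-5.4.1-linux-x86_64/filebeat.yml

# /usr/local/redis/bin/redis-cli -n 7 llen tomcat-user-log    # should grow as catalina.out is written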

 

6.5 Install Elasticsearch

Install Elasticsearch on the three servers 192.168.0.15, 192.168.0.16, and 192.168.0.19. The example below installs it on 192.168.0.15.

6.5.1 Add an elasticsearch user; the Elasticsearch service must be started under an ordinary (non-root) user.

#adduser elasticsearch

#passwd elasticsearch

# tar -xzvf elasticsearch-5.4.1.tar.gz -C /home/elasticsearch/

The configuration files cannot be modified if the package is simply copied into /home/elasticsearch, so adjust permissions and ownership:

#chmod 777 /home/elasticsearch -R

#chown elasticsearch.elasticsearch /home/elasticsearch -R

#su - elasticsearch

 

6.5.2 Edit the Elasticsearch configuration

# cd elasticsearch-5.4.1

#mkdir {data,logs}

# cd config

#vim elasticsearch.yml

cluster.name: serverlog      # cluster name; user-defined

node.name: node-1            # node name; user-defined

path.data: /home/elasticsearch/elasticsearch-5.4.1/data   # data path

path.logs: /home/elasticsearch/elasticsearch-5.4.1/logs   # log path

network.host: 192.168.0.15   # this node's IP

http.port: 9200              # HTTP port

discovery.zen.ping.unicast.hosts: ["192.168.0.16","192.168.0.19"]   # other cluster members

discovery.zen.minimum_master_nodes: 2    # quorum of master-eligible nodes: (3 / 2) + 1 = 2

6.5.3 Start the service

#cd elasticsearch-5.4.1

#./bin/elasticsearch -d

Common startup errors and their fixes:

Error 1: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

Fix: open /etc/security/limits.conf, add the following two lines, and save:

* soft nofile 65536     # "*" matches any user; since elasticsearch reported the error, you may instead name the user that runs it

* hard nofile 131072

Error 2: memory locking requested for elasticsearch process but memory is not locked

Fix: edit elasticsearch.yml:

bootstrap.memory_lock: false

Error 3: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Fix: raise the kernel setting (add vm.max_map_count=262144 to /etc/sysctl.conf as well so it survives a reboot):

#sysctl -w vm.max_map_count=262144

# sysctl -p

Error 4: os::commit_memory(0x00000001006cd000, 77824, 0) failed; error='Cannot allocate memory' (errno=12)

Fix: the host has too little free memory; add RAM or shrink the Elasticsearch heap:

# vim /newnet.bak/elk/elasticsearch-5.4.1/config/jvm.options

-Xms4g

-Xmx4g

 

#ps -ef | grep elasticsearch

Check the ports:

 #netstat -tunlp

 (Not all processes could be identified, non-owned process info

 will not be shown, you would have to be root to see it all.)

 Active Internet connections (only servers)

 Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   

 tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      -                   

 tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      -                   

 tcp        0      0 ::ffff:10.0.18.148:9300     :::*                        LISTEN      1592/java           

 tcp        0      0 :::22                       :::*                        LISTEN      -                   

 

tcp        0      0 ::1:25                      :::*                        LISTEN      -                   

 tcp        0      0 ::ffff:10.0.18.148:9200     :::*                        LISTEN

 

Two ports come up: 9200 serves HTTP/REST client traffic, while 9300 is the transport port for node-to-node cluster communication, including master election. Adjust the configuration files of the other two Elasticsearch servers to match this one; a sketch for node-2 follows.
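For example, on 192.168.0.16 the same elasticsearch.yml differs only in the node name, bind address, and peer list (a sketch; node-2 is an assumed name):

cluster.name: serverlog
node.name: node-2
path.data: /home/elasticsearch/elasticsearch-5.4.1/data
path.logs: /home/elasticsearch/elasticsearch-5.4.1/logs
network.host: 192.168.0.16
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.0.15","192.168.0.19"]
discovery.zen.minimum_master_nodes: 2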

 

6.5.4 Once all three Elasticsearch nodes are ready, check the cluster health:

# curl -XGET 'http://192.168.0.19:9200/_cluster/health?pretty'

{

  "cluster_name" : "serverlog",

  "status" : "green",

  "timed_out" : false,

  "number_of_nodes" : 3,

  "number_of_data_nodes" : 3,

  "active_primary_shards" : 105,

  "active_shards" : 210,

  "relocating_shards" : 0,

  "initializing_shards" : 0,

  "unassigned_shards" : 0,

  "delayed_unassigned_shards" : 0,

  "number_of_pending_tasks" : 0,

  "number_of_in_flight_fetch" : 0,

  "task_max_waiting_in_queue_millis" : 0,

  "active_shards_percent_as_number" : 100.0

}

 

 

6.5.5 Check the nodes

# curl -XGET 'http://192.168.0.15:9200/_cat/nodes?v'

host         ip           heap.percent ram.percent load node.role master name

192.168.0.15 192.168.0.15           27          28 0.10 d         *      node-1

192.168.0.19 192.168.0.19           20         100 6.03 d         m      node-3

192.168.0.17 192.168.0.17           30          87 0.17 d         m      node-2

Note: * marks the current master node.

6.5.6 Check index and shard information

#curl -XGET 'http://192.168.0.19:9200/_cat/indices?v'

health status index pri rep docs.count docs.deleted store.size pri.store.size

(The listing is empty at this point because no index has been created yet; see section 7.4.)

 

6.5.7 Install the x-pack plugin on the three Elasticsearch nodes:

x-pack is an Elasticsearch extension pack that bundles security, alerting, monitoring, graph, and reporting capabilities into one easy-to-install package. Although x-pack is designed to work seamlessly, individual features can easily be enabled or disabled.

#su - elasticsearch

#cd elasticsearch-5.4.1

#./bin/elasticsearch-plugin install x-pack    # ES 5.x uses elasticsearch-plugin, not the old "plugin" script

Note: if only one host has internet access, you can download the plugin there and copy it to the other two nodes. Do not copy the Elasticsearch x-pack to Kibana, though; the environments differ and Kibana needs its own x-pack package.
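With x-pack installed, the REST API requires authentication. x-pack 5.x ships with a built-in elastic user whose default password is changeme (the same credentials used in the logstash output below), so the earlier health check becomes:

# curl -u elastic:changeme -XGET 'http://192.168.0.19:9200/_cluster/health?pretty'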

 

6.6 Configure logstash_server

6.6.1 On 192.168.0.19:

#cd /data/soft/logstash-5.4.1/conf

# mkdir -p {16..19}.config    # brace expansion creates 16.config ... 19.config, one directory per client

#cd 16.config

#vim logstash_server.conf

input {
    redis {
        port => "6379"
        host => "192.168.0.19"
        db => "7"
        data_type => "list"
        key => "tomcat-user-log"
    }
}

output {
    if "tomcat-user" in [tags] {
        elasticsearch {
            codec => "json"
            hosts => ["192.168.0.15:9200","192.168.0.16:9200","192.168.0.19:9200"]
            user => "elastic"
            password => "changeme"
            manage_template => true
            index => "192.168.0.16-tomcat-user-log-%{+YYYY.MM.dd}"
        }
    }
}

 

The key and db here must match the filebeat configuration, and the tag tested in the output conditional must match the tag filebeat attaches ("tomcat-user" in this example).
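To exercise the pipeline without waiting for real traffic, you can push a hand-crafted event into the Redis list (a sketch; it relies on the redis input's default json codec, and the message text is arbitrary):

#/usr/local/redis/bin/redis-cli -n 7 LPUSH tomcat-user-log '{"message":"pipeline test","tags":["tomcat-user"]}'

Once logstash_server is running (section 6.6.2), the event should show up in the current day's 192.168.0.16-tomcat-user-log-* index.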

Pipeline tuning parameters:

#vim /newnet.bak/elk/logstash-5.4.1/config/logstash.yml

pipeline.batch.size: 10000    # events collected per worker before flushing

pipeline.workers: 8           # match the CPU core count

pipeline.output.workers: 8

pipeline.batch.delay: 10      # ms to wait before flushing an under-filled batch

 

6.6.2 Start logstash_server

Test the configuration first, then start the service and confirm it is running:

#/data/soft/logstash-5.4.1/bin/logstash -f /data/soft/logstash-5.4.1/conf/16.config/logstash_server.conf --config.test_and_exit

#/data/soft/logstash-5.4.1/bin/logstash -f /data/soft/logstash-5.4.1/conf/16.config/logstash_server.conf

# ps -ef | grep logstash
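Once a few events have flowed through, the new index should be visible on the cluster (the credentials are the x-pack defaults used throughout this document):

# curl -u elastic:changeme -XGET 'http://192.168.0.19:9200/_cat/indices?v'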

6.7 Install and configure Kibana

6.7.1 Install Kibana on 192.168.0.19

# tar -xzvf kibana-5.4.1-linux-x86_64.tar.gz -C /data/soft/

6.7.2 Install the x-pack plugin

#cd /data/soft/kibana-5.4.1-linux-x86_64/

#./bin/kibana-plugin install x-pack    # Kibana 5.x uses kibana-plugin

6.7.3 Edit the Kibana configuration

#cd /data/soft/kibana-5.4.1-linux-x86_64/config

#vim kibana.yml

server.port: 5601

server.host: "0.0.0.0"

elasticsearch.username: "elastic"

elasticsearch.password: "changeme"

elasticsearch.url: "http://192.168.0.19:9200"

6.7.4 Start Kibana

# nohup bin/kibana &

Then type exit to leave the shell; the nohup'd process keeps running.

# netstat -anput | grep 5601
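Kibana 5.x also exposes a status endpoint, which is handy for checking the server without a browser (a sketch; with x-pack enabled it may answer 401 until valid credentials are supplied):

# curl -u elastic:changeme http://127.0.0.1:5601/api/status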

 

7. Post-installation verification

7.1 Open the Kibana port in a browser and create an index pattern:

http://115.182.45.39:5601

The index name highlighted in the red box of the screenshot (omitted here) is the index name configured in the logstash server configuration file.

Click the green "Create" button and the index pattern is created. Then open Kibana's "Discover" page; the collected log data is already visible.

7.2 Check the nodes and confirm the cluster view is consistent

The screenshot (omitted) marks node-1 as the master node (starred), and the figures shown refresh continuously. Viewing shard information was covered above.

 

7.3 Check throughput (screenshot omitted)

 

 

7.4 Issues with index and shard information

The first query for shard information during this deployment returned nothing, simply because no index had been created yet. After indices were created, the index information appeared, but cluster information was still missing; the cause appeared to be the same as error 2 above, and after restarting Elasticsearch the query returned the following:

# curl -XGET '192.168.0.19:9200/_cat/indices?v'

health status index                                     pri rep docs.count docs.deleted store.size pri.store.size

green  open   192.168.0.18-tomcat-send-log-2017.12.20     5   1     650297            0    165.4mb         82.6mb

green  open   192.168.0.16-log-2017.12.20                 5   1       3800            0      1.4mb          762kb

green  open   192.168.0.17-tomcat-order-log-2017.12.20    5   1     905074            0    274.9mb        137.4mb

green  open   192.168.0.17-tomcat-order-log-2017.12.21    5   1       7169            0      2.5mb          1.2mb

green  open   192.168.0.19-nginx-log-2017.12.20           5   1     525427            0    231.4mb        115.6mb

green  open   192.168.0.19-nginx-log-2017.12.21           5   1        315            0    421.6kb        207.2kb
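Per-shard placement can be inspected the same way; the _cat/shards endpoint lists every primary and replica together with the node it is assigned to:

# curl -u elastic:changeme -XGET 'http://192.168.0.19:9200/_cat/shards?v'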

 

7.5 Creating multiple index names to store different log types

tomcat-send is unlikely to be the only log we need to collect and analyze; there are also httpd, nginx, mysql, and other logs. If everything is collected under one index it becomes a mess and troubleshooting suffers, whereas creating one index per log type keeps things organized and easy to scan. This is implemented with a configuration file on the logstash server, started as usual, as follows:

input {

    redis {

        port => "6379"

        host => "192.168.0.19"

        db => "5"

        data_type => "list"

        key => "tomcat-send-log"

   }

    redis {

        port => "6379"

        host => "192.168.0.19"

        db => "9"

        data_type => "list"

        key => "nginx-access-log"

   }

    redis {

        port => "6379"

        host => "192.168.0.19"

        db => "7"

        data_type => "list"

        key => "tomcat-user-log"

   }

    redis {

        port => "6379"

        host => "192.168.0.19"

        db => "8"

        data_type => "list"

        key => "tomcat-order-log"

   }

    redis {

        port => "6379"

        host => "192.168.0.19"

        db => "7"

        data_type => "list"

        key => "tomcat-app-log"

   }

    redis {

        port => "6379"

        host => "192.168.0.19"

        db => "7"

        data_type => "list"

        key => "tomcat-orderr-log"

   }

}

 

filter {

         if "nginx-access" in [tags] {

         geoip {

                source => "remote_addr"

                target => "geoip"

                database => "/data/soft/logstash-5.4.1/etc/GeoLite2-City.mmdb"

                add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]

                add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]

                }

                mutate {

                          convert => [ "[geoip][coordinates]", "float"]

                }

        }

}

 

output {

if "nginx-access" in [tags] {

        elasticsearch {

                codec => "json"

                hosts =>  ["192.168.0.15:9200","192.168.0.16:9200","192.168.0.19:9200"]

                user => "elastic"

                password => "changeme"

                manage_template => true

                index => "192.168.0.19-nginx-access-log-%{+YYYY.MM.dd}"

                }

        }

 

else if "tomcat-send" in [tags] {

        elasticsearch {

                codec => "json"

                hosts =>  ["192.168.0.15:9200","192.168.0.16:9200","192.168.0.19:9200"]

                user => "elastic"

                password => "changeme"

                manage_template => true

                index => "192.168.0.16-tomcat-send-log-%{+YYYY.MM.dd}"

                }

        }

else if "tomcat-user" in [tags] {

        elasticsearch {

                codec => "json"

                hosts =>  ["192.168.0.15:9200","192.168.0.16:9200","192.168.0.19:9200"]

                user => "elastic"

                password => "changeme"

                manage_template => true

                index => "192.168.0.17-tomcat-user-log-%{+YYYY.MM.dd}"

                }

}
}
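As with the single-stream configuration, the file can be syntax-checked before starting logstash on it (substitute the path you saved this configuration under):

#/data/soft/logstash-5.4.1/bin/logstash -f /path/to/multi_index.conf --config.test_and_exit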

 

Then configure the filebeat conf file on the corresponding log server (called the client). Note that filebeat accepts only a single output.redis section per configuration file, so the per-log routing below is expressed with the redis output's keys option (available in filebeat 5.x; a sketch worth checking against the filebeat 5.4 reference), which selects a Redis list per tag:

filebeat.prospectors:

- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /data/soft/tomcat-user/logs/catalina.out
  tags: ["tomcat-user"]

- input_type: log
  paths:
    - /data/soft/tomcat-app/logs/catalina.out
  tags: ["tomcat-app"]

- input_type: log
  paths:
    - /data/soft/tomcat-order/logs/catalina.out
  tags: ["tomcat-orderr"]

- input_type: log
  paths:
    - /data/soft/tomcat-sent/logs/catalina.out
  tags: ["tomcat-send"]

- input_type: log
  paths:
    - /data/soft/tomcat-save/logs/catalina.out
  tags: ["tomcat-save"]

 

 

output.redis:
  # Array of hosts to connect to.
  hosts: ["192.168.0.19:6379"]
  db: 7          # every list lives in db 7; the matching redis inputs on the logstash server must read the same db
  timeout: 5
  key: "filebeat-log"        # fallback list for events that match none of the rules below
  keys:
    - key: "tomcat-user-log"
      when.contains:
        tags: "tomcat-user"
    - key: "tomcat-app-log"
      when.contains:
        tags: "tomcat-app"
    - key: "tomcat-orderr-log"
      when.contains:
        tags: "tomcat-orderr"
    - key: "tomcat-send-log"
      when.contains:
        tags: "tomcat-send"
    - key: "tomcat-save-log"
      when.contains:
        tags: "tomcat-save"

Note: the protocol/username/password options and the template.* settings seen in the default filebeat.yml apply only to output.elasticsearch and are ignored by the redis output.
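After editing the client config, a sanity check along these lines helps (a sketch; -configtest is the filebeat 5.x flag, and the redis-cli loop is run on 192.168.0.19 to watch each list fill up):

# /newnet.bak/elk/filebeat-5.4.1-linux-x86_64/filebeat -configtest -c filebeat.yml

# for k in tomcat-user-log tomcat-app-log tomcat-orderr-log tomcat-send-log tomcat-save-log; do /usr/local/redis/bin/redis-cli -n 7 llen $k; done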

 

 

8. References

Two technical blog posts were the main references:

   http://blog.51cto.com/467754239/1700828

   http://blog.51cto.com/linuxg/1843114
