ELK Installation

1. Base software installation

yum -y localinstall elasticsearch-2.1.1.rpm

 chkconfig --add elasticsearch

 rpm -ivh jdk-8u111-linux-x64.rpm  (Elasticsearch requires JDK 1.8 or later)

[root@rabbitmq-node2 ELK]# java -version

java version "1.8.0_111"

Java(TM) SE Runtime Environment (build 1.8.0_111-b14)

Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)

Configure the new environment variables:

[root@rabbitmq-node2 profile.d]# cat /etc/profile.d/java.sh

JAVA_HOME=/usr/java/jdk1.8.0_111

JRE_HOME=/usr/java/jdk1.8.0_111/jre

CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib

PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

export JAVA_HOME JRE_HOME CLASS_PATH PATH
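To confirm the fragment exports what we expect without logging out, it can be sourced against a scratch copy (a sketch; the jdk1.8.0_111 path is the install location used above, and /tmp/java.sh is just a scratch name):

```shell
# Write the same fragment to a scratch file and source it in this shell.
cat > /tmp/java.sh <<'EOF'
JAVA_HOME=/usr/java/jdk1.8.0_111
JRE_HOME=/usr/java/jdk1.8.0_111/jre
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JRE_HOME PATH
EOF
. /tmp/java.sh
echo "$JAVA_HOME"   # prints /usr/java/jdk1.8.0_111
```

On the real host, `source /etc/profile.d/java.sh` (or a fresh login) has the same effect.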

Edit the /etc/elasticsearch/elasticsearch.yml configuration file:

[root@rabbitmq-node2 elasticsearch]# egrep -v "^$|#" elasticsearch.yml

cluster.name: gaoyang   # the cluster name must be identical on every node

node.name: node-1

path.data: /data/es-data   # create this data directory and chown it to the elasticsearch user, since the yum-installed service runs as that user by default

path.logs: /var/log/elasticsearch

bootstrap.mlockall: true    # lock the process memory

network.host: 0.0.0.0

http.port: 9200

[root@rabbitmq-node2 elasticsearch]# mkdir -p /data/es-data

chown -R elasticsearch.elasticsearch /data/es-data/

[root@rabbitmq-node2 elasticsearch]# cat /etc/security/limits.conf |grep elasticsearch

elasticsearch soft memlock unlimited

elasticsearch hard memlock unlimited

The /etc/security/limits.conf entries above are required so that the elasticsearch user is allowed to lock memory.
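A hedged sketch of making that change idempotently (LIMITS points at a scratch copy here; on the real host it would be /etc/security/limits.conf, edited as root):

```shell
# Append the memlock limits only if they are not already present.
LIMITS=/tmp/limits.conf
touch "$LIMITS"
grep -q 'elasticsearch soft memlock' "$LIMITS" || cat >> "$LIMITS" <<'EOF'
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
EOF
grep -c memlock "$LIMITS"   # prints 2
```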

service elasticsearch status

service elasticsearch start   # start elasticsearch

Then check the port and the process:

[root@rabbitmq-node2 elasticsearch]# ss -tnulp|grep 9200

tcp    LISTEN     0      50                    :::9200                 :::*      users:(("java",55424,140))

[root@rabbitmq-node2 elasticsearch]# ps aux |grep elasticsearch

497       55424  5.5  3.5 4682452 583948 ?      SLl  10:58   0:07 /usr/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.1.1.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -p /var/run/elasticsearch/elasticsearch.pid -d -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.conf=/etc/elasticsearch

root      55516  0.0  0.0 105488   956 pts/1    S+   11:00   0:00 grep elasticsearch

Access it via a web browser; if a JSON string is returned, Elasticsearch was installed successfully.

 

After installation you can also check the log to see whether Elasticsearch started cleanly:

[root@rabbitmq-node1 profile.d]# tail -f /var/log/elasticsearch/xx.log

[2017-11-08 11:11:56,935][INFO ][node                     ] [node-2] initialized

[2017-11-08 11:11:56,936][INFO ][node                     ] [node-2] starting ...

[2017-11-08 11:11:57,013][WARN ][common.network           ] [node-2] publish address: {0.0.0.0} is a wildcard address, falling back to first non-loopback: {10.83.22.86}

[2017-11-08 11:11:57,014][INFO ][transport                ] [node-2] publish_address {10.83.22.86:9300}, bound_addresses {[::]:9300}

[2017-11-08 11:11:57,022][INFO ][discovery                ] [node-2] gaoyang/1--F-NyXSHi6jMxdnQT-7A

[2017-11-08 11:12:00,061][INFO ][cluster.service          ] [node-2] new_master {node-2}{1--F-NyXSHi6jMxdnQT-7A}{10.83.22.86}{10.83.22.86:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)

[2017-11-08 11:12:00,087][WARN ][common.network           ] [node-2] publish address: {0.0.0.0} is a wildcard address, falling back to first non-loopback: {10.83.22.86}

[2017-11-08 11:12:00,087][INFO ][http                     ] [node-2] publish_address {10.83.22.86:9200}, bound_addresses {[::]:9200}

[2017-11-08 11:12:00,087][INFO ][node                     ] [node-2] started

[2017-11-08 11:12:00,121][INFO ][gateway                  ] [node-2] recovered [0] indices into cluster

Access the other node, node-2, via the web.

 

[root@rabbitmq-node2 elasticsearch]# curl -i -XGET 'http://10.83.22.86:9200/_count?pretty' -d ' {

> "query": {

>     "match_all": {}

> }

> }'

HTTP/1.1 200 OK

Content-Type: application/json; charset=UTF-8

Content-Length: 95

{

  "count" : 0,

  "_shards" : {

    "total" : 0,

    "successful" : 0,

    "failed" : 0

  }

}

[root@rabbitmq-node2 elasticsearch]#

pretty: tells Elasticsearch to return pretty-printed JSON

query: defines the query

match_all: a simple query that matches all documents in the specified indices
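One way to avoid shell-quoting mistakes is to keep the query body in a file and sanity-check it locally before sending it (a sketch; assumes python3 is available, and /tmp/query.json is just a scratch name):

```shell
# Save the match_all body and verify it parses as JSON before sending it.
cat > /tmp/query.json <<'EOF'
{ "query": { "match_all": {} } }
EOF
python3 -m json.tool < /tmp/query.json
# then: curl -XGET 'http://10.83.22.86:9200/_count?pretty' -d @/tmp/query.json
```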

http://blog.csdn.net/stark_summer/article/details/48830493

Install the elasticsearch-head plugin:

/usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head

First create an index with 5 shards and 1 replica.

You can then POST data into this index,

and GET the data back by the ID you just POSTed.

After the index is created, yellow means the primary shards are fine but the replicas are not;

a healthy two-node cluster should show green on both nodes.

To form a cluster from two servers, besides using the same cluster name as described above, you also need to configure unicast discovery. Multicast is the default; if multicast does not work, configure unicast:

[root@rabbitmq-node2 elasticsearch]# cat /etc/elasticsearch/elasticsearch.yml |grep discovery

# Pass an initial list of hosts to perform discovery when new node is started:

discovery.zen.ping.multicast.enabled: false

discovery.zen.ping.unicast.hosts: ["10.83.22.85", "10.83.22.86"]

# discovery.zen.minimum_master_nodes: 3

# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>

[root@rabbitmq-node2 elasticsearch]#

Put the cluster members' IPs in the unicast host list, and open the inter-node communication port 9300 between the two machines in the firewall; note that 9200 is only the HTTP access port.

Install the Elasticsearch monitoring plugin:

/usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf

The argument after install is the GitHub repository; by default the plugin is downloaded from GitHub.

Install Logstash:

wget ftp://bqjrftp:Pass123$%^@10.83.20.27:9020/software/ELK/logstash-2.1.1-1.noarch.rpm

yum -y localinstall logstash-2.1.1-1.noarch.rpm

/opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'   # start logstash

Type hello and the console will echo the event.

Ctrl+C stops logstash.

Then run the command again:

/opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug} }'

codec: output to the console, which is convenient for interactive testing

# rubydebug is the usual codec for console output and testing

# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch {hosts => ["10.83.22.85:9200"] } stdout{ codec => rubydebug } }'

This outputs to Elasticsearch as well as to the console,

and the same data can be seen in Elasticsearch.

You can also write a config file and point logstash at it on startup.

[root@SZ33SITSIM00AP0003 software]# cat /etc/profile.d/logstash.sh   # put the logstash binary on PATH so the absolute path is no longer needed

LOGSTASH_HOME=/opt/logstash/bin

export PATH=$LOGSTASH_HOME:$PATH

[root@SZ33SITSIM00AP0003 software]# source /etc/profile

[root@SZ33SITSIM00AP0003 software]# logstash -f /confs/logstash-simple.conf

[root@SZ33SITSIM00AP0003 ~]# cat /confs/logstash-simple.conf

input {

    stdin { } 

}

output {

    elasticsearch { hosts => ["10.83.22.85:9200"] }

    stdout { codec => rubydebug }

}

[root@SZ33SITSIM00AP0003 ~]#

Now configure logstash to ship the system log /var/log/messages and the nginx access log access.log into Elasticsearch for querying:

input {

    file {

       path => "/var/log/messages"

       type => "syslog"

       start_position => "beginning" # read the file from the beginning

   }

    file {

       path => "/usr/local/nginx/logs/access.log"

       type => "nginx"

       codec => "json"

       start_position => "beginning"

    }

}

output {

    if [type] == "syslog" {   # create a different index for each file type

    elasticsearch {

          hosts => ["10.83.22.85:9200"]

          index => "syslog-%{+YYYY-MM-dd}"

          workers => 5   # number of output worker threads

        } 

   } 

    if [type] == "nginx" {

    elasticsearch {

          hosts => ["10.83.22.85:9200"]

          index => "nginx-%{+YYYY-MM-dd}"

          workers => 5

       }

  } 

}

logstash -f /confs/logstash-simple.conf   # start logstash

Install Kibana:

wget ftp://bqjrftp:Pass123$%^@10.83.20.27:9020/software/ELK/kibana-4.3.1-linux-x64.tar.gz

tar xzvf kibana-4.3.1-linux-x64.tar.gz

mv kibana-4.3.1-linux-x64 /usr/local/

cd /usr/local && ln -sv kibana-4.3.1-linux-x64/ kibana

vim /usr/local/kibana/config/kibana.yml

server.port: 5601

server.host: "0.0.0.0"

server.basePath: ""

elasticsearch.url: "http://10.83.22.85:9200"

kibana.index: ".kibana"

screen -S kibana

/usr/local/kibana/bin/kibana &

Ctrl+a, then d   (detach)

screen -ls

This leaves Kibana running in the background.

ab -n 1000  -c 20 http://10.83.36.35:80/   # simulate browser traffic

Note that the URL after the ab command must be in the form http://ip:port/path.

The ab command is not installed by default; to install it:

yum install yum-utils

cd /opt

mkdir abtmp

cd abtmp

yumdownloader httpd-tools*

rpm2cpio httpd-*.rpm | cpio -idmv

Modify the nginx configuration (mainly the log format):

    log_format access_log_json '{"user_ip":"$http_x_real_ip","lan_ip":"$remote_addr","log_time":"$time_iso8601","user_req":"$request","http_code":"$status","body_bytes_sents":"$body_bytes_sent","req_time":"$request_time","user_ua":"$http_user_agent"}';

Reference this format in the server block:

access_log  logs/host.access.log  access_log_json;
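To see what that log_format produces, here is a made-up sample line in the same shape, validated locally (a sketch; assumes python3 is available, and all field values are invented):

```shell
# A fabricated access-log line matching the access_log_json format above.
cat > /tmp/sample_access.log <<'EOF'
{"user_ip":"-","lan_ip":"10.83.36.1","log_time":"2017-11-08T11:12:00+08:00","user_req":"GET / HTTP/1.0","http_code":"200","body_bytes_sents":"612","req_time":"0.001","user_ua":"ApacheBench/2.3"}
EOF
python3 -m json.tool < /tmp/sample_access.log   # valid JSON, so a json codec can parse it
```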

Then set up the logstash config file:

[root@SZ33SITSIM00AP0003 ~]# cat /confs/logstash-nginx.conf

input{

    file {

         path => "/usr/local/nginx/logs/host.access.log"

         codec => "json"

         type => "nginx-json"

         start_position => "beginning"

         }

}

filter{

}

output{

     elasticsearch {

            hosts => ["10.83.22.85:9200"]

            index => "nginx-json-%{+YYYY-MM-dd}"

                   }

}  

Then simulate traffic from another machine:

ab -n 1000  -c 20 http://10.83.36.35:80/

The end result in Elasticsearch: each JSON field of the access log is displayed as a separate field.

# Set ES_HEAP_SIZE to 50% of available RAM, but no more than 31g

ES_HEAP_SIZE=4g

Configure the JVM heap:

vim /etc/sysconfig/elasticsearch

ES_HEAP_SIZE=4g

This machine has 8 GB of available RAM.
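The sizing rule in the comment above (50% of available RAM, but no more than 31g) can be sketched as a small script (the 1g floor is an added assumption for very small hosts):

```shell
# Compute ES_HEAP_SIZE as half of physical RAM, capped at 31g, floored at 1g.
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
heap_g=$(( mem_kb / 1024 / 1024 / 2 ))
if [ "$heap_g" -gt 31 ]; then heap_g=31; fi
if [ "$heap_g" -lt 1 ]; then heap_g=1; fi
echo "ES_HEAP_SIZE=${heap_g}g"
```

On the 8 GB host above this lands in the same ballpark as the 4g the document uses.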

Filebeat installation and configuration

wget ftp://bqjrftp:Pass123$%^@10.83.20.27:9020/software/ELK/filebeat-5.0.1-x86_64.rpm

rpm -ivh filebeat-5.0.1-x86_64.rpm

Configuration file: /etc/filebeat/filebeat.yml

filebeat.prospectors:

- input_type: log

  paths:

    - /home/weblogic/scm_server/logs/logger.log  # path to the log file

  encoding: plain          # log file encoding

  document_type: scm_server_msg    # event type; referenced through the type field on the logstash server

- input_type: log

  paths:

    - /home/weblogic/scm_server/logs/logger_error.log

  encoding: plain

  document_type: scm_server_error

  tail_files: false              # read from the head of the file; the default true reads from the end. If this option seems to have no effect, delete the file that records read offsets: rm -rf /var/lib/filebeat/registry

  multiline:                     # merge multi-line Tomcat error output into one event

      pattern: '^\#\#\#\s'       # regex: each scm_server error entry starts with ### followed by whitespace (\s)

      negate: true               # lines NOT matching the pattern are treated as continuations

      match: after               # and appended to the preceding matching line

      timeout: 10s               # flush the event if no new match arrives within 10s (default 5s)

- input_type: log

  paths:

    - /home/weblogic/bla_server/logs/logger_error.log

  encoding: plain

  document_type: bla_server_error

  tail_files: false

  multiline:

      pattern: '^\['    # regex: each bla_server error entry starts with [

      negate: true

      match: after

      timeout: 10s

- input_type: log

  paths:

    - /home/weblogic/bla_server/logs/logger.log

  encoding: plain

  document_type: bla_server_msg

processors:

- drop_fields:

            fields: ["input_type", "beat", "offset", "source","tags","@timestamp"]

fields:

     ip_address: 172.16.8.11   # custom fields referenced as variables in logstash

     host: 172.16.8.11

fields_under_root: true

output.logstash:                 # send the logs filebeat harvests to logstash

   hosts: ["10.83.22.118:5044"]
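As a rough offline check of the scm_server multiline pattern above, grep can stand in for filebeat's matcher (the sample log lines are invented):

```shell
# Lines matching '^###\s' start a new event; with negate: true and
# match: after, every other line is appended to the preceding event.
cat > /tmp/scm_error.log <<'EOF'
### ERROR 2017-11-08 11:12:00 something failed
java.lang.NullPointerException
    at com.example.Foo.bar(Foo.java:42)
### ERROR 2017-11-08 11:13:00 next request failed
EOF
grep -cE '^###[[:space:]]' /tmp/scm_error.log   # prints 2, i.e. two merged events
```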

 

 

 

Logstash configuration (logstash is driven by a custom config file):

 

[root@SZ3FUATIMS00AP0001 ~]# cat /confs/logstash/conf.d/filebeat.conf

input {         # the port logstash listens on; the filebeat output section above points here

         beats {

                port => 5044

     }

}

output {

     if [type] == "scm_server_msg" {     # route on the document_type set in filebeat; these if branches send different log files to different Elasticsearch indices

       elasticsearch {

                   hosts => [ "10.83.22.118:9200" ]    # output to Elasticsearch, port 9200

                   index => "scm_server_msg-%{+YYYY.MM.dd}"   # index name in Elasticsearch

         }

  }

     if [type] == "scm_server_error" { 

       elasticsearch {

                   hosts => [ "10.83.22.118:9200" ]

                   index => "scm_server_error-%{+YYYY.MM.dd}"

         }

    }

     

     if [type] == "bla_server_error" {   

       elasticsearch {

                   hosts => [ "10.83.22.118:9200" ]

                   index => "bla_server_error-%{+YYYY.MM.dd}"

         }

    }

     if [type] == "bla_server_msg" {    

       elasticsearch {

                   hosts => [ "10.83.22.118:9200" ]

                   index => "bla_server_msg-%{+YYYY.MM.dd}"

         }

   }

#This block implements email alerting: when type matches one of the error types and the message field contains ERROR, the email output plugin fires

     if [type] =~ /bla_server_error|scm_server_error/ and [message] =~ /ERROR/ {

        email {

            port           =>    25

            address        =>    "smtp.163.com"

            username       =>    "xxxx"

            password       =>    "xxx"

            authentication =>    "plain"

            from           =>    "18688791025@163.com"

            codec          =>    "plain"  # message encoding

            contenttype    =>    "text/html; charset=UTF-8"

            subject        =>    "%{type}:應用錯誤日誌!%{host}"  # email subject, built from the type and host fields

            to             =>    "xx.xx@xx.x"

            cc             =>    "xxx@xx"   # carbon-copy recipient

            via            =>    "smtp"

            body           =>    "%{message}"  # email body is the message field

       }

   }

}
