ELK Log Analysis Platform Installation

ELK Installation

Preface

What is ELK?

  Simply put, ELK is a stack composed of three open-source projects: Elasticsearch, Logstash, and Kibana. Each of the three serves a different purpose, and together they are also known as the ELK stack. The official site is elastic.co. The main advantages of the ELK stack are:

 

Flexible processing: elasticsearch provides real-time full-text indexing with powerful search capabilities

Relatively simple configuration: elasticsearch exposes everything through a JSON API, logstash uses modular configuration, and kibana's configuration file is simpler still

Efficient retrieval: thanks to a sound design, queries are real-time yet can achieve second-level responses even over tens of billions of documents

Linear cluster scaling: both elasticsearch and logstash can be scaled out linearly

Polished front end: kibana's UI is attractive and easy to use

 

 

 

What is Elasticsearch:

A highly scalable open-source full-text search and analytics engine. It provides real-time full-text search over data, supports distribution for high availability, offers an API, and can handle large volumes of log data such as Nginx, Tomcat, and system logs.

 

 

 

 

 

What is Logstash

Collects and forwards logs through plugins; supports log filtering and can parse both plain-text logs and custom JSON-formatted logs.

 

 

 

What is Kibana:

Mainly queries data from elasticsearch through its API and renders it as front-end visualizations.

 

 

 

Part 1: Deploying elasticsearch:

1.1: Environment initialization:

Minimal install of CentOS 7.2 x86_64 virtual machines with 2 vCPUs, 4GB or more RAM, and a 50GB OS disk. Hostnames follow the pattern linux-hostX.exmaple.com; host1 and host2 are the elasticsearch servers. To get realistic results, each of those two gets an extra dedicated 50GB data disk, formatted and mounted at /elk.

 

 

 

1.1.1: Hostname and disk mount:

[root@localhost ~]# hostnamectl  set-hostname linux-hostx.exmaple.com && reboot  #set each server's own hostname and reboot

[root@localhost ~]# hostnamectl  set-hostname linux-host2.exmaple.com && reboot

[root@linux-host1 ~]# mkdir  /elk

[root@linux-host1 ~]# mount /dev/sdb  /elk/

[root@linux-host1 ~]# echo  "/dev/sdb /elk/  xfs  defaults    0 0" >> /etc/fstab

hostX .....  (repeat on the remaining servers)

 

 

 

1.1.2: Firewall and selinux:

Disable the firewall and selinux on all servers, including the web, redis, and logstash servers. This avoids all sorts of hard-to-diagnose problems caused by firewall policies or selinux permissions. Only the commands for host1 are shown below, but run them on every server.

[root@linux-host1 ~]# systemctl  disable  firewalld

[root@linux-host1 ~]# systemctl  disable  NetworkManager

[root@linux-host1 ~]# sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

[root@linux-host1 ~]# echo "* soft nofile 65536" >> /etc/security/limits.conf

[root@linux-host1 ~]# echo "* hard nofile 65536" >> /etc/security/limits.conf

hostX ......  (repeat on the remaining servers)

 

 

 

1.1.3: Configure local name resolution on each server:

 

[root@linux-host1 ~]# vim /etc/hosts

 

192.168.56.11 linux-host1.exmaple.com

 

192.168.56.12 linux-host2.exmaple.com

 

192.168.56.13 linux-host3.exmaple.com

 

192.168.56.14 linux-host4.exmaple.com

 

192.168.56.15 linux-host5.exmaple.com

 

192.168.56.16 linux-host6.exmaple.com

 

 

 

1.1.4: Configure the epel repo, install basic tools, and sync the time:

 

[root@linux-host1 ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

 

[root@linux-host1 ~]# yum install -y net-tools vim lrzsz tree screen lsof tcpdump wget ntpdate

 

[root@linux-host1 ~]# cp /usr/share/zoneinfo/Asia/Shanghai  /etc/localtime

 

[root@linux-host1 ~]# echo "*/5 * * * *  ntpdate time1.aliyun.com &> /dev/null && hwclock -w" >> /var/spool/cron/root

 

[root@linux-host1 ~]# systemctl  restart crond

 

[root@linux-host1 ~]# reboot  #reboot to confirm the settings take effect; if all is well, snapshot the VM for easy rollback later

 

 

 

1.2: Install elasticsearch on host1 and host2:

1.2.1: Prepare the java environment on both servers:

  Because the elasticsearch service requires java, both elasticsearch servers need a java runtime. It can be installed in any of the following ways:

 

Option 1: install openjdk directly with yum

 

[root@linux-host1 ~]# yum install  java-1.8.0*

 

Option 2: local install of the rpm package downloaded from the oracle website:

 

[root@linux-host1 ~]# yum  localinstall jdk-8u92-linux-x64.rpm

 

Option 3: download the binary tarball and define the environment variables in profile:

Download: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

 

 

 

[root@linux-host1 ~]# tar xvf jdk-8u121-linux-x64.tar.gz  -C /usr/local/

 

[root@linux-host1 ~]# ln -sv /usr/local/jdk1.8.0_121 /usr/local/jdk

 

[root@linux-host1 ~]# vim /etc/profile

 

export JAVA_HOME=/usr/local/jdk

 

export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

 

export PATH=$PATH:$JAVA_HOME/bin

 

[root@linux-host1 ~]# source  /etc/profile

 

[root@linux-host1 ~]# java -version

 

java version "1.8.0_121"  #confirm the expected java version is reported

 

Java(TM) SE Runtime Environment (build 1.8.0_121-b13)

 

Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)

 

 

 

1.3: Download elasticsearch from the official site and install it:

Download: https://www.elastic.co/downloads/elasticsearch; the latest version at the time of writing is 5.3.0

1.3.1: Install elasticsearch on both servers:

[root@linux-host1 ~]# yum -y  localinstall elasticsearch-5.3.0.rpm

 

 

 

 

 

1.3.2: Edit the service configuration file on each elasticsearch server:

[root@linux-host1 ~]# grep "^[a-Z]"   /etc/elasticsearch/elasticsearch.yml

cluster.name: ELK-Cluster #cluster name; nodes with the same cluster name belong to the same cluster

node.name: elk-node1 #this node's name within the cluster

path.data: /elk/data  #data directory

path.logs: /elk/logs   #log directory

bootstrap.memory_lock: true #lock enough memory at startup to keep data out of swap

network.host: 0.0.0.0 #listen IP

http.port: 9200

discovery.zen.ping.unicast.hosts: ["192.168.56.11", "192.168.56.12"]

 

1.3.3: Adjust the memory limits and sync the configuration file:

[root@linux-host1 ~]# vim /usr/lib/systemd/system/elasticsearch.service #adjust the memory limit

LimitMEMLOCK=infinity  #uncomment this line

[root@linux-host1 ~]# vim /etc/elasticsearch/jvm.options

-Xms2g

-Xmx2g #minimum and maximum heap size; why set them to the same value?

https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html

#the official documentation recommends staying below 30G

#scp the configuration file above to host2 and change its node name

[root@linux-host1~]#scp /etc/elasticsearch/elasticsearch.yml  192.168.56.12:/etc/elasticsearch/

 

[root@linux-host2 ~]# grep "^[a-Z]" /etc/elasticsearch/elasticsearch.yml

 

cluster.name: ELK-Cluster

 

node.name: elk-node2  #must differ from host1

path.data: /elk/data

path.logs: /elk/logs

 

bootstrap.memory_lock: true

 

network.host: 0.0.0.0

 

http.port: 9200

 

discovery.zen.ping.unicast.hosts: ["192.168.56.11", "192.168.56.12"]

 

1.3.4: Directory permissions:

Create the data and log directories on each server and change their ownership to elasticsearch:

 

[root@linux-host1 ~]# mkdir /elk/{data,logs}

 

[root@linux-host1 ~]# ll /elk/

 

total 0

 

drwxr-xr-x 2 root root 6 Apr 18 18:44 data

 

drwxr-xr-x 2 root root 6 Apr 18 18:44 logs

 

[root@linux-host1 ~]# chown  elasticsearch.elasticsearch /elk/ -R

 

[root@linux-host1 ~]# ll /elk/

 

total 0

 

drwxr-xr-x 2 elasticsearch elasticsearch 6 Apr 18 18:44 data

 

drwxr-xr-x 2 elasticsearch elasticsearch 6 Apr 18 18:44 logs

 

1.3.5: Start the elasticsearch service and verify:

[root@linux-host1 ~]# systemctl  restart elasticsearch

[root@linux-host1 ~]# tail -f /elk/logs/ELK-Cluster.log

 

1.3.6: Verify the port is listening:

 

 

 

1.3.7: Access the elasticsearch service port from a browser:

 

 

 

1.4: Install the head plugin for elasticsearch:

Plugins add extra features. The vendor provides some plugins, though most are paid for; community developers provide others. They can add cluster status monitoring, management, and configuration features to elasticsearch.

1.4.1: Install the head plugin for 5.x:

From elasticsearch 5.x onward the head plugin can no longer be installed directly; it has to run as a separate service. Git repo: https://github.com/mobz/elasticsearch-head

 

[root@linux-host1 ~]# yum install -y npm

 

# NPM (Node Package Manager) is the package manager and distribution tool that ships with NodeJS; it lets JavaScript developers download, install, publish, and manage packages.

 

[root@linux-host1 ~]# cd /usr/local/src/

 

[root@linux-host1 src]#git clone git://github.com/mobz/elasticsearch-head.git

 

[root@linux-host1 src]# cd elasticsearch-head/

 

[root@linux-host1 elasticsearch-head]# npm install grunt -save

 

[root@linux-host2 elasticsearch-head]# ll node_modules/grunt  #confirm the files were generated

[root@linux-host1 elasticsearch-head]# npm install #run the installation

[root@linux-host1 elasticsearch-head]# npm run start  &  #start the service in the background

 

1.4.1.1: Modify the elasticsearch service configuration file:

Enable cross-origin access support, then restart the elasticsearch service:

 

[root@linux-host1 ~]# vim /etc/elasticsearch/elasticsearch.yml

 

http.cors.enabled: true #append at the bottom of the file

 

http.cors.allow-origin: "*"

 

[root@linux-host1 ~]# systemctl  restart elasticsearch

 

1.4.1.2: Starting the head plugin with docker:

 

[root@linux-host1 ~]# yum install docker -y

 

[root@linux-host1 ~]# systemctl  start docker && systemctl  enable docker

 

[root@linux-host1 ~]# docker run -d  -p 9100:9100 mobz/elasticsearch-head:5

 

 

 

Then reconnect:

 

 

 

1.4.1.3: Test submitting data:

 

 

 

1.4.1.4: Verify the index exists:

 

 

 

1.4.1.5: View the data:

 

 

 

1.4.1.6: Differences between Master and Slave:

Master responsibilities:

Collects node status information and cluster state, creates and deletes indices, manages shard allocation, shuts down nodes, and so on

Slave responsibilities:

Replicates data and waits for the chance to become Master

 

1.4.1.7: Import a local docker image:

[root@linux-host2 ~]# docker save docker.io/mobz/elasticsearch-head > /opt/elasticsearch-head-docker.tar.gz #export the image

[root@linux-host1 src]# docker load < /opt/elasticsearch-head-docker.tar.gz #import the image

[root@linux-host1 src]# docker images  #verify

 

REPOSITORY                          TAG                 IMAGE ID            CREATED             SIZE

 

docker.io/mobz/elasticsearch-head   5                   b19a5c98e43b        4 months ago        823.9 MB

 

[root@linux-host1 src]# docker run -d  -p 9100:9100 --name elastic docker.io/mobz/elasticsearch-head:5  #start a container from the local docker image

 

 

 

1.4.2: The kopf plugin for elasticsearch:

1.4.2.1: kopf:

Git repo: https://github.com/lmenezes/elasticsearch-kopf. It does not yet support elasticsearch 5.x, but it can be installed on elasticsearch 1.x or 2.x.

 

1.5: Monitoring the elasticsearch cluster state:

1.5.1: Get the cluster state with a shell command:

#curl -sXGET  http://192.168.56.11:9200/_cluster/health?pretty=true

 

 

 

#The call returns JSON, so the information can be analyzed with python. For example, inspect the status field: green means the cluster is healthy, yellow means replica shards are missing, and red means primary shards are missing

 

1.5.2: python script:

 

[root@linux-host1 ~]# cat  els-cluster-monitor.py

 

#!/usr/bin/env python

 

#coding:utf-8

 

#Author Zhang Jie

 

 

 

import smtplib

 

from email.mime.text import MIMEText

 

from email.utils import formataddr

 

import subprocess

 

body = ""

 

false="false"

 

obj = subprocess.Popen(("curl -sXGET http://192.168.56.11:9200/_cluster/health?pretty=true"),shell=True, stdout=subprocess.PIPE)

 

data =  obj.stdout.read()

 

data1 = eval(data)

 

status = data1.get("status")

 

if status == "green":

 

    print "50"

 

else:

 

    print "100"

 

1.5.3: Script output:

 

[root@linux-host1 ~]# python els-cluster-monitor.py

50

Part 2: Deploying logstash:

2.1: Logstash environment prep and installation:

Logstash is an open-source data collection engine that scales horizontally. It has the most plugins of any ELK component; it can ingest data from many different sources and ship it to one or more destinations.

 

2.1.1: Environment prep:

Disable the firewall and selinux, and install the java environment

 

[root@linux-host3 ~]# systemctl  stop firewalld

 

[root@linux-host3 ~]# systemctl  disable  firewalld

 

[root@linux-host3 ~]# sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

 

[root@linux-host3 ~]# yum install  jdk-8u121-linux-x64.rpm

 

[root@linux-host3 ~]# java -version

 

java version "1.8.0_121"

 

Java(TM) SE Runtime Environment (build 1.8.0_121-b13)

 

Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)

 

[root@linux-host3 ~]# reboot

 

2.1.2: Install logstash:

[root@linux-host3 ~]# yum install logstash-5.3.0.rpm

[root@linux-host3 ~]# chown  logstash.logstash /usr/share/logstash/data/queue -R #change ownership to the logstash user and group, otherwise startup logs errors

 

2.2: Testing logstash:

2.2.1: Test standard input and output:

[root@linux-host3 ~]# /usr/share/logstash/bin/logstash   -e 'input {  stdin{} } output { stdout{  codec => rubydebug }}'  #standard input and output

 

hello

 

{

 

    "@timestamp" => 2017-04-20T02:30:01.600Z, #當前事件的發生時間,

 

      "@version" => "1", #事件版本號,一個事件就是一個ruby對象

 

          "host" => "linux-host3.exmaple.com", #標記事件發生在哪裏

 

       "message" => "hello"  #消息的具體內容

 

}

 

2.2.2: Test output to a file:

 

[root@linux-host3 ~]# /usr/share/logstash/bin/logstash   -e 'input {  stdin{} } output { file { path => "/tmp/log-%{+YYYY.MM.dd}messages.gz"}}'

 

hello

 

11:01:15.229 [[main]>worker1] INFO  logstash.outputs.file - Opening file {:path=>"/tmp/log-2017-04-20messages.gz"}

 

[root@linux-host3 ~]# tail /tmp/log-2017-04-20messages.gz #open the file to verify

 

 

 

2.2.3: Test output to elasticsearch:

 

[root@linux-host3 ~]# /usr/share/logstash/bin/logstash   -e 'input {  stdin{} } output { elasticsearch {hosts => ["192.168.56.11:9200"] index => "mytest-%{+YYYY.MM.dd}" }}'

 

 

 

2.2.4: Verify the elasticsearch server received the data:

 

[root@linux-host1 ~]# ll /elk/data/nodes/0/indices/

 

total 0

 

drwxr-xr-x 8 elasticsearch elasticsearch 59 Apr 19 19:08 JbnPSBGxQ_WbxT8jF5-TLw

 

drwxr-xr-x 8 elasticsearch elasticsearch 59 Apr 19 20:18 kZk1UbsjTliYfooevuQVdQ

 

drwxr-xr-x 4 elasticsearch elasticsearch 27 Apr 19 19:24 m6EiWqngS0C1bspg8JtmBg

 

drwxr-xr-x 8 elasticsearch elasticsearch 59 Apr 20 08:49 YhtJ1dEXSOa0YEKhe6HW8w

 

 

 

 

 

 

 

 

 

Note: log collection must not carry a type parameter

Part 3: Deploying kibana and collecting logs:

Kibana is an open-source project that queries the elasticsearch API and presents search results graphically.

3.1: Install and configure kibana:

It can be installed from the rpm package or from the binary tarball

 

3.1.1: rpm method:

 

[root@linux-host1 ~]# yum localinstall kibana-5.3.0-x86_64.rpm

 

[root@linux-host1 ~]# grep -n "^[a-Z]" /etc/kibana/kibana.yml

 

2:server.port: 5601 #listen port

7:server.host: "0.0.0.0" #listen address

21:elasticsearch.url: http://192.168.56.11:9200 #elasticsearch server address

 

3.1.2: Start the kibana service and verify:

 

[root@linux-host1 ~]# systemctl  start kibana

 

[root@linux-host1 ~]# systemctl  enable  kibana

 

[root@linux-host1 ~]# ss -tnl | grep 5601

 

3.1.3: Check the status:

 

http://192.168.56.11:5601/status

 

 

 

3.1.3: Add the index written in the previous step:

 

 

 

3.1.4: Verify the data in kibana:

If no bar chart is shown by default, it may simply be that no new data has been written recently; widen the date range or write new data through logstash:

 

 

 

3.1.5: Check the index status shown by the head plugin:

 

 

 

 

 

Part 4: Collecting logs with logstash:

4.1: Collect a single system log and output it to a file:

Prerequisite: the logstash user needs read permission on the collected log files and write permission on the output files.

 

4.1.1: logstash configuration file:

 

[root@linux-host3 ~]# cat /etc/logstash/conf.d/system-log.conf

 

input {

 

  file {

 

    type => "messagelog"

 

    path => "/var/log/messages"

 

    start_position => "beginning" #collect from the beginning the first time; afterwards only newly appended lines are collected

 

 

 

  }

 

}

 

 

 

output {

 

  file {

 

    path => "/tmp/%{type}.%{+yyyy.MM.dd}"

 

  }

 

}

 

4.1.2: Check the configuration file syntax:

[root@linux-host2 ~]# /usr/share/logstash/bin/logstash -f  /etc/logstash/conf.d/system-log.conf   -t

 

 

 

4.1.3: Generate data and verify:

[root@linux-host3 ~]# echo "test" >> /var/log/messages

[root@linux-host3 ~]# tail /tmp/messagelog.2017.04.20  #verify the file was generated

 

{"path":"/var/log/messages","@timestamp":"2017-04-20T07:12:16.001Z","@version":"1","host":"linux-host3.exmaple.com","message":"test","type":"messagelog"}

 

4.1.4: Check the logstash log to confirm it has permission to read the logs:

 

 

 

4.1.5: Grant read access to the file:

 

[root@linux-host2 ~]# chmod  644 /var/log/messages

 

 

 

4.2: Collect multiple log files with logstash:

4.2.1: Logstash configuration:

 

[root@linux-host3 logstash]# cat /etc/logstash/conf.d/system-log.conf

 

input {

 

  file {

 

    path => "/var/log/messages" #日誌路徑

 

    type => "systemlog" #事件的惟一類型

 

    start_position => "beginning" #第一次收集日誌的位置

 

    stat_interval => "3" #日誌收集的間隔時間

 

  }

 

  file {

 

    path => "/var/log/secure"

 

    type => "securelog"

 

    start_position => "beginning"

 

    stat_interval => "3"

 

  }

 

}

 

 

 

output {

 

  if [type] == "systemlog" {

 

    elasticsearch {

 

      hosts => ["192.168.56.11:9200"]

 

      index => "system-log-%{+YYYY.MM.dd}"

 

    }}

 

  if [type] == "securelog" {

 

    elasticsearch {

 

      hosts => ["192.168.56.11:9200"]

 

      index => "secury-log-%{+YYYY.MM.dd}"

 

    }}   

 

}

 

4.2.2: Restart logstash and check its log for errors:

 

[root@linux-host3 ~]# chmod  644 /var/log/secure

 

[root@linux-host3 ~]# chmod  644 /var/log/messages

 

[root@linux-host3 logstash]# systemctl  restart logstash

 

 

 

4.2.3: Write data into the collected files:

 

[root@linux-host3 logstash]# echo "test" >> /var/log/secure

 

[root@linux-host3 logstash]# echo "test" >> /var/log/messages

 

4.2.4: Add the system-messages index in the kibana UI:

 

 

 

4.2.5: Add the secure-messages index in the kibana UI:

 

 

 

4.2.6: kibana display of system-messages:

 

 

 

4.2.7: kibana display of secure-messages:

 

 

 

 

 

4.3: Collect tomcat and java logs with logstash:

Collect the Tomcat access log and error log for real-time statistics, searchable and displayed in kibana. Each Tomcat server runs logstash to collect its logs, logstash forwards them to elasticsearch for analysis, and kibana displays them in the front end. The configuration steps are as follows:

4.3.1: Deploy the tomcat service:

Install the java environment and create a custom web page for testing.

 

4.3.1.1: Configure the java environment and deploy tomcat:

 

[root@linux-host6 ~]# yum install jdk-8u121-linux-x64.rpm

 

[root@linux-host6 ~]# cd  /usr/local/src/

 

[root@linux-host6 src]# tar xvf apache-tomcat-8.0.38.tar.gz

 

[root@linux-host6 src]# ln -sv /usr/local/src/apache-tomcat-8.0.38 /usr/local/tomcat

 

‘/usr/local/tomcat’ -> ‘/usr/local/src/apache-tomcat-8.0.38’

 

[root@linux-host6 tomcat]# cd /usr/local/tomcat/webapps/

 

[root@linux-host6 webapps]#mkdir /usr/local/tomcat/webapps/webdir

 

[root@linux-host6 webapps]# echo "Tomcat Page" > /usr/local/tomcat/webapps/webdir/index.html

 

[root@linux-host6 webapps]# ../bin/catalina.sh  start

 

[root@linux-host6 webapps]# ss -tnl | grep 8080

 

LISTEN     0      100         :::8080                    :::*

 

4.3.1.2: Confirm the web page is reachable:

 

 

 

4.3.1.3: Convert the tomcat log to json:

 

[root@linux-host6 tomcat]# vim conf/server.xml

 

        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"

 

               prefix="tomcat_access_log" suffix=".log"

 

               pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>

 

[root@linux-host6 tomcat]# ./bin/catalina.sh  stop

[root@linux-host6 tomcat]# rm -rf  logs/*  #delete or truncate the old access logs

[root@linux-host6 tomcat]# ./bin/catalina.sh  start  #start tomcat and visit the page

[root@linux-host6 tomcat]# tail -f logs/tomcat_access_log.2017-04-20.log

 

 

 

 

 

4.3.1.4: Verify the log is valid json:

 

http://www.kjson.com/
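The line can also be validated locally instead of with an online tool. A minimal sketch: the `&quot;` entities in server.xml become literal double quotes in the written log file, so each line should parse as JSON; the sample values below are illustrative:

```python
import json

# a sample access-log line in the shape produced by the valve pattern above
# (illustrative values; a real line carries the full User-Agent string)
line = ('{"clientip":"192.168.56.1","ClientUser":"-","authenticated":"-",'
        '"AccessTime":"[20/May/2017:21:46:22 +0800]",'
        '"method":"GET /webdir/ HTTP/1.1","status":"200","SendBytes":"12",'
        '"Query?string":"","partner":"-","AgentVersion":"Mozilla/5.0"}')

try:
    entry = json.loads(line)          # succeeds only if the line is valid json
    print("valid json, status =", entry["status"])
except ValueError:
    print("not valid json")
```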

 

 

 

4.3.1.5: How to extract the IP from a log line?:

Parsing with a python script:

 

#!/usr/bin/env python

 

#coding:utf-8

 

#Author Zhang Jie

 

 

 

data ={"clientip":"192.168.56.1","ClientUser":"-","authenticated":"-","AccessTime":"[20/May/2017:21:46:22 +0800]","method":"GET /webdir/ HTTP/1.1","status":"200","SendBytes":"12","Query?string":"","partner":"-","AgentVersion":"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"}

 

ip=data["clientip"]

 

print ip

 

4.3.2: Install logstash on the tomcat server to collect the tomcat and system logs:

Tomcat must already be deployed; then install and configure logstash

 

4.3.2.1: Install and configure logstash:

 

[root@linux-host6 ~]# yum install logstash-5.3.0.rpm  -y

 

[root@linux-host6 ~]# vim /etc/logstash/conf.d/tomcat.conf

 

[root@linux-host6 ~]# cat /etc/logstash/conf.d/tomcat.conf

 

input {

 

  file {

 

    path => "/usr/local/tomcat/logs/localhost_access_log.*.txt"

 

    start_position => "end"

 

    type => "tomct-access-log"

 

  }

 

  file {

 

    path => "/var/log/messages"

 

    start_position => "end"

 

    type => "system-log"

 

 }

 

}

 

 

 

output {

 

  if [type] == "tomct-access-log" {

 

    elasticsearch {

 

      hosts => ["192.168.56.11:9200"]

 

      index => "logstash-tomcat-5616-access-%{+YYYY.MM.dd}"

 

      codec => "json"

 

  }}

 

 

 

  if [type] == "system-log" {

 

    elasticsearch {

 

      hosts => ["192.168.56.12:9200"] #寫入到不通的ES服務器

 

      index => "system-log-5616-%{+YYYY.MM.dd}"

 

}}

 

}

 

4.3.2.2: Restart logstash and confirm:

[root@linux-host6 ~]# systemctl  restart logstash #restart logstash after changing the configuration

[root@linux-host6 ~]# tail  -f /var/log/logstash/logstash-plain.log #check the log

 

 

 

[root@linux-host6 ~]# chmod  644 /var/log/messages  #fix the permissions

[root@linux-host6 ~]# systemctl  restart logstash #restart logstash again

 

4.3.2.3: Access tomcat and generate logs:

 

[root@linux-host6 ~]# echo "2017-02-21" >> /var/log/messages

 

4.3.2.4: Check the head plugin to verify the index:

 

 

 

4.3.2.5: Add logstash-tomcat-5616-access- in kibana:

 

 

 

4.3.2.6: Add system-log-5616- in kibana:

 

 

 

4.3.2.7: Verify the data:

 

 

 

4.3.2.8: Generate load with ab from another server and verify the data:

[root@linux-host3 ~]# yum install httpd-tools -y

 

[root@linux-host3 ~]# ab -n1000 -c100 http://192.168.56.16:8080/webdir/

 

 

 

 

 

4.3.3: Collecting java logs:

Use the multiline codec plugin to merge multiple lines into a single event; the what option specifies whether a matched line is merged with the lines before it or after it. https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html

 

4.3.3.1: Deploy logstash on the elasticsearch server:

 

[root@linux-host1 ~]# chown  logstash.logstash /usr/share/logstash/data/queue -R

 

[root@linux-host1 ~]# ll -d /usr/share/logstash/data/queue

 

drwxr-xr-x 2 logstash logstash 6 Apr 19 20:03 /usr/share/logstash/data/queue

 

[root@linux-host1 ~]# cat /etc/logstash/conf.d/java.conf

 

input {

 

        stdin {

 

        codec => multiline {

 

        pattern => "^\[" #當遇到[開頭的行時候將多行進行合併

 

        negate => true  #true爲匹配成功進行操做,false爲不成功進行操做

 

        what => "previous"  #與上面的行合併,若是是下面的行合併就是next

 

        }}

 

}

 

filter { #log filtering: filters written here apply to all events; to filter just one input, wrap the filter in a conditional on that input's type

 

}

 

output {

 

        stdout {

 

        codec => rubydebug

 

}}
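The merge rule configured above (`pattern => "^\["`, `negate => true`, `what => "previous"`) can be sketched in python: any line that does not start with `[` is glued onto the event before it, so a java stack trace stays with its log line. The sample lines are hypothetical:

```python
import re

pattern = re.compile(r"^\[")  # same pattern as in the multiline codec

def merge_multiline(lines):
    """negate => true, what => "previous": a line NOT matching ^\[
    is appended to the previous event instead of starting a new one."""
    events = []
    for line in lines:
        if pattern.match(line) or not events:
            events.append(line)           # a new event starts here
        else:
            events[-1] += "\n" + line     # continuation of the previous event
    return events

# hypothetical java-style log lines: the stack trace spans several lines
lines = [
    "[2017-04-21 10:00:01] INFO starting",
    "[2017-04-21 10:00:02] ERROR boom",
    "java.lang.RuntimeException: boom",
    "    at Example.main(Example.java:5)",
]
print(merge_multiline(lines))   # 2 events: the trace joins the ERROR line
```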

 

4.3.3.2: Test that it starts correctly:

 

[root@linux-host1 ~]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/java.conf

 

 

 

4.3.3.3: Test standard input and output:

 

 

 

4.3.3.4: Configure reading from a log file and writing to a file:

 

[root@linux-host1 ~]# vim /etc/logstash/conf.d/java.conf

 

input {

 

  file {

 

    path => "/elk/logs/ELK-Cluster.log"

 

    type => "javalog"

 

    start_position => "beginning"

 

    codec => multiline {

 

    pattern => "^\["

 

    negate => true

 

    what => "previous"

 

  }}

 

}

 

 

 

output {

 

  if [type] == "javalog" {

 

  stdout {

 

      codec => rubydebug

 

    }

 

  file {

 

    path =>  "/tmp/m.txt"

 

  }}

 

}

 

4.3.3.5: Syntax check:

 

[root@linux-host1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/java.conf  -t

 

 

 

4.3.3.7: Change the output to elasticsearch:

The updated file:

 

[root@linux-host1 ~]# cat /etc/logstash/conf.d/java.conf

 

input {

 

  file {

 

    path => "/elk/logs/ELK-Cluster.log"

 

    type => "javalog"

 

    start_position => "beginning"

 

    codec => multiline {

 

    pattern => "^\["

 

    negate => true

 

    what => "previous"

 

  }}

 

}

 

 

 

output {

 

  if [type] == "javalog" {

 

  elasticsearch {

 

    hosts =>  ["192.168.56.11:9200"]

 

    index => "javalog-5611-%{+YYYY.MM.dd}"

 

  }}

 

}

 

[root@linux-host1 ~]# systemctl  restart logstash

 

Then restart the elasticsearch service as well; this generates fresh log entries to verify that logstash automatically picks up newly written logs.

 

[root@linux-host1 ~]# systemctl  restart elasticsearch

 

4.3.3.8: Add the javalog-5611 index in the kibana UI:

 

 

 

4.3.3.9: Generate data:

 

[root@linux-host1 ~]# cat  /elk/logs/ELK-Cluster.log  >> /tmp/1

 

[root@linux-host1 ~]# cat /tmp/1  >> /elk/logs/ELK-Cluster.log

 

4.3.3.10: View the data in the kibana UI:

 

 

 

4.3.3.11: About sincedb:

[root@linux-host1~]# cat /var/lib/logstash/plugins/inputs/file/.sincedb_1ced15cfacdbb0380466be84d620085a

134219868 0 2064 29465 #records the inode information of the collected file

 

[root@linux-host1 ~]# ll -li /elk/logs/ELK-Cluster.log

 

134219868 -rw-r--r-- 1 elasticsearch elasticsearch 29465 Apr 21 14:33 /elk/logs/ELK-Cluster.log
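The first field of a sincedb entry is the file's inode, the same identifier `ls -li` and `os.stat` report. A small sketch using a throwaway temporary file (the path is generated, not one of the paths above):

```python
import os
import tempfile

# create a scratch file and read its inode, the same identifier
# logstash records in .sincedb to track how far a file has been read
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"demo log line\n")
    path = f.name

st = os.stat(path)
print("inode:", st.st_ino, "size:", st.st_size)  # inode matches `ls -li`
os.unlink(path)
```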

 

4.4: Collect nginx access logs:

4.4.1: Deploy the nginx service:

[root@linux-host6 ~]# yum install gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel

 

[root@linux-host6 ~]# cd /usr/local/src/

 

[root@linux-host6 src]# wget http://nginx.org/download/nginx-1.10.3.tar.gz

 

[root@linux-host6 src]# tar xvf  nginx-1.10.3.tar.gz

 

[root@linux-host6 src]# cd nginx-1.10.3

 

[root@linux-host6 nginx-1.10.3]# ./configure  --prefix=/usr/local/nginx-1.10.3

 

[root@linux-host6 nginx-1.10.3]# make && make install

 

[root@linux-host6 nginx-1.10.3]# ln -sv /usr/local/nginx-1.10.3 /usr/local/nginx

 

‘/usr/local/nginx’ -> ‘/usr/local/nginx-1.10.3’

 

4.4.2: Edit the configuration file and prepare the web page:

 

[root@linux-host6 nginx-1.10.3]# cd /usr/local/nginx

 

[root@linux-host6 nginx]# vim conf/nginx.conf

 

        location /web {

            root   html;

            index  index.html index.htm;

        }

 

[root@linux-host6 nginx]# mkdir  /usr/local/nginx/html/web

 

[root@linux-host6 nginx]# echo " Nginx WebPage! " > /usr/local/nginx/html/web/index.html

 

4.4.2: Test the nginx configuration:

/usr/local/nginx/sbin/nginx  -t #test the configuration syntax

/usr/local/nginx/sbin/nginx  #start the service

/usr/local/nginx/sbin/nginx  -s reload #reload the configuration

 

4.4.3: Start nginx and verify:

 

[root@linux-host6 nginx]# /usr/local/nginx/sbin/nginx  -t

 

nginx: the configuration file /usr/local/nginx-1.10.3/conf/nginx.conf syntax is ok

 

nginx: configuration file /usr/local/nginx-1.10.3/conf/nginx.conf test is successful

 

[root@linux-host6 nginx]# /usr/local/nginx/sbin/nginx

 

[root@linux-host6 nginx]# lsof  -i:80

 

COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME

 

nginx   17719   root    6u  IPv4  90721      0t0  TCP *:http (LISTEN)

 

nginx   17720 nobody    6u  IPv4  90721      0t0  TCP *:http (LISTEN)

 

4.4.4: Access the nginx page:

 

 

 

4.4.5: Convert the nginx log to json format:

 

[root@linux-host6 nginx]# vim  conf/nginx.conf

 

log_format access_json '{"@timestamp":"$time_iso8601",'

 

        '"host":"$server_addr",'

 

        '"clientip":"$remote_addr",'

 

        '"size":$body_bytes_sent,'

 

        '"responsetime":$request_time,'

 

        '"upstreamtime":"$upstream_response_time",'

 

        '"upstreamhost":"$upstream_addr",'

 

        '"http_host":"$host",'

 

        '"url":"$uri",'

 

        '"domain":"$host",'

 

        '"xff":"$http_x_forwarded_for",'

 

        '"referer":"$http_referer",'

 

        '"status":"$status"}';

 

    access_log  /var/log/nginx/access.log  access_json;

 

[root@linux-host6 nginx]# mkdir /var/log/nginx

 

[root@linux-host6 nginx]# /usr/local/nginx/sbin/nginx  -t

 

nginx: the configuration file /usr/local/nginx-1.10.3/conf/nginx.conf syntax is ok

 

nginx: configuration file /usr/local/nginx-1.10.3/conf/nginx.conf test is successful

 

4.4.6: Confirm the log format is json:

 

[root@linux-host6 nginx]# tail  /var/log/nginx/access.log

 

{"@timestamp":"2017-04-21T17:03:09+08:00","host":"192.168.56.16","clientip":"192.168.56.1","size":0,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.56.16","url":"/web/index.html","domain":"192.168.56.16","xff":"-","referer":"-","status":"304"}

 

4.4.7: Configure logstash to collect the nginx access log:

 

[root@linux-host6 conf.d]# vim nginx.conf

 

input {

 

  file {

 

    path => "/var/log/nginx/access.log"

 

    start_position => "end"

 

    type => "nginx-accesslog"

 

    codec => json

 

  }

 

}

 

 

 

 

 

output {

 

  if [type] == "nginx-accesslog" {

 

    elasticsearch {

 

      hosts => ["192.168.56.11:9200"]

 

      index => "logstash-nginx-accesslog-5616-%{+YYYY.MM.dd}"

 

  }}

 

}

 

4.4.8: Add the index in the kibana UI:

 

 

 

4.4.9: Verify the data in the kibana UI:

 

 

 

4.5: Collecting TCP/UDP logs

Use logstash's tcp/udp input plugins to collect logs. This is typically used to backfill logs that are missing from elasticsearch: the lost entries can be written straight to the elasticsearch servers through a TCP port.

4.5.1: logstash configuration file; test collection first:

 

[root@linux-host6 ~]# cat /etc/logstash/conf.d/tcp.conf

 

input {

 

  tcp {

 

    port => 9889

 

    type => "tcplog"

 

    mode => "server" 

 

  }

 

}

 

 

 

 

 

output {

 

  stdout {

 

    codec => rubydebug

 

  }

 

}

 

4.5.2: Verify the port came up:

 

[root@linux-host6 src]# /usr/share/logstash/bin/logstash -f  /etc/logstash/conf.d/tcp.conf

 

 

 

4.5.3: Install the nc command on another server:

NetCat (nc), known as the "Swiss Army knife" of network tools, is a simple, reliable utility that reads and writes data over TCP or UDP, with many other features besides.

[root@linux-host1 ~]# yum install nc -y

 

[root@linux-host1 ~]# echo "nc test" | nc 192.168.56.16 9889

 

4.5.4: Verify logstash received the data:

 

 

 

4.5.5: Send a file with nc:

 

[root@linux-host1 ~]# nc 192.168.56.16 9889 < /etc/passwd

 

4.5.6: Verify the data in logstash:

 

 

 

4.5.7: Send a message through a pseudo-device:

On Unix-like operating systems, device nodes do not have to correspond to physical devices; those without such a correspondence are pseudo-devices, which the OS uses to provide various features. tcp is just one of the many pseudo-devices under /dev.

[root@linux-host1 ~]# echo "pseudo-device test"  > /dev/tcp/192.168.56.16/9889

 

4.5.8: Verify the data in logstash:

 

 

 

4.5.9: Change the output to elasticsearch:

 

[root@linux-host6 conf.d]# vim /etc/logstash/conf.d/tcp.conf

 

input {

 

  tcp {

 

    port => 9889

 

    type => "tcplog"

 

    mode => "server"

 

  }

 

}

 

 

 

output {

 

  elasticsearch {

 

    hosts => ["192.168.56.11:9200"]

 

    index =>  "logstash-tcplog-%{+YYYY.MM.dd}"

 

  }

 

}

 

[root@linux-host6 conf.d]# systemctl  restart logstash

 

4.5.10: Send logs via nc or the pseudo-device:

[root@linux-host1 ~]# echo "pseudo-device 1"  > /dev/tcp/192.168.56.16/9889

[root@linux-host1 ~]# echo "pseudo-device 2"  > /dev/tcp/192.168.56.16/9889

 

4.5.11: Add the index in the kibana UI:

 

 

 

4.5.12: Verify the data:

 

 

 

4.6: Collect haproxy logs via rsyslog:

  On CentOS 6 and earlier the service was called syslog; from CentOS 7 onward it is rsyslog. According to the project (2013-era figures), rsyslog can forward on the order of a million log messages per second. Official site: http://www.rsyslog.com/. Check the installed version as follows:

[root@linux-host1 ~]# yum list rsyslog

Installed Packages
rsyslog.x86_64   7.4.7-12.el7

 

4.6.1: Build, install, and configure haproxy:

 

[root@linux-host2 ~]# cd /usr/local/src/

 

[root@linux-host2 src]# wget http://www.haproxy.org/download/1.7/src/haproxy-1.7.5.tar.gz

 

[root@linux-host2 src]# tar xvf  haproxy-1.7.5.tar.gz

 

[root@linux-host2 src]# cd haproxy-1.7.5

 

[root@linux-host2 src]# yum install gcc pcre pcre-devel openssl  openssl-devel -y

[root@linux-host2 haproxy-1.7.5]# make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1  PREFIX=/usr/local/haproxy

 

[root@linux-host2 haproxy-1.7.5]#  make install PREFIX=/usr/local/haproxy

 

[root@linux-host2 haproxy-1.7.5]# /usr/local/haproxy/sbin/haproxy  -v #confirm the version

HA-Proxy version 1.7.5 2017/04/03

Copyright 2000-2017 Willy Tarreau <willy@haproxy.org>

Prepare the startup script:

 

[root@linux-host2 haproxy-1.7.5]#  vim /usr/lib/systemd/system/haproxy.service

 

[Unit]

 

Description=HAProxy Load Balancer

 

After=syslog.target network.target

 

 

 

[Service]

 

EnvironmentFile=/etc/sysconfig/haproxy

 

ExecStart=/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid $OPTIONS

 

ExecReload=/bin/kill -USR2 $MAINPID

 

 

 

[Install]

 

WantedBy=multi-user.target

 

 

 

[root@linux-host2 haproxy-1.7.5]# cp /usr/local/src/haproxy-1.7.5/haproxy-systemd-wrapper  /usr/sbin/

 

[root@linux-host2 haproxy-1.7.5]# cp /usr/local/src/haproxy-1.7.5/haproxy  /usr/sbin/

 

 

 

[root@linux-host2 haproxy-1.7.5]# vim /etc/sysconfig/haproxy #system-level configuration file

 

# Add extra options to the haproxy daemon here. This can be useful for

 

# specifying multiple configuration files with multiple -f options.

 

# See haproxy(1) for a complete list of options.

 

OPTIONS=""

 

 

 

 

 

[root@linux-host2 haproxy-1.7.5]# mkdir /etc/haproxy

 

[root@linux-host2 haproxy-1.7.5]# cat  /etc/haproxy/haproxy.cfg

 

global

 

maxconn 100000

 

chroot /usr/local/haproxy

 

uid 99

 

gid 99

 

daemon

 

nbproc 1

 

pidfile /usr/local/haproxy/run/haproxy.pid

 

log 127.0.0.1 local6 info

 

 

 

defaults

 

option http-keep-alive

 

option  forwardfor

 

maxconn 100000

 

mode http

 

timeout connect 300000ms

 

timeout client  300000ms

 

timeout server  300000ms

 

 

 

listen stats

 

 mode http

 

 bind 0.0.0.0:9999

 

 stats enable

 

 log global

 

 stats uri     /haproxy-status

 

 stats auth    haadmin:123456

 

 

 

#frontend web_port

 

frontend web_port

 

        bind 0.0.0.0:80

 

        mode http

 

        option httplog

 

        log global

 

        option  forwardfor

 

###################ACL Setting##########################

 

        acl pc          hdr_dom(host) -i www.elk.com

 

        acl mobile      hdr_dom(host) -i m.elk.com

 

###################USE ACL##############################

 

        use_backend     pc_host        if  pc

 

        use_backend     mobile_host    if  mobile

 

########################################################

 

 

 

backend pc_host

 

        mode    http

 

        option  httplog

 

        balance source

 

        server web1  192.168.56.11:80 check inter 2000 rise 3 fall 2 weight 1

 

 

 

backend mobile_host

 

        mode    http

 

        option  httplog

 

        balance source

 

        server web1  192.168.56.11:80 check inter 2000 rise 3 fall 2 weight 1

 

 

 

4.6.2: Edit the rsyslog service configuration file:

$ModLoad imudp

$UDPServerRun 514

$ModLoad imtcp

$InputTCPServerRun 514  #uncomment lines 15/16/19/20

local6.*     @@192.168.56.11:5160   #append as the last line; local6 matches the facility defined in the haproxy configuration
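rsyslog prefixes each forwarded message with an RFC 3164 header whose PRI field is facility * 8 + severity; for the local6.info selector used here that works out to <182>. A small sketch (the facility and severity codes come from the syslog standard):

```python
# syslog PRI = facility * 8 + severity (RFC 3164)
FACILITIES = {"local6": 22}   # local0..local7 are facilities 16..23
SEVERITIES = {"info": 6}

def pri(facility, severity):
    return FACILITIES[facility] * 8 + SEVERITIES[severity]

# haproxy logs to local6 at level info, so rsyslog forwards
# messages whose header starts with <182>
print("<%d>" % pri("local6", "info"))
```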

 

 

 

4.6.3: Restart the haproxy and rsyslog services:

 

[root@linux-host2 ~]# systemctl  enable  haproxy

 

[root@linux-host2 ~]# systemctl  restart haproxy

 

[root@linux-host2 ~]# systemctl  restart rsyslog

 

4.6.4:驗證haproxy端口及服務:

 

 

 

Confirm the service process is running:

 

 

 

4.6.5: Edit the local hosts file:

 

C:\Windows\System32\drivers\etc

 

192.168.56.12 www.elk.com

 

192.168.56.12 m.elk.com

 

4.6.6: Test the domains and access:

 

 

 

Start nginx on the backend web server:

 

[root@linux-host1 ~]# /usr/local/nginx/sbin/nginx

 

Confirm the nginx web page is accessible:

 

 

 

 

 

4.6.7: Edit the logstash configuration file:

 

Configure logstash to listen on a local port as the log input source. The IP and port that rsyslog on the haproxy server sends to must match the IP:port that logstash listens on. In this setup, logstash runs on host1, while host2 collects the haproxy access logs and forwards them to logstash on host1 for processing. The logstash configuration is as follows:

 

[root@linux-host1 conf.d]# cat /etc/logstash/conf.d/rsyslog.conf

 

input{

 

      syslog {

 

        type => "system-rsyslog-haproxy5612"

 

        port => "5160"  #listen on a local port

 

}}

 

 

 

output{

 

        stdout{

 

                codec => rubydebug

 

}}

 

4.6.8: Test logstash with the -f option:

 

[root@linux-host1 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/rsyslog.conf
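Instead of opening the haproxy page, a quick way to exercise the syslog input is to send a hand-built test message to it. A minimal python sketch (the facility/severity values and target address are assumptions based on the configuration above; the syslog input will still tag such a minimal payload even without a full timestamp/hostname header):

```python
import socket

def syslog_pri(facility, severity):
    # PRI = facility * 8 + severity (RFC 3164); local6 is facility 22, info is severity 6
    return facility * 8 + severity

def build_message(facility=22, severity=6, tag="haproxy", msg="test message"):
    # minimal RFC 3164-style payload accepted by the logstash syslog input
    return "<%d>%s: %s" % (syslog_pri(facility, severity), tag, msg)

payload = build_message()
print(payload)  # <182>haproxy: test message
# uncomment to actually send it to the listener defined in rsyslog.conf:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(payload.encode(), ("192.168.56.11", 5160))
```

If the send is enabled, the message should appear in the rubydebug output of the running logstash.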

 

 

 

4.6.9: Access haproxy over the web and verify the data:

 

Add local name resolution:

 

[root@linux-host1 ~]# tail -n2  /etc/hosts

 

192.168.56.12 www.elk.com

 

192.168.56.12 m.elk.com

 

 

 

[root@linux-host1 ~]# curl  http://www.elk.com/nginxweb/index.html

 

 

 

4.6.10: Access the haproxy management page:

 

 

 

4.6.11: The haproxy management page:

 

 

 

4.6.12: Verify the logstash output:

 

 

 

4.6.13: Change the output to elasticsearch:

 

[root@linux-host1 conf.d]# cat  /etc/logstash/conf.d/rsyslog.conf

 

input{

 

      syslog {

 

        type => "system-rsyslog"

 

        port => "5160"

 

}}

 

 

 

output{

 

  elasticsearch {

 

    hosts => ["192.168.56.11:9200"]

 

    index =>  "logstash-rsyslog-%{+YYYY.MM.dd}"

 

  }

 

}

 

[root@linux-host6 conf.d]# systemctl  restart logstash
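The index option above embeds the event date, so elasticsearch receives one index per day. The name logstash generates from `%{+YYYY.MM.dd}` can be sketched as:

```python
from datetime import date

def daily_index(prefix, d):
    # mirrors logstash's sprintf date format "%{+YYYY.MM.dd}"
    return "%s-%s" % (prefix, d.strftime("%Y.%m.%d"))

print(daily_index("logstash-rsyslog", date(2017, 5, 30)))  # logstash-rsyslog-2017.05.30
```

Daily indices make it easy to delete or close old log data by date.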

 

4.6.14: Access haproxy over the web to generate new logs:

 

 

 

Open the head plugin to confirm the index was created:

 

 

 

4.6.15: Add the index in the kibana interface:

 

 

 

4.6.16: Verify the data in kibana:

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

4.7: Collect logs with logstash and write them to redis:

 

Deploy the redis service on a dedicated server to be used purely as a log cache, for scenarios where web servers produce large volumes of logs. For example, a server may be close to running out of memory because redis has accumulated a large amount of unread data.

 

 

 

Overall architecture:

 

 

 

4.7.1: Deploy redis:

 

[root@linux-host2 ~]# cd /usr/local/src/

 

[root@linux-host2 src]#  

 

[root@linux-host2 src]# tar  xvf redis-3.2.8.tar.gz

 

[root@linux-host2 src]# ln -sv /usr/local/src/redis-3.2.8 /usr/local/redis

 

‘/usr/local/redis’ -> ‘/usr/local/src/redis-3.2.8’

 

[root@linux-host2 src]#cd  /usr/local/redis/deps

 

[root@linux-host2 redis]# yum install gcc

 

[root@linux-host2 deps]# make geohash-int hiredis jemalloc linenoise lua

 

[root@linux-host2 deps]# cd ..

 

[root@linux-host2 redis]# make

 

[root@linux-host2 redis]# vim  redis.conf

 

[root@linux-host2 redis]# grep "^[a-zA-Z]" redis.conf  #the main changes

 

bind 0.0.0.0

 

protected-mode yes

 

port 6379

 

tcp-backlog 511

 

timeout 0

 

tcp-keepalive 300

 

daemonize yes

 

supervised no

 

pidfile /var/run/redis_6379.pid

 

loglevel notice

 

logfile ""

 

databases 16

 

save ""

 

rdbcompression no  #whether to compress RDB files

 

rdbchecksum no  #whether to checksum RDB files

 

[root@linux-host2 redis]# ln -sv /usr/local/redis/src/redis-server  /usr/bin/

 

‘/usr/bin/redis-server’ -> ‘/usr/local/redis/src/redis-server’

 

[root@linux-host2 redis]# ln -sv /usr/local/redis/src/redis-cli  /usr/bin/

 

‘/usr/bin/redis-cli’ -> ‘/usr/local/redis/src/redis-cli’

 

 

 

4.7.2: Set the redis access password:

 

For security, a redis connection password is mandatory in production:

 

[root@linux-host2 redis]# redis-cli

 

127.0.0.1:6379> config set requirepass 123456  #set dynamically; lost after a restart

 

OK

 

480 requirepass  123456  #line 480 of the redis.conf configuration file

 

 

 

4.7.3: Start and test the redis service:

 

[root@linux-host2 redis]# redis-server  /usr/local/redis/redis.conf #start the service

 

[root@linux-host2 redis]# redis-cli

 

127.0.0.1:6379> ping

 

PONG
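redis-cli speaks the RESP protocol; what the PING above (and an AUTH with the password) looks like on the wire can be sketched in python, purely for illustration:

```python
def resp_command(*args):
    # encode a redis command the way redis-cli sends it: a RESP array of bulk strings
    out = "*%d\r\n" % len(args)
    for arg in args:
        out += "$%d\r\n%s\r\n" % (len(arg), arg)
    return out.encode()

print(resp_command("PING"))            # b'*1\r\n$4\r\nPING\r\n'
print(resp_command("AUTH", "123456"))  # b'*2\r\n$4\r\nAUTH\r\n$6\r\n123456\r\n'
```

The server replies `+PONG\r\n` to a successful PING, which redis-cli renders as `PONG`.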

 

4.7.4: Configure logstash to write logs to redis:

 

Write the tomcat access logs collected by logstash on the tomcat server to the redis server, then use another logstash to pull the data out of redis and write it to the elasticsearch server.

 

Official documentation: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-redis.html

 

 

 

[root@linux-host2 tomcat]# cat /etc/logstash/conf.d/tomcat_tcp.conf

 

input {

 

  file {

 

    path => "/usr/local/tomcat/logs/tomcat_access_log.*.log"

 

    type => "tomcat-accesslog-5612"

 

    start_position => "beginning"

 

    stat_interval => "2"

 

  }

 

  tcp {

 

    port => 7800

 

    mode => "server"

 

    type => "tcplog-5612"

 

  }

 

}

 

 

 

output {

 

  if [type] == "tomcat-accesslog-5612" {

 

    redis {

 

      data_type => "list"

 

      key => "tomcat-accesslog-5612"

 

      host => "192.168.56.12"

 

      port => "6379"

 

      db => "0"

 

      password => "123456"

 

 }}

 

  if [type] == "tcplog-5612" {

 

    redis {

 

      data_type => "list"

 

      key => "tcplog-5612"

 

      host => "192.168.56.12"

 

      port => "6379"

 

      db => "1"

 

      password => "123456"

 

}}

 

}

 

4.7.5: Test whether the logstash configuration file syntax is correct:

 

[root@linux-host2 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tomcat_tcp.conf

 

 

 

 

 

4.7.6: Access the tomcat web page and generate system logs:

 

 

 

[root@linux-host1 ~]# echo "僞設備1"  > /dev/tcp/192.168.56.12/7800

 

 

 

4.7.7: Verify redis has received the data:

 

 

 

4.7.8: Configure another logstash server to read data from redis:

 

Configure a dedicated logstash server to read the specified keys from redis and write the data to elasticsearch.

 


 

[root@linux-host1 conf.d]# cat /etc/logstash/conf.d/redis-tomcat-es.conf

 

input {

 

  redis {

 

    data_type => "list"

 

    key => "tomcat-accesslog-5612"

 

    host => "192.168.56.12"

 

    port => "6379"

 

    db => "0"

 

    password => "123456"

 

    codec => "json"

 

  }

 

 

 

  redis {

 

    data_type => "list"

 

    key => "tcplog-5612"

 

    host => "192.168.56.12"

 

    port => "6379"

 

    db => "1"

 

    password => "123456"

 

  }

 

}

 

 

 

output {

 

  if [type] == "tomcat-accesslog-5612" {

 

    elasticsearch {

 

      hosts => ["192.168.56.11:9200"]

 

      index => "logstash-tomcat5612-accesslog-%{+YYYY.MM.dd}"

 

}}

 

 

 

  if [type] == "tcplog-5612" {

 

    elasticsearch {

 

      hosts => ["192.168.56.11:9200"]

 

      index => "logstash-tcplog5612-%{+YYYY.MM.dd}"

 

}}

 

}

 

 

 

4.7.9: Test logstash:

 

[root@linux-host1 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-tomcat-es.conf

 

 

 

4.7.10: Verify the data has been consumed from redis:

 

 

 

4.7.11: Verify the data in the head plugin:

 

 

 

4.7.12: Add the tomcat access log index in kibana:

 

 

 

4.7.13: Add the tcp log index in kibana:

 

 

 

4.7.14: Verify the tomcat access logs in kibana:

 

 

 

4.7.15: Verify the tcp logs in kibana:

 

 

 

#Note: once testing passes, start logstash normally as a service.

 

 

 

 

 

4.8: Use filebeat instead of logstash to collect logs:

 

Filebeat is a lightweight, single-purpose log collector designed for servers without java installed. It can forward logs to logstash, elasticsearch, redis, and similar destinations for further processing.

 

Download: https://www.elastic.co/downloads/beats/filebeat

 

Official documentation: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html

 

4.8.1: Confirm the log format is JSON:

 

First access the web server to generate some logs, then confirm they are in JSON format, since later steps depend on it:

 

[root@linux-host2 ~]# ab -n100 -c100 http://192.168.56.16:8080/web

 

4.8.2: Confirm the log format; the logs will be used for statistics later:

 

[root@linux-host2 ~]# tail  /usr/local/tomcat/logs/localhost_access_log.2017-04-28.txt

 

{"clientip":"192.168.56.15","ClientUser":"-","authenticated":"-","AccessTime":"[28/Apr/2017:21:16:46 +0800]","method":"GET /webdir/ HTTP/1.0","status":"200","SendBytes":"12","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}

 

{"clientip":"192.168.56.15","ClientUser":"-","authenticated":"-","AccessTime":"[28/Apr/2017:21:16:46 +0800]","method":"GET /webdir/ HTTP/1.0","status":"200","SendBytes":"12","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
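Since each access-log line is a complete JSON object, it parses directly into a dict. A quick sanity check on the sample line above:

```python
import json

line = ('{"clientip":"192.168.56.15","ClientUser":"-","authenticated":"-",'
        '"AccessTime":"[28/Apr/2017:21:16:46 +0800]","method":"GET /webdir/ HTTP/1.0",'
        '"status":"200","SendBytes":"12","Query?string":"","partner":"-",'
        '"AgentVersion":"ApacheBench/2.3"}')
event = json.loads(line)
# the fields used later for statistics
print(event["clientip"], event["status"], event["AgentVersion"])  # 192.168.56.15 200 ApacheBench/2.3
```

If `json.loads` raises an error on a line, the tomcat access-log pattern is not valid JSON yet and must be fixed first.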

 

4.8.3: Install and configure filebeat:

 

[root@linux-host2 ~]# systemctl  stop logstash  #stop the logstash service (if installed)

 

[root@linux-host2 src]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.3.2-x86_64.rpm

 

[root@linux-host6 src]# yum install filebeat-5.3.2-x86_64.rpm  -y

 

4.8.4: Configure filebeat to collect system logs:

 

[root@linux-host2 ~]# cd /etc/filebeat/

 

[root@linux-host2 filebeat]# cp filebeat.yml  filebeat.yml.bak #back up the original configuration file

 

 

 

4.8.4.1: Collect multiple system logs with filebeat and output them to a local file:

 

[root@linux-host2 ~]# grep -v "#"  /etc/filebeat/filebeat.yml | grep -v "^$"

 


 

filebeat.prospectors:

 

- input_type: log

 

  paths:

 

    - /var/log/messages

 

    - /var/log/*.log

 

  exclude_lines: ["^DBG","^$"] #lines to exclude

 

  #include_lines: ["^ERR", "^WARN"]  #lines to include only

 

  document_type: system-log-5612 #type tag, inserted into every log entry

 

output.file:

 

  path: "/tmp"

 

  filename: "filebeat.txt"
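exclude_lines takes regular expressions that are matched against each line; the effect of the two patterns above can be sketched as (the sample lines are made up for illustration):

```python
import re

patterns = [re.compile(p) for p in ("^DBG", "^$")]  # same patterns as exclude_lines

def keep(line):
    # filebeat drops a line when any exclude pattern matches it
    return not any(p.search(line) for p in patterns)

lines = ["DBG debug noise", "", "May 29 10:00:01 host kernel: eth0 up"]
print([l for l in lines if keep(l)])  # ['May 29 10:00:01 host kernel: eth0 up']
```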

 

4.8.4.2: Start the filebeat service and verify the local file receives data:

 

[root@linux-host2 filebeat]# systemctl  start filebeat

 

 

 

 

 

4.8.5: Collect a single log type with filebeat and write it to redis:

 

Filebeat can write data directly to a redis server; this step writes to a single key in redis. Filebeat also supports writing to elasticsearch, logstash, and other destinations.

 

4.8.5.1: Filebeat configuration:

 

[root@linux-host2 ~]# grep -v "#"  /etc/filebeat/filebeat.yml | grep -v "^$"

 

filebeat.prospectors:

 

- input_type: log

 

  paths:

 

    - /var/log/messages

 

    - /var/log/*.log

 

  exclude_lines: ["^DBG","^$"]

 

  document_type: system-log-5612

 

 

 

output.redis:

 

  hosts: ["192.168.56.12:6379"]

 

  key: "system-log-5612"  #a custom key name is recommended to simplify later log processing

 

  db: 1  #which redis database to use

 

  timeout: 5  #timeout in seconds

 

  password: 123456 #redis password

 

4.8.5.2: Verify redis has data:

 

 

 

4.8.5.3: Inspect the log data in redis:

 

Make sure the selected db matches the one filebeat writes to.

 

 

 

4.8.5.4: Configure logstash to read the above logs from redis:

 

[root@linux-host1 ~]# cat   /etc/logstash/conf.d/redis-systemlog-es.conf

 

input {

 

  redis {

 

    host => "192.168.56.12"

 

    port => "6379"

 

    db => "1"

 

    key => "system-log-5612"

 

    data_type => "list"

 

 }

 

}

 

 

 

 

 

output {

 

  if [type] == "system-log-5612" {

 

    elasticsearch {

 

      hosts => ["192.168.56.11:9200"]

 

      index => "system-log-5612"

 

}}

 

}

 

 

 

[root@linux-host1 ~]# systemctl  restart logstash #restart the logstash service

 

4.8.5.5: Check the logstash service logs:

 

 

 

 

 

 

 

 

 

4.8.5.6: Check whether redis still has data:

 

 

 

 

 

4.8.5.7: Verify in the head plugin that the index was created:

 

 

 

4.8.5.8: Add the index in the kibana interface:

 

 

 

4.8.5.9: Verify the system logs in kibana:

 

 

 

4.8.6: Monitor the redis queue length:

 

In real environments, large amounts of data may pile up in redis because logstash fails to extract the logs in time for one reason or another. This consumes a large amount of memory on the redis server, to the point where memory is nearly exhausted, as in the following scenario:

 

 

 

Checking the log queue length in redis shows a large backlog of logs:

 

 

 

4.8.6.1: Script contents:

 

 

 

#!/usr/bin/env python

 

#coding:utf-8

 

#Author Zhang jie

 

import redis

 

def redis_conn():

 

    pool=redis.ConnectionPool(host="192.168.56.12",port=6379,db=0,password="123456")

 

    conn = redis.Redis(connection_pool=pool)

 

    data = conn.llen('tomcat-accesslog-5612')

 

    print(data)

 

redis_conn()
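For alerting (from cron, zabbix, and similar tools) the length returned by llen can be compared against a threshold; a small hedged sketch, with the limit value an assumption:

```python
def over_threshold(length, limit=100000):
    # alarm when the backlog in the redis list exceeds 'limit' entries
    return length > limit

# feed in the value printed by the script above, e.g.:
print(over_threshold(250000))  # True
print(over_threshold(42))      # False
```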

 

 

 

 

 

 

 

 

 

 

 

4.9: Log collection in practice:

 

4.9.1: Architecture planning:

 

Reading the diagram below from left to right: access to the ELK log analysis platform first hits two nginx+keepalived servers providing load-balanced high availability at the keepalived IP, so the platform stays reachable if one nginx proxy fails. nginx forwards the request to kibana, and kibana fetches its data from elasticsearch, a two-node cluster in which data may be stored on either node. The redis server acts as a temporary buffer: when web servers generate more logs than can be collected and stored consistently, logs are cached in redis (which can itself be a cluster) and continuously drained by a logstash server during off-peak hours. A separate mysql database server persists selected data. Web server logs are collected by filebeat, sent to another logstash, and written into redis, which completes the collection path. As the diagram shows, redis sits at the very center of the architecture, and both sides depend on it running correctly: web server logs pass through filebeat and a forwarding-layer logstash into different redis keys, an extraction-layer logstash then pulls the data from redis and writes it into different elasticsearch indices by type, and users finally view the collected logs through kibana behind the nginx proxy:

 

4.9.2: Forward logs collected by filebeat to logstash:

 

Official documentation: https://www.elastic.co/guide/en/beats/filebeat/current/logstash-output.html

 

 

 

So far only system logs are collected. Next we collect the tomcat access logs and the catalina.out file generated at startup, test multi-line matching, and change the output to logstash, which writes to different redis keys depending on the log type. When one filebeat instance collects several log types at once, e.g., system logs plus tomcat access logs, the immediate problem is that the events must be written to different redis keys. First have logstash listen on a port and do a stdout test; the configuration is:

 

4.9.2.1: Output test with logstash:

 

[root@linux-host1 conf.d]# cat beats.conf

 

input {

 

        beats {

 

        port => 5044

 

    }

 

}

 

 

 

#temporarily change the output to a file for testing

 

output {

 

  file {

 

    path => "/tmp/filebeat.txt"

 

  }

 

}

 

 

 

4.9.2.2: Test the syntax:

 

[root@linux-host1 conf.d]# /usr/share/logstash/bin/logstash -f  /etc/logstash/conf.d/beats.conf  -t

 

 

 

4.9.2.3: Start logstash:

 

[root@linux-host1 conf.d]# ll

 

total 8

 

-rw-r--r-- 1 root root 139 May 29 17:39 beats.conf

 

-rw-r--r-- 1 root root 319 May 29 16:16 redis-systemlog-es.conf #keep this config; it will be used later to verify multiple filebeat outputs, e.g., simultaneous output to redis and logstash

 

[root@linux-host1 conf.d]# systemctl  restart  logstash #restart the service

 

4.9.2.4: Update the filebeat configuration on the web server:

 

[root@linux-host2 ~]# grep -v "#"  /etc/filebeat/filebeat.yml | grep -v "^$"

 

filebeat.prospectors:

 

- input_type: log

 

  paths:

 

    - /var/log/messages

 

    - /var/log/*.log

 

  exclude_lines: ["^DBG","^$"]

 

  document_type: system-log-5612

 

output.redis:

 

  hosts: ["192.168.56.12:6379"]

 

  key: "system-log-5612" 

 

  db: 1

 

  timeout: 5

 

  password: 123456

 

output.logstash:

 

  hosts: ["192.168.56.11:5044"] #logstash server address; multiple entries are allowed

 

  enabled: true   #enable output to logstash; defaults to true

 

  worker: 1  #number of worker threads

 

  compression_level: 3 #compression level

 

  #loadbalance: true #enable load balancing across multiple outputs

 

 

 

4.9.2.5: Restart filebeat:

 

[root@linux-host2 ~]# systemctl  restart filebeat

 

 

 

4.9.2.6: Manually append to the messages file:

 

[root@linux-host2 filebeat]# echo "test" >> /var/log/messages

 

4.9.2.7: Verify on the logstash server that output reaches the specified file:

 

 

 

4.9.2.8: Verify in kibana the system logs configured in the previous step:

 

This confirms filebeat can output to multiple targets simultaneously.

 

 

 

 

 

4.9.3: Collect multiple log types with filebeat:

 

This time the tomcat access logs are collected as well, i.e., both the server's system logs and the tomcat access logs, each with its own log type, all forwarded to logstash for further processing:

 

4.9.3.1: Update the web server's filebeat configuration as follows:

 

[root@linux-host2 filebeat]#  grep -v "#"  /etc/filebeat/filebeat.yml | grep -v "^$"

 

filebeat.prospectors:

 

- input_type: log

 

  paths:

 

    - /var/log/messages

 

    - /var/log/*.log

 

  exclude_lines: ["^DBG","^$"]

 

  document_type: system-log-5612

 

- input_type: log

 

  paths:

 

    - /usr/local/tomcat/logs/tomcat_access_log.*.log

 

  document_type: tomcat-accesslog-5612

 

output.logstash:

 

  hosts: ["192.168.56.11:5044","192.168.56.11:5045"] #multiple logstash servers

 

  enabled: true

 

  worker: 1

 

  compression_level: 3

 

  loadbalance: true

 

4.9.3.2: Restart the filebeat service:

 

[root@linux-host2 ~]# systemctl  restart filebeat

 

4.9.3.4: Update the logstash service, adding a second beats configuration:

 

[root@linux-host1 conf.d]# cp beats.conf  beats-5045.conf

 

[root@linux-host1 conf.d]# cat beats-5045.conf

 

input {

 

        beats {

 

        port => 5045 #open another port

 

        codec => "json"

 

        }

 

}

 

 

 

output {

 

  file {

 

    path => "/tmp/filebeat.txt"

 

  }

 

}

 

4.9.3.5: Restart the logstash service and verify the port:

 

[root@linux-host1 conf.d]# systemctl  restart  logstash

 

 

 

4.9.3.6: Access the tomcat web page and append to the messages file:

 

[root@linux-host2 filebeat]# echo "test" >> /var/log/messages

 

[root@linux-host2 filebeat]# ab -n10 -c5 http://192.168.56.12:8080/webdir/index.html

 

4.9.3.7: Verify /tmp/filebeat.txt has content:

 

 

 

4.9.3.8: Change both beats outputs entirely to redis:

 

The output sections are identical; only the input ports differ: one is 5044 and the other 5045.

 

[root@linux-host1 conf.d]# cat beats.conf

 

input {

 

        beats {

 

        port => 5044

 

        codec => "json"

 

        }

 

}

 

 

 

 

 

output {

 

  if [type] == "system-log-5612" {

 

  redis {

 

    host => "192.168.56.12"

 

    port => "6379"

 

    db => "1"

 

    key => "system-log-5612"

 

    data_type => "list"

 

    password => "123456"

 

 }}

 

  if [type] == "tomcat-accesslog-5612" {

 

  redis {

 

    host => "192.168.56.12"

 

    port => "6379"

 

    db => "0"

 

    key => "tomcat-accesslog-5612"

 

    data_type => "list"

 

    password => "123456"

 

 }}

 

}

 

[root@linux-host1 conf.d]# cat beats-5045.conf

 

input {

 

        beats {

 

        port => 5045

 

        codec => "json"

 

        }

 

}

 

 

 

output {

 

  if [type] == "system-log-5612" {

 

  redis {

 

    host => "192.168.56.12"

 

    port => "6379"

 

    db => "1"

 

    key => "system-log-5612"

 

    data_type => "list"

 

    password => "123456"

 

 }}

 

  if [type] == "tomcat-accesslog-5612" {

 

  redis {

 

    host => "192.168.56.12"

 

    port => "6379"

 

    db => "0"

 

    key => "tomcat-accesslog-5612"

 

    data_type => "list"

 

    password => "123456"

 

 }}

 

}
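Both files implement the same routing rule; the mapping from event type to redis db and key can be summarized as follows (a python sketch of the conditionals, not actual logstash code):

```python
ROUTES = {
    "system-log-5612":       {"db": 1, "key": "system-log-5612"},
    "tomcat-accesslog-5612": {"db": 0, "key": "tomcat-accesslog-5612"},
}

def route(event):
    # mirror the logstash 'if [type] == ...' blocks: pick redis db and key by event type
    return ROUTES.get(event.get("type"))

print(route({"type": "system-log-5612"}))  # {'db': 1, 'key': 'system-log-5612'}
print(route({"type": "unknown"}))          # None
```

Keeping this mapping identical on both listeners means filebeat can load-balance across them without events landing in the wrong key.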

 

4.9.3.8: Regenerate the logs:

 

[root@linux-host2 filebeat]# echo "test1" >> /var/log/messages

 

[root@linux-host2 filebeat]# echo "test2" >> /var/log/messages

 

[root@linux-host2 filebeat]# ab -n10 -c5 http://192.168.56.12:8080/webdir/index.html

 

4.9.3.9: Verify redis has the logs:

 

 

 

4.9.3.10: Configure logstash to read logs from redis and write them to elasticsearch:

 

[root@linux-host2 conf.d]# cat  redis-es.conf

 

input {

 

  redis {

 

    host => "192.168.56.12"

 

    port => "6379"

 

    db => "1"

 

    key => "system-log-5612"

 

    data_type => "list"

 

    password => "123456"

 

 }

 

  redis {

 

    host => "192.168.56.12"

 

    port => "6379"

 

    db => "0"

 

    key => "tomcat-accesslog-5612"

 

    data_type => "list"

 

password => "123456"

 

    codec  => "json" #define the codec for JSON-formatted logs

 

 }

 

}

 

 

 

output {

 

  if [type] == "system-log-5612" {

 

    elasticsearch {

 

      hosts => ["192.168.56.12:9200"]

 

      index => "logstash-system-log-5612-%{+YYYY.MM.dd}"

 

}}

 

  if [type] == "tomcat-accesslog-5612" {

 

    elasticsearch {

 

      hosts => ["192.168.56.12:9200"]

 

      index => "logstash-tomcat-accesslog-5612-%{+YYYY.MM.dd}"

 

}}

 

}

 

4.9.3.11: Validate the configuration file and restart the service:

 

[root@linux-host2 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-es.conf  -t

 

[root@linux-host2 conf.d]# systemctl   restart logstash

 

4.9.3.12: Verify the redis data is consumed or shrinking:

 

 

 

4.9.3.13: Verify in the head plugin that the data was written to elasticsearch:

 

 

 

4.9.3.14: In the kibana interface, delete all previously created indices, then add the new ones:

 

Add the system log index:

 

 

 

Add the tomcat access log index:

 

 

 

4.9.3.15: Verify the tomcat access logs in the kibana interface:

 

 

 

4.9.3.16: Verify the system logs in the kibana interface:

 

 

 

4.9.4: Proxy kibana through haproxy:

 

Host2 already has haproxy installed, so simply configure haproxy on host2 and install a kibana instance:

 

4.9.4.1: Install, configure, and start kibana:

 

[root@linux-host2 src]# rpm -ivh kibana-5.3.0-x86_64.rpm

 

[root@linux-host2 src]# grep  "^[a-zA-Z]" /etc/kibana/kibana.yml

 

server.port: 5601

 

server.host: "127.0.0.1"

 

elasticsearch.url: "http://192.168.56.12:9200"

 

[root@linux-host2 src]# systemctl  start kibana

 

[root@linux-host2 src]# systemctl enable  kibana

 

 

 

4.9.4.2: Edit the haproxy configuration file:

 

[root@linux-host2 ~]# cat /etc/haproxy/haproxy.cfg

 

global

 

maxconn 100000

 

chroot /usr/local/haproxy

 

uid 99

 

gid 99

 

daemon

 

nbproc 1

 

pidfile /usr/local/haproxy/run/haproxy.pid

 

log 127.0.0.1 local6 info

 

 

 

defaults

 

option http-keep-alive

 

option  forwardfor

 

maxconn 100000

 

mode http

 

timeout connect 300000ms

 

timeout client  300000ms

 

timeout server  300000ms

 

 

 

listen stats

 

 mode http

 

 bind 0.0.0.0:9999

 

 stats enable

 

 log global

 

 stats uri     /haproxy-status

 

 stats auth    haadmin:q1w2e3r4ys

 

 

 

#frontend web_port

 

frontend web_port

 

        bind 0.0.0.0:80

 

        mode http

 

        option httplog

 

        log global

 

        option  forwardfor

 

###################ACL Setting##########################

 

        acl pc          hdr_dom(host) -i www.elk.com

 

        acl mobile      hdr_dom(host) -i m.elk.com

 

        acl kibana      hdr_dom(host) -i www.kibana5612.com

 

###################USE ACL##############################

 

        use_backend     pc_host        if  pc

 

        use_backend     mobile_host    if  mobile

 

        use_backend     kibana_host    if  kibana

 

########################################################

 

 

 

backend pc_host

 

        mode    http

 

        option  httplog

 

        balance source

 

        server web1  192.168.56.11:80 check inter 2000 rise 3 fall 2 weight 1

 

 

 

backend mobile_host

 

        mode    http

 

        option  httplog

 

        balance source

 

        server web1  192.168.56.11:80 check inter 2000 rise 3 fall 2 weight 1

 

 

 

 

 

backend kibana_host

 

        mode    http

 

        option  httplog

 

        balance source

 

        server web1  127.0.0.1:5601 check inter 2000 rise 3 fall 2 weight 1

 

 

 

4.9.4.3: Restart haproxy:

 

[root@linux-host2 ~]# systemctl   reload haproxy

 

4.9.4.4: Add local domain name resolution:

 

C:\Windows\System32\drivers\etc

 

192.168.56.11 www.kibana5611.com

 

192.168.56.12 www.kibana5612.com

 

 

 

4.9.4.5: Access the domain in a browser:

 

 

 

 

 

4.9.4.6: Verify the data:

 

 

 

 

 

4.9.5: Proxy kibana through nginx with login authentication:

 

Using nginx as a reverse proxy with user login authentication effectively prevents unauthorized access to the kibana page.

 

4.9.5.1: Stop the haproxy from the previous step, then install nginx:

 

[root@linux-host2 src]# systemctl  disable  haproxy

 

[root@linux-host2 src]# systemctl  stop  haproxy

 

[root@linux-host2 src]# tar xf nginx-1.10.3.tar.gz

 

[root@linux-host2 nginx-1.10.3]# ./configure  --prefix=/usr/local/nginx

 

[root@linux-host2 nginx-1.10.3]# make && make install

 

4.9.5.2: Prepare the systemd unit file:

 

[root@linux-host2 nginx-1.10.3]# vim /usr/lib/systemd/system/nginx.service

 

[Unit]

 

Description=The nginx HTTP and reverse proxy server

 

After=network.target remote-fs.target nss-lookup.target

 

 

 

[Service]

 

Type=forking

 

PIDFile=/run/nginx.pid #must match the pid path in the nginx configuration file

 

ExecStartPre=/usr/bin/rm -f /run/nginx.pid

 

ExecStartPre=/usr/sbin/nginx -t

 

ExecStart=/usr/sbin/nginx

 

ExecReload=/bin/kill -s HUP $MAINPID

 

KillSignal=SIGQUIT

 

TimeoutStopSec=5

 

KillMode=process

 

PrivateTmp=true

 

 

 

[Install]

 

WantedBy=multi-user.target

 

 

 

4.9.5.3: Configure and start nginx:

 

[root@linux-host2 nginx-1.10.3]# ln -sv /usr/local/nginx/sbin/nginx  /usr/sbin/

 

[root@linux-host2 nginx-1.10.3]# useradd  www -u 2000

 

[root@linux-host2 nginx-1.10.3]# chown  www.www /usr/local/nginx/ -R

 

[root@linux-host2 nginx-1.10.3]# vim /usr/local/nginx/conf/nginx.conf

 

user  www www;

 

worker_processes  1;

 

pid        /run/nginx.pid; #the pid file path must match the systemd unit file

 

 

 

[root@linux-host2 nginx-1.10.3]# systemctl  start nginx

 

[root@linux-host2 nginx-1.10.3]# systemctl  enable  nginx #can a non-root user start nginx?

 

Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.

 

4.9.5.4: Access the nginx server:

 

 

 

4.9.5.5: Configure nginx to proxy kibana:

 

[root@linux-host2 conf]# mkdir  /usr/local/nginx/conf/conf.d/

 

[root@linux-host2 conf]# vim /usr/local/nginx/conf/nginx.conf

 

include /usr/local/nginx/conf/conf.d/*.conf;

 

[root@linux-host2 conf]# vim /usr/local/nginx/conf/conf.d/kibana5612.conf

 

upstream kibana_server {

 

        server  127.0.0.1:5601 weight=1 max_fails=3  fail_timeout=60;

 

}

 

 

 

server {

 

        listen 80;

 

        server_name www.kibana5612.com;

 

        location / {

 

        proxy_pass http://kibana_server;

 

        proxy_http_version 1.1;

 

        proxy_set_header Upgrade $http_upgrade;

 

        proxy_set_header Connection 'upgrade';

 

        proxy_set_header Host $host;

 

        proxy_cache_bypass $http_upgrade;

 

        }

 

}

 

4.9.5.6: Restart nginx:

 

[root@linux-host2 conf]# chown  www.www /usr/local/nginx/ -R

 

[root@linux-host2 conf]# systemctl  restart nginx

 

4.9.5.7: Verify access:

 

[root@linux-host2 conf]# ab -n100 -c10 http://192.168.56.12:8080/webdir/index.html

 

 

 

 

 

4.9.5.8: Enable login authentication:

 

[root@linux-host2 conf]# yum install httpd-tools -y

 

[root@linux-host2 conf]#  htpasswd -bc  /usr/local/nginx/conf/htpasswd.users zhangjie  123456

 

Adding password for user zhangjie

 

[root@linux-host2 conf]#  htpasswd -b  /usr/local/nginx/conf/htpasswd.users zhangtao  123456

 

Adding password for user zhangtao

 

 

 

[root@linux-host2 conf]# cat /usr/local/nginx/conf/htpasswd.users

 

zhangjie:$apr1$x7K2F2rr$xq8tIKg3JcOUyOzSVuBpz1

 

zhangtao:$apr1$vBg99m3i$hV/ayYIsDTm950tonXEJ11

 

 

 

[root@linux-host2 conf]# vim /usr/local/nginx/conf/conf.d/kibana5612.conf

 

upstream kibana_server {

 

        server  127.0.0.1:5601 weight=1 max_fails=3  fail_timeout=60;

 

}

 

 

 

server {

 

        listen 80;

 

        server_name www.kibana5612.com;

 

        auth_basic "Restricted Access";

 

        auth_basic_user_file /usr/local/nginx/conf/htpasswd.users; 

 

        location / {

 

        proxy_pass http://kibana_server;

 

        proxy_http_version 1.1;

 

        proxy_set_header Upgrade $http_upgrade;

 

        proxy_set_header Connection 'upgrade';

 

        proxy_set_header Host $host;

 

        proxy_cache_bypass $http_upgrade;

 

        }

 

}

 

[root@linux-host2 conf]# chown  www.www /usr/local/nginx/ -R

 

[root@linux-host2 conf]# systemctl  reload nginx

 

 

 

4.9.5.9: Verify login:

 

Reopen the domain served by nginx in a browser; a password is now required to log in.

 

 

 

4.9.5.10: Login is impossible without the password:

 

Clicking cancel only brings up a message that authentication is required.

 

 

 

 

 

 

 

 

 

4.10: Chart the cities of client IPs on a map:

 

4.10.1: Download and extract the geo database file:

 

Logstash 2 used http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz, but logstash 5 switched to http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz, i.e., versions 5 and 2 use different geo database files:

 

[root@linux-host2 ~]# cd /etc/logstash/

 

[root@linux-host2 logstash]# wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz

 

[root@linux-host2 logstash]# gunzip  GeoLite2-City.tar.gz

 

[root@linux-host2 logstash]# tar xf GeoLite2-City.tar

 

4.10.2: Configure logstash to use the geo database:

 

[root@linux-host2 logstash]# cat  conf.d/redis-es.conf

 

input {

 

  redis {

 

    host => "192.168.56.12"

 

    port => "6379"

 

    db => "1"

 

    key => "system-log-5612"

 

    data_type => "list"

 

    password => "123456"

 

 }

 

  redis {

 

    host => "192.168.56.12"

 

    port => "6379"

 

    db => "0"

 

    key => "tomcat-accesslog-5612"

 

    data_type => "list"

 

    password => "123456"

 

    codec  => "json"

 

 }

 

}

 

 

 

filter {

 

        if [type] == "tomcat-accesslog-5612"  {

 

        geoip {

 

                source => "clientip"

 

                target => "geoip"

 

                database => "/etc/logstash/GeoLite2-City_20170502/GeoLite2-City.mmdb"

 

                add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]

 

                add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]

 

        }

 

    mutate {

 

      convert => [ "[geoip][coordinates]", "float"]

 

       }

 

 }

 

}

 

 

 

output {

 

  if [type] == "system-log-5612" {

 

    elasticsearch {

 

      hosts => ["192.168.56.12:9200"]

 

      index => "logstash-system-log-5612-%{+YYYY.MM.dd}"

 

  }}

 

  if [type] == "tomcat-accesslog-5612" {

 

    elasticsearch {

 

      hosts => ["192.168.56.12:9200"]

 

      index => "logstash-tomcat-accesslog-5612-%{+YYYY.MM.dd}"

 

  }

 

# jdbc {

 

#   connection_string => "jdbc:mysql://192.168.56.11/elk?user=elk&password=123456&useUnicode=true&characterEncoding=UTF8"

 

#   statement => ["INSERT INTO elklog(host,clientip,status,AgentVersion) VALUES(?,?,?,?)", "host","clientip","status","AgentVersion"]

 

#  }

 

}

 

}
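The two add_field calls append longitude first and latitude second to geoip.coordinates, and mutate casts both strings to floats, which is the [lon, lat] format kibana's tile map expects. The same transformation in python (the sample coordinates are made up for illustration):

```python
def to_coordinates(geo):
    # geoip adds longitude, then latitude; mutate converts both strings to float
    return [float(geo["longitude"]), float(geo["latitude"])]

print(to_coordinates({"longitude": "116.3883", "latitude": "39.9289"}))  # [116.3883, 39.9289]
```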

 

 

 

4.10.3: Restart the logstash service and write log data:

 

[root@linux-host2 logstash]# systemctl  restart logstash

 

[root@linux-host2 logs]# cat tets.log  >> tomcat_access_log.2017-05-30.log

 

4.10.4: Verify the map data appears in the kibana interface:

 

 

 

 

 

 

 

 

 

4.11: Write logs to a database:

 

Writing to a database persists important data, such as status codes, client IPs, and client browser versions, for later monthly statistics and similar analyses.

 

4.11.1: Install the MySQL database:

 

[root@linux-host1 src]# tar xvf mysql-5.6.34-onekey-install.tar.gz

 

[root@linux-host1 src]# ./mysql-install.sh

 

[root@linux-host1 src]# /usr/local/mysql/bin/mysql_secure_installation

 

4.11.2: Grant user login privileges:

 

[root@linux-host1 src]# ln -s /var/lib/mysql/mysql.sock  /tmp/mysql.sock

 

mysql> create database elk  character set utf8 collate utf8_bin;

 

Query OK, 1 row affected (0.00 sec)

 

 

 

mysql>  grant all privileges on elk.* to elk@"%" identified by '123456';

 

Query OK, 0 rows affected (0.00 sec)

 

 

 

mysql> flush  privileges;

 

Query OK, 0 rows affected (0.00 sec)

 

4.11.3: Test that the user can log in remotely:

 

 

 

4.11.4: Configure the mysql-connector-java package for logstash:

 

MySQL Connector/J is MySQL's official JDBC driver. JDBC (Java Database Connectivity) is a Java API for executing SQL statements that provides unified access to many relational databases; it consists of a set of classes and interfaces written in Java.

 

Download: https://dev.mysql.com/downloads/connector/

 

[root@linux-host1 src]# mkdir -pv  /usr/share/logstash/vendor/jar/jdbc

 

[root@linux-host1 src]# cp mysql-connector-java-5.1.42-bin.jar  /usr/share/logstash/vendor/jar/jdbc/

 

[root@linux-host1 src]# chown  logstash.logstash /usr/share/logstash/vendor/jar/  -R

 

4.11.5: Change the gem source:

 

Overseas gem sources are slow and unreliable to reach from China, and installs often fail, so for a while many people used Taobao's mirror https://ruby.taobao.org/. Although it still works, it is no longer maintained, and its site now recommends https://gems.ruby-china.org:

 

 

 

[root@linux-host1 src]#  yum install rubygems -y

 

[root@linux-host1 src]# gem sources --add https://gems.ruby-china.org/ --remove https://rubygems.org/

 

https://gems.ruby-china.org/ added to sources

 

https://rubygems.org/ removed from sources

 

 

 

[root@linux-host1 src]# gem source list

 

*** CURRENT SOURCES ***

 

 

 

https://gems.ruby-china.org/

 

4.11.6: Install and configure the plugin:

 

[root@linux-host1 src]# /usr/share/logstash/bin/logstash-plugin  list #list all currently installed plugins

 

 

 

[root@linux-host1 src]# /usr/share/logstash/bin/logstash-plugin   install  logstash-output-jdbc

 

 

 

4.11.7: Connect to the database and create the table:

 

Set the default value of the time column to CURRENT_TIMESTAMP.

 

 

 

4.11.8: Save the table:

 

 

 

4.11.9: Configure logstash to write logs to the database:

 

[root@linux-host2 ~]# cat /etc/logstash/conf.d/redis-es.conf

 

input {

 

  redis {

 

    host => "192.168.56.12"

 

    port => "6379"

 

    db => "1"

 

    key => "system-log-5612"

 

    data_type => "list"

 

    password => "123456"

 

 }

 

  redis {

 

    host => "192.168.56.12"

 

    port => "6379"

 

    db => "0"

 

    key => "tomcat-accesslog-5612"

 

    data_type => "list"

 

    password => "123456"

 

    codec  => "json"

 

 }

 

}

 

 

 

output {

 

  if [type] == "system-log-5612" {

 

    elasticsearch {

 

      hosts => ["192.168.56.12:9200"]

 

      index => "logstash-system-log-5612-%{+YYYY.MM.dd}"

 

  }}

 

  if [type] == "tomcat-accesslog-5612" {

 

    elasticsearch {

 

      hosts => ["192.168.56.12:9200"]

 

      index => "logstash-tomcat-accesslog-5612-%{+YYYY.MM.dd}"

 

  }

 

 jdbc {

 

   connection_string => "jdbc:mysql://192.168.56.11/elk?user=elk&password=123456&useUnicode=true&characterEncoding=UTF8"

 

   statement => ["INSERT INTO elklog(host,clientip,status,AgentVersion) VALUES(?,?,?,?)", "host","clientip","status","AgentVersion"]

 

  }}

 

}
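The jdbc statement maps four event fields onto four placeholders. The same mapping can be illustrated with python's built-in sqlite3 (sqlite stands in for MySQL here purely for demonstration, and the sample event values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE elklog(host TEXT, clientip TEXT, status TEXT, AgentVersion TEXT)")

# one collected event, reduced to the four persisted fields
event = {"host": "linux-host2", "clientip": "192.168.56.15",
         "status": "200", "AgentVersion": "ApacheBench/2.3"}
conn.execute("INSERT INTO elklog(host,clientip,status,AgentVersion) VALUES(?,?,?,?)",
             (event["host"], event["clientip"], event["status"], event["AgentVersion"]))

row = conn.execute("SELECT host, clientip, status, AgentVersion FROM elklog").fetchone()
print(row)  # ('linux-host2', '192.168.56.15', '200', 'ApacheBench/2.3')
```

Each placeholder is filled strictly in order, so the column list and the field list must stay aligned.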
