Preface
Broadly speaking, ELK is a combination of three open-source projects: Elasticsearch, Logstash, and Kibana, each of which performs a different function. ELK is also known as the ELK stack; the official domain is elastic.co. The main advantages of the ELK stack are:
Flexible processing: Elasticsearch is a real-time full-text index with powerful search capabilities.
Relatively simple configuration: Elasticsearch exposes everything through a JSON API, Logstash uses modular configuration, and Kibana's configuration file is simpler still.
Efficient retrieval: thanks to an excellent design, queries run in real time yet can respond within seconds even against tens of billions of records.
Linear cluster scaling: both Elasticsearch and Logstash scale out flexibly and linearly.
Polished front end: Kibana's front-end design is attractive and easy to operate.
Elasticsearch is a highly scalable open-source full-text search and analytics engine. It provides real-time full-text search, supports distribution for high availability, offers an API, and can process large volumes of log data from sources such as Nginx, Tomcat, and the system logs.
Logstash implements log collection and forwarding through plugins, supports log filtering, and can parse plain logs as well as custom JSON-formatted logs.
Kibana mainly pulls data from Elasticsearch through its API and renders front-end visualizations of it.
Virtual machines run a minimal installation of CentOS 7.2 x86_64 with 2 vCPUs, 4 GB of RAM or more, and a 50 GB OS disk. Hostnames follow the pattern linux-hostX.exmaple.com, where host1 and host2 are the Elasticsearch servers. To keep the demonstration realistic, each of them also gets a dedicated 50 GB data disk, formatted and mounted at /elk.
[root@localhost ~]# hostnamectl set-hostname linux-hostx.exmaple.com && reboot #each server sets its own hostname, then reboots
[root@localhost ~]# hostnamectl set-hostname linux-host2.exmaple.com && reboot
[root@linux-host1 ~]# mkdir /elk
[root@linux-host1 ~]# mount /dev/sdb /elk/
[root@linux-host1 ~]# echo " /dev/sdb /elk/ xfs defaults 0 0" >> /etc/fstab
hostX ...... (repeat the same steps on the remaining servers)
Disable the firewall and SELinux on all servers, including the web, Redis, and Logstash servers. This avoids all kinds of hard-to-diagnose problems caused by firewall policy or SELinux permissions. Only the commands for host1 and host2 are shown below, but run them on every server.
[root@linux-host1 ~]# systemctl disable firewalld
[root@linux-host1 ~]# systemctl disable NetworkManager
[root@linux-host1 ~]# sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
[root@linux-host1 ~]# echo "* soft nofile 65536" >> /etc/security/limits.conf
[root@linux-host1 ~]# echo "* hard nofile 65536" >> /etc/security/limits.conf
hostX ...... (repeat on the remaining servers)
[root@linux-host1 ~]# vim /etc/hosts
192.168.56.11 linux-host1.exmaple.com
192.168.56.12 linux-host2.exmaple.com
192.168.56.13 linux-host3.exmaple.com
192.168.56.14 linux-host4.exmaple.com
192.168.56.15 linux-host5.exmaple.com
192.168.56.16 linux-host6.exmaple.com
[root@linux-host1 ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
[root@linux-host1 ~]# yum install -y net-tools vim lrzsz tree screen lsof tcpdump wget ntpdate
[root@linux-host1 ~]# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
[root@linux-host1 ~]# echo "*/5 * * * * ntpdate time1.aliyun.com &> /dev/null && hwclock -w" >> /var/spool/cron/root
[root@linux-host1 ~]# systemctl restart crond
[root@linux-host1 ~]# reboot #reboot and verify that all settings took effect; if everything is fine, snapshot the VM so it can be restored easily later
Because the Elasticsearch service requires a Java runtime, Java must be installed on both Elasticsearch servers. Any of the following methods works:
Option 1: install OpenJDK directly with yum
[root@linux-host1 ~]# yum install java-1.8.0*
Option 2: local install from the RPM package downloaded from the Oracle website:
[root@linux-host1 ~]# yum localinstall jdk-8u92-linux-x64.rpm
Option 3: download the binary tarball and define the environment variables in /etc/profile yourself:
Download URL: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
[root@linux-host1 ~]# tar xvf jdk-8u121-linux-x64.tar.gz -C /usr/local/
[root@linux-host1 ~]# ln -sv /usr/local/jdk1.8.0_121 /usr/local/jdk
[root@linux-host1 ~]# vim /etc/profile
export JAVA_HOME=/usr/local/jdk
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
[root@linux-host1 ~]# source /etc/profile
[root@linux-host1 ~]# java -version
java version "1.8.0_121" #confirm that the expected Java version is printed
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
Download URL: https://www.elastic.co/downloads/elasticsearch; the latest version at the time of writing is 5.3.0.
[root@linux-host1 ~]# yum -y localinstall elasticsearch-5.3.0.rpm
[root@linux-host1 ~]# grep "^[a-Z]" /etc/elasticsearch/elasticsearch.yml
cluster.name: ELK-Cluster #ELK cluster name; nodes with the same cluster name belong to the same cluster
node.name: elk-node1 #this node's name within the cluster
path.data: /elk/data #data directory
path.logs: /elk/logs #log directory
bootstrap.memory_lock: true #lock enough memory at startup to keep data from being swapped out
network.host: 0.0.0.0 #listen address
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.56.11", "192.168.56.12"]
[root@linux-host1 ~]# vim /usr/lib/systemd/system/elasticsearch.service #raise the memory-lock limit
LimitMEMLOCK=infinity #uncomment this line
[root@linux-host1 ~]# vim /etc/elasticsearch/jvm.options
22 -Xms2g
23 -Xmx2g #minimum and maximum heap size; why set them to the same value?
https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
#the official documentation recommends a heap no larger than about 30 GB.
#scp the configuration file above to host2 and change the node name
[root@linux-host1 ~]# scp /etc/elasticsearch/elasticsearch.yml 192.168.56.12:/etc/elasticsearch/
[root@linux-host2 ~]# grep "^[a-Z]" /etc/elasticsearch/elasticsearch.yml
cluster.name: ELK-Cluster
node.name: elk-node2 #must be different from host1
path.data: /elk/data
path.logs: /elk/logs
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.56.11", "192.168.56.12"]
Create the data and log directories on each server and change their ownership to elasticsearch:
[root@linux-host1 ~]# mkdir /elk/{data,logs}
[root@linux-host1 ~]# ll /elk/
total 0
drwxr-xr-x 2 root root 6 Apr 18 18:44 data
drwxr-xr-x 2 root root 6 Apr 18 18:44 logs
[root@linux-host1 ~]# chown elasticsearch.elasticsearch /elk/ -R
[root@linux-host1 ~]# ll /elk/
total 0
drwxr-xr-x 2 elasticsearch elasticsearch 6 Apr 18 18:44 data
drwxr-xr-x 2 elasticsearch elasticsearch 6 Apr 18 18:44 logs
[root@linux-host1 ~]# systemctl restart elasticsearch
[root@linux-host1 ~]# tail -f /elk/logs/ELK-Cluster.log
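Once the service is running, a quick way to confirm the node is answering is to query port 9200. A minimal sketch in the same curl-via-subprocess style as the monitoring script later in this text; the IP is this lab's host1 and should be adjusted for your environment:
#!/usr/bin/env python
#coding:utf-8
import json
import subprocess
# the root endpoint returns the node name, cluster name and version as JSON
out = subprocess.Popen("curl -sXGET http://192.168.56.11:9200/", shell=True, stdout=subprocess.PIPE).stdout.read()
info = json.loads(out)
print(info["cluster_name"])         # expect: ELK-Cluster
print(info["version"]["number"])    # expect: 5.3.0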
Plugins extend functionality. The official ones are mostly paid, but community developers also provide plugins for monitoring the state of an Elasticsearch cluster and managing its configuration.
As of Elasticsearch 5.x the head plugin can no longer be installed directly into Elasticsearch; it has to run as a standalone service instead. Git repository: https://github.com/mobz/elasticsearch-head
[root@linux-host1 ~]# yum install -y npm
# NPM (Node Package Manager) is the package management and distribution tool installed together with NodeJS. It makes it easy for JavaScript developers to download, install, upload, and manage packages.
[root@linux-host1 ~]# cd /usr/local/src/
[root@linux-host1 src]# git clone git://github.com/mobz/elasticsearch-head.git
[root@linux-host1 src]# cd elasticsearch-head/
[root@linux-host1 elasticsearch-head]# npm install grunt --save
[root@linux-host2 elasticsearch-head]# ll node_modules/grunt #confirm the files were generated
[root@linux-host1 elasticsearch-head]# npm install #run the installation
[root@linux-host1 elasticsearch-head]# npm run start & #start the service in the background
Enable cross-origin access support, then restart the elasticsearch service:
[root@linux-host1 ~]# vim /etc/elasticsearch/elasticsearch.yml
http.cors.enabled: true #add at the very bottom
http.cors.allow-origin: "*"
[root@linux-host1 ~]# systemctl restart elasticsearch
[root@linux-host1 ~]# yum install docker -y
[root@linux-host1 ~]# systemctl start docker && systemctl enable docker
[root@linux-host1 ~]# docker run -d -p 9100:9100 mobz/elasticsearch-head:5
Then reconnect:
Responsibilities of the master:
collect the status of each node, aggregate cluster state, create and delete indices, manage shard allocation, take nodes offline, and so on.
Responsibilities of the slaves:
replicate data and wait for the chance to become master.
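Which node currently holds the master role can be checked through the _cat API; a small sketch under the same host assumption as above:
#!/usr/bin/env python
#coding:utf-8
import subprocess
# _cat/master prints the node id, host, ip and node name of the elected master
subprocess.call("curl -sXGET http://192.168.56.11:9200/_cat/master?v", shell=True)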
[root@linux-host2 ~]# docker save docker.io/mobz/elasticsearch-head > /opt/elasticsearch-head-docker.tar.gz #export the image
[root@linux-host1 src]# docker load < /opt/elasticsearch-head-docker.tar.gz #import it
[root@linux-host1 src]# docker images #verify
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/mobz/elasticsearch-head 5 b19a5c98e43b 4 months ago 823.9 MB
[root@linux-host1 src]# docker run -d -p 9100:9100 --name elastic docker.io/mobz/elasticsearch-head:5 #start a container from the local docker image
The Git repository is https://github.com/lmenezes/elasticsearch-kopf; it does not yet support Elasticsearch 5.x, but it can be installed on Elasticsearch 1.x or 2.x.
#curl -sXGET http://192.168.56.11:9200/_cluster/health?pretty=true
#The reply is JSON, so the information can be analyzed from Python, for example the status field: green means the cluster is healthy, yellow means replica shards are missing, and red means primary shards are missing.
[root@linux-host1 ~]# cat els-cluster-monitor.py
#!/usr/bin/env python
#coding:utf-8
#Author Zhang Jie
import json
import subprocess
# query the cluster health API and parse the JSON reply
obj = subprocess.Popen(("curl -sXGET http://192.168.56.11:9200/_cluster/health?pretty=true"),shell=True, stdout=subprocess.PIPE)
data = obj.stdout.read()
status = json.loads(data).get("status")
# print a value a monitoring system can threshold on
if status == "green":
    print("50")
else:
    print("100")
[root@linux-host1 ~]# python els-cluster-monitor.py
50
Logstash is an open-source data collection engine that scales horizontally. Within the ELK stack it is the component with the most plugins; it can ingest data from many different sources and ship it, in a unified way, to one or more destinations.
Disable the firewall and SELinux, and install the Java environment:
[root@linux-host3 ~]# systemctl stop firewalld
[root@linux-host3 ~]# systemctl disable firewalld
[root@linux-host3 ~]# sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
[root@linux-host3 ~]# yum install jdk-8u121-linux-x64.rpm
[root@linux-host3 ~]# java -version
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
[root@linux-host3 ~]# reboot
[root@linux-host3 ~]# yum install logstash-5.3.0.rpm
[root@linux-host3 ~]# chown logstash.logstash /usr/share/logstash/data/queue -R #change ownership to the logstash user and group, otherwise the service logs errors at startup
[root@linux-host3 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug }}' #standard input and output
hello
{
"@timestamp" => 2017-04-20T02:30:01.600Z, #當前事件的發生時間,
"@version" => "1", #事件版本號,一個事件就是一個ruby對象
"host" => "linux-host3.exmaple.com", #標記事件發生在哪裏
"message" => "hello" #消息的具體內容
}
[root@linux-host3 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin{} } output { file { path => "/tmp/log-%{+YYYY.MM.dd}messages.gz"}}'
hello
11:01:15.229 [[main]>worker1] INFO logstash.outputs.file - Opening file {:path=>"/tmp/log-2017-04-20messages.gz"}
[root@linux-host3 ~]# tail /tmp/log-2017-04-20messages.gz #open the file to verify the contents
[root@linux-host3 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch {hosts => ["192.168.56.11:9200"] index => "mytest-%{+YYYY.MM.dd}" }}'
[root@linux-host1 ~]# ll /elk/data/nodes/0/indices/
total 0
drwxr-xr-x 8 elasticsearch elasticsearch 59 Apr 19 19:08 JbnPSBGxQ_WbxT8jF5-TLw
drwxr-xr-x 8 elasticsearch elasticsearch 59 Apr 19 20:18 kZk1UbsjTliYfooevuQVdQ
drwxr-xr-x 4 elasticsearch elasticsearch 27 Apr 19 19:24 m6EiWqngS0C1bspg8JtmBg
drwxr-xr-x 8 elasticsearch elasticsearch 59 Apr 20 08:49 YhtJ1dEXSOa0YEKhe6HW8w
Note: this way of collecting logs must not include a type parameter.
Kibana is an open-source project that graphically displays search results by querying the Elasticsearch servers.
It can be installed from the rpm package or the binary tarball.
[root@linux-host1 ~]# yum localinstall kibana-5.3.0-x86_64.rpm
[root@linux-host1 ~]# grep -n "^[a-Z]" /etc/kibana/kibana.yml
2:server.port: 5601 #listen port
7:server.host: "0.0.0.0" #listen address
21:elasticsearch.url: http://192.168.56.11:9200 #address of the elasticsearch server
[root@linux-host1 ~]# systemctl start kibana
[root@linux-host1 ~]# systemctl enable kibana
[root@linux-host1 ~]# ss -tnl | grep 5601
http://192.168.56.11:5601/status
If no bar chart appears by default, it may be because no new data has been written recently; widen the date range or write fresh data through logstash:
Prerequisite: the logstash user needs read permission on the log files being collected and write permission on the files being written.
[root@linux-host3 ~]# cat /etc/logstash/conf.d/system-log.conf
input {
file {
type => "messagelog"
path => "/var/log/messages"
start_position => "beginning" #collect from the start of the file the first time; afterwards only newly appended entries are collected
}
}
output {
file {
path => "/tmp/%{type}.%{+yyyy.MM.dd}"
}
}
[root@linux-host3 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/system-log.conf -t
[root@linux-host3 ~]# echo "test" >> /var/log/messages
[root@linux-host3 ~]# tail /tmp/messagelog.2017.04.20 #verify the output file was generated
{"path":"/var/log/messages","@timestamp":"2017-04-20T07:12:16.001Z","@version":"1","host":"linux-host3.exmaple.com","message":"test","type":"messagelog"}
[root@linux-host2 ~]# chmod 644 /var/log/messages
[root@linux-host3 logstash]# cat /etc/logstash/conf.d/system-log.conf
input {
file {
path => "/var/log/messages" #log path
type => "systemlog" #a unique type for this event stream
start_position => "beginning" #where to start reading on the first collection
stat_interval => "3" #interval, in seconds, between file checks
}
file {
path => "/var/log/secure"
type => "securelog"
start_position => "beginning"
stat_interval => "3"
}
}
output {
if [type] == "systemlog" {
elasticsearch {
hosts => ["192.168.56.11:9200"]
index => "system-log-%{+YYYY.MM.dd}"
}}
if [type] == "securelog" {
elasticsearch {
hosts => ["192.168.56.11:9200"]
index => "secury-log-%{+YYYY.MM.dd}"
}}
}
[root@linux-host3 ~]# chmod 644 /var/log/secure
[root@linux-host3 ~]# chmod 644 /var/log/messages
[root@linux-host3 logstash]# systemctl restart logstash
[root@linux-host3 logstash]# echo "test" >> /var/log/secure
[root@linux-host3 logstash]# echo "test" >> /var/log/messages
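After the restart and the test writes above, both indices should have been created; a sketch that lists them, assuming the same elasticsearch address:
#!/usr/bin/env python
#coding:utf-8
import subprocess
# indices named system-log-YYYY.MM.dd and secury-log-YYYY.MM.dd should appear in the listing
subprocess.call("curl -sXGET http://192.168.56.11:9200/_cat/indices?v", shell=True)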
Collect the Tomcat access logs and error logs for real-time statistics, searchable and displayed in Kibana. Each Tomcat server runs Logstash to collect its logs and forward them to Elasticsearch for analysis, and Kibana presents the results in the front end. The process is as follows:
Install the Java environment, and create a custom web page for testing.
[root@linux-host6 ~]# yum install jdk-8u121-linux-x64.rpm
[root@linux-host6 ~]# cd /usr/local/src/
[root@linux-host6 src]# tar xvf apache-tomcat-8.0.38.tar.gz
[root@linux-host6 src]# ln -sv /usr/local/src/apache-tomcat-8.0.38 /usr/local/tomcat
‘/usr/local/tomcat’ -> ‘/usr/local/src/apache-tomcat-8.0.38’
[root@linux-host6 tomcat]# cd /usr/local/tomcat/webapps/
[root@linux-host6 webapps]#mkdir /usr/local/tomcat/webapps/webdir
[root@linux-host6 webapps]# echo "Tomcat Page" > /usr/local/tomcat/webapps/webdir/index.html
[root@linux-host6 webapps]# ../bin/catalina.sh start
[root@linux-host6 webapps]# ss -tnl | grep 8080
LISTEN 0 100 :::8080 :::*
[root@linux-host6 tomcat]# vim conf/server.xml
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="tomcat_access_log" suffix=".log"
pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>
[root@linux-host6 tomcat]# ./bin/catalina.sh stop
[root@linux-host6 tomcat]# rm -rf logs/* #delete or empty the previous access logs
[root@linux-host6 tomcat]# ./bin/catalina.sh start #start tomcat and open its web page
[root@linux-host6 tomcat]# tail -f logs/localhost_access_log.2017-04-20.txt
Parsing the log format with a Python script:
#!/usr/bin/env python
#coding:utf-8
#Author Zhang Jie
data ={"clientip":"192.168.56.1","ClientUser":"-","authenticated":"-","AccessTime":"[20/May/2017:21:46:22 +0800]","method":"GET /webdir/ HTTP/1.1","status":"200","SendBytes":"12","Query?string":"","partner":"-","AgentVersion":"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"}
ip=data["clientip"]
print(ip)
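The same parsing works directly against the access-log file rather than a hard-coded line; a sketch that assumes the default valve file name shown in the transcript above:
#!/usr/bin/env python
#coding:utf-8
import glob
import json
# read every access-log file tomcat has written so far (path and name are assumptions)
for name in glob.glob("/usr/local/tomcat/logs/localhost_access_log.*.txt"):
    with open(name) as f:
        for line in f:
            event = json.loads(line)          # one JSON object per line
            print(event["clientip"] + " " + event["status"])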
Deploy tomcat, then install and configure logstash:
[root@linux-host6 ~]# yum install logstash-5.3.0.rpm -y
[root@linux-host6 ~]# vim /etc/logstash/conf.d/tomcat.conf
[root@linux-host6 ~]# cat /etc/logstash/conf.d/tomcat.conf
input {
file {
path => "/usr/local/tomcat/logs/localhost_access_log.*.txt"
start_position => "end"
type => "tomct-access-log"
}
file {
path => "/var/log/messages"
start_position => "end"
type => "system-log"
}
}
output {
if [type] == "tomct-access-log" {
elasticsearch {
hosts => ["192.168.56.11:9200"]
index => "logstash-tomcat-5616-access-%{+YYYY.MM.dd}"
codec => "json"
}}
if [type] == "system-log" {
elasticsearch {
hosts => ["192.168.56.12:9200"] #寫入到不通的ES服務器
index => "system-log-5616-%{+YYYY.MM.dd}"
}}
}
[root@linux-host6 ~]# systemctl restart logstash #restart logstash after changing the configuration
[root@linux-host6 ~]# tail -f /var/log/logstash/logstash-plain.log #check the logs
[root@linux-host6 ~]# chmod 644 /var/log/messages #fix the permissions
[root@linux-host6 ~]# systemctl restart logstash #restart logstash again
[root@linux-host6 ~]# echo "2017-02-21" >> /var/log/messages
[root@linux-host3 ~]# yum install httpd-tools -y
[root@linux-host3 ~]# ab -n1000 -c100 http://192.168.56.16:8080/webdir/
Use the multiline codec plugin for multi-line matching. It merges multiple lines into a single event, and its what option controls whether a matching line is merged with the preceding lines or the following ones: https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html
[root@linux-host1 ~]# chown logstash.logstash /usr/share/logstash/data/queue -R
[root@linux-host1 ~]# ll -d /usr/share/logstash/data/queue
drwxr-xr-x 2 logstash logstash 6 Apr 19 20:03 /usr/share/logstash/data/queue
[root@linux-host1 ~]# cat /etc/logstash/conf.d/java.conf
input {
stdin {
codec => multiline {
pattern => "^\[" #a line beginning with [ starts a new event
negate => true #true operates on lines that do NOT match the pattern; false on lines that do
what => "previous" #merge with the preceding lines; use next to merge with the following ones
}}
}
filter { #log filtering; filters placed here apply to all events, while filtering a single stream is done with a conditional on its type
}
output {
stdout {
codec => rubydebug
}}
[root@linux-host1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/java.conf
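The pattern/negate/what semantics are easy to prototype outside Logstash. The sketch below merges every line that does not start with "[" into the previous event, which is exactly what pattern => "^\[", negate => true, what => "previous" expresses; the sample lines are made up:
#!/usr/bin/env python
#coding:utf-8
import re
lines = [
    "[2017-04-21 10:00:00] WARN something happened",
    "java.lang.RuntimeException: boom",
    "    at com.example.Foo.bar(Foo.java:42)",
    "[2017-04-21 10:00:05] INFO next event",
]
pattern = re.compile(r"^\[")
events = []
for line in lines:
    # negate => true: act on lines that do NOT match; what => "previous": append them upward
    if not pattern.match(line) and events:
        events[-1] += "\n" + line
    else:
        events.append(line)
print(len(events))   # 2 merged events instead of 4 raw lines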
[root@linux-host1 ~]# vim /etc/logstash/conf.d/java.conf
input {
file {
path => "/elk/logs/ELK-Cluster.log"
type => "javalog"
start_position => "beginning"
codec => multiline {
pattern => "^\["
negate => true
what => "previous"
}}
}
output {
if [type] == "javalog" {
stdout {
codec => rubydebug
}
file {
path => "/tmp/m.txt"
}}
}
[root@linux-host1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/java.conf -t
The updated file is:
[root@linux-host1 ~]# cat /etc/logstash/conf.d/java.conf
input {
file {
path => "/elk/logs/ELK-Cluster.log"
type => "javalog"
start_position => "beginning"
codec => multiline {
pattern => "^\["
negate => true
what => "previous"
}}
}
output {
if [type] == "javalog" {
elasticsearch {
hosts => ["192.168.56.11:9200"]
index => "javalog-5611-%{+YYYY.MM.dd}"
}}
}
[root@linux-host1 ~]# systemctl restart logstash
Then restart the elasticsearch service. Right now the point is just to generate fresh log entries, to verify that logstash automatically collects newly written logs.
[root@linux-host1 ~]# systemctl restart elasticsearch
[root@linux-host1 ~]# cat /elk/logs/ELK-Cluster.log >> /tmp/1
[root@linux-host1 ~]# cat /tmp/1 >> /elk/logs/ELK-Cluster.log
[root@linux-host1 ~]# cat /var/lib/logstash/plugins/inputs/file/.sincedb_1ced15cfacdbb0380466be84d620085a
134219868 0 2064 29465 #records the inode information of the collected file
[root@linux-host1 ~]# ll -li /elk/logs/ELK-Cluster.log
134219868 -rw-r--r-- 1 elasticsearch elasticsearch 29465 Apr 21 14:33 /elk/logs/ELK-Cluster.log
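The inode shown by ll -li can also be read from Python, which makes it easy to script a check that the sincedb record still points at the right file; a small sketch:
#!/usr/bin/env python
#coding:utf-8
import os
st = os.stat("/elk/logs/ELK-Cluster.log")
# inode and size correspond to the first and last columns of the sincedb record
print("%d %d" % (st.st_ino, st.st_size))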
[root@linux-host6 ~]# yum install gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel
[root@linux-host6 ~]# cd /usr/local/src/
[root@linux-host6 src]# wget http://nginx.org/download/nginx-1.10.3.tar.gz
[root@linux-host6 src]# tar xvf nginx-1.10.3.tar.gz
[root@linux-host6 src]# cd nginx-1.10.3
[root@linux-host6 nginx-1.10.3]# ./configure --prefix=/usr/local/nginx-1.10.3
[root@linux-host6 nginx-1.10.3]# make && make install
[root@linux-host6 nginx-1.10.3]# ln -sv /usr/local/nginx-1.10.3 /usr/local/nginx
‘/usr/local/nginx’ -> ‘/usr/local/nginx-1.10.3’
[root@linux-host6 nginx-1.10.3]# cd /usr/local/nginx
[root@linux-host6 nginx]# vim conf/nginx.conf
48 location /web {
49 root html;
50 index index.html index.htm;
51 }
[root@linux-host6 nginx]# mkdir /usr/local/nginx/html/web
[root@linux-host6 nginx]# echo " Nginx WebPage! " > /usr/local/nginx/html/web/index.html
/usr/local/nginx/sbin/nginx -t #test the configuration file syntax
/usr/local/nginx/sbin/nginx #start the service
/usr/local/nginx/sbin/nginx -s reload #reload the configuration file
[root@linux-host6 nginx]# /usr/local/nginx/sbin/nginx -t
nginx: the configuration file /usr/local/nginx-1.10.3/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx-1.10.3/conf/nginx.conf test is successful
[root@linux-host6 nginx]# /usr/local/nginx/sbin/nginx
[root@linux-host6 nginx]# lsof -i:80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 17719 root 6u IPv4 90721 0t0 TCP *:http (LISTEN)
nginx 17720 nobody 6u IPv4 90721 0t0 TCP *:http (LISTEN)
[root@linux-host6 nginx]# vim conf/nginx.conf
log_format access_json '{"@timestamp":"$time_iso8601",'
'"host":"$server_addr",'
'"clientip":"$remote_addr",'
'"size":$body_bytes_sent,'
'"responsetime":$request_time,'
'"upstreamtime":"$upstream_response_time",'
'"upstreamhost":"$upstream_addr",'
'"http_host":"$host",'
'"url":"$uri",'
'"domain":"$host",'
'"xff":"$http_x_forwarded_for",'
'"referer":"$http_referer",'
'"status":"$status"}';
access_log /var/log/nginx/access.log access_json;
[root@linux-host6 nginx]# mkdir /var/log/nginx
[root@linux-host6 nginx]# /usr/local/nginx/sbin/nginx -t
nginx: the configuration file /usr/local/nginx-1.10.3/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx-1.10.3/conf/nginx.conf test is successful
[root@linux-host6 nginx]# tail /var/log/nginx/access.log
{"@timestamp":"2017-04-21T17:03:09+08:00","host":"192.168.56.16","clientip":"192.168.56.1","size":0,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.56.16","url":"/web/index.html","domain":"192.168.56.16","xff":"-","referer":"-","status":"304"}
[root@linux-host6 conf.d]# vim nginx.conf
input {
file {
path => "/var/log/nginx/access.log"
start_position => "end"
type => "nginx-accesslog"
codec => json
}
}
output {
if [type] == "nginx-accesslog" {
elasticsearch {
hosts => ["192.168.56.11:9200"]
index => "logstash-nginx-accesslog-5616-%{+YYYY.MM.dd}"
}}
}
Collecting logs through Logstash's tcp/udp plugins is typically used to backfill logs missing from Elasticsearch: the lost entries can be written directly to a TCP port and forwarded to the Elasticsearch server.
[root@linux-host6 ~]# cat /etc/logstash/conf.d/tcp.conf
input {
tcp {
port => 9889
type => "tcplog"
mode => "server"
}
}
output {
stdout {
codec => rubydebug
}
}
[root@linux-host6 src]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tcp.conf
NetCat, nc for short, enjoys a "Swiss Army knife" reputation among network tools. It is a simple, reliable utility that reads and writes data over TCP or UDP, with many other capabilities besides.
[root@linux-host1 ~]# yum install nc -y
[root@linux-host1 ~]# echo "nc test" | nc 192.168.56.16 9889
[root@linux-host1 ~]# nc 192.168.56.16 9889 < /etc/passwd
In Unix-like operating systems a device node does not necessarily correspond to a physical device; one without such a correspondence is a pseudo-device. The operating system exposes a variety of features through them, and tcp is just one of many pseudo-devices under /dev.
[root@linux-host1 ~]# echo "pseudo-device" > /dev/tcp/192.168.56.16/9889
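Besides nc and the /dev/tcp pseudo-device, any language that can open a socket can feed the listener. A Python sketch, assuming the tcp input from tcp.conf above is still listening at 192.168.56.16:9889:
#!/usr/bin/env python
#coding:utf-8
import socket
s = socket.create_connection(("192.168.56.16", 9889), timeout=5)
s.sendall(b"socket test message\n")   # appears as the "message" field in the rubydebug output
s.close()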
[root@linux-host6 conf.d]# vim /etc/logstash/conf.d/tcp.conf
input {
tcp {
port => 9889
type => "tcplog"
mode => "server"
}
}
output {
elasticsearch {
hosts => ["192.168.56.11:9200"]
index => "logstash-tcplog-%{+YYYY.MM.dd}"
}
}
[root@linux-host6 conf.d]# systemctl restart logstash
[root@linux-host1 ~]# echo "僞設備1" > /dev/tcp/192.168.56.16/9889
[root@linux-host1 ~]# echo "僞設備2" > /dev/tcp/192.168.56.16/9889
4.5.12: Verify the data:
In CentOS 6 and earlier the service is called syslog; from CentOS 7 it is rsyslog. According to the official introduction, rsyslog (the 2013 release) can forward logs at the level of a million messages per second. Official site: http://www.rsyslog.com/. Confirm the installed version with:
[root@linux-host1 ~]# yum list rsyslog
Installed Packages rsyslog.x86_64 7.4.7-12.el7
[root@linux-host2 ~]# cd /usr/local/src/
[root@linux-host2 src]# wget http://www.haproxy.org/download/1.7/src/haproxy-1.7.5.tar.gz
[root@linux-host2 src]# tar xvf haproxy-1.7.5.tar.gz
[root@linux-host2 src]# cd haproxy-1.7.5
[root@linux-host2 src]# yum install gcc pcre pcre-devel openssl openssl-devel -y
[root@linux-host2 haproxy-1.7.5]# make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 PREFIX=/usr/local/haproxy
[root@linux-host2 haproxy-1.7.5]# make install PREFIX=/usr/local/haproxy
[root@linux-host2 haproxy-1.7.5]# /usr/local/haproxy/sbin/haproxy -v #confirm the version
HA-Proxy version 1.7.5 2017/04/03
Copyright 2000-2017 Willy Tarreau <willy@haproxy.org>
Prepare the startup script (systemd unit file):
[root@linux-host2 haproxy-1.7.5]# vim /usr/lib/systemd/system/haproxy.service
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target
[Service]
EnvironmentFile=/etc/sysconfig/haproxy
ExecStart=/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid $OPTIONS
ExecReload=/bin/kill -USR2 $MAINPID
[Install]
WantedBy=multi-user.target
[root@linux-host2 haproxy-1.7.5]# cp /usr/local/src/haproxy-1.7.5/haproxy-systemd-wrapper /usr/sbin/
[root@linux-host2 haproxy-1.7.5]# cp /usr/local/src/haproxy-1.7.5/haproxy /usr/sbin/
[root@linux-host2 haproxy-1.7.5]# vim /etc/sysconfig/haproxy #system-level configuration file
# Add extra options to the haproxy daemon here. This can be useful for
# specifying multiple configuration files with multiple -f options.
# See haproxy(1) for a complete list of options.
OPTIONS=""
[root@linux-host2 haproxy-1.7.5]# mkdir /etc/haproxy
[root@linux-host2 haproxy-1.7.5]# cat /etc/haproxy/haproxy.cfg
global
maxconn 100000
chroot /usr/local/haproxy
uid 99
gid 99
daemon
nbproc 1
pidfile /usr/local/haproxy/run/haproxy.pid
log 127.0.0.1 local6 info
defaults
option http-keep-alive
option forwardfor
maxconn 100000
mode http
timeout connect 300000ms
timeout client 300000ms
timeout server 300000ms
listen stats
mode http
bind 0.0.0.0:9999
stats enable
log global
stats uri /haproxy-status
stats auth haadmin:123456
#frontend web_port
frontend web_port
bind 0.0.0.0:80
mode http
option httplog
log global
option forwardfor
###################ACL Setting##########################
acl pc hdr_dom(host) -i www.elk.com
acl mobile hdr_dom(host) -i m.elk.com
###################USE ACL##############################
use_backend pc_host if pc
use_backend mobile_host if mobile
########################################################
backend pc_host
mode http
option httplog
balance source
server web1 192.168.56.11:80 check inter 2000 rise 3 fall 2 weight 1
backend mobile_host
mode http
option httplog
balance source
server web1 192.168.56.11:80 check inter 2000 rise 3 fall 2 weight 1
$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 514 #remove the comment markers in front of lines 15/16/19/20
local6.* @@192.168.56.11:5160 #append as the last line; local6 matches the log facility defined in the haproxy configuration (@@ forwards over TCP, a single @ would use UDP)
[root@linux-host2 ~]# systemctl enable haproxy
[root@linux-host2 ~]# systemctl restart haproxy
[root@linux-host2 ~]# systemctl restart rsyslog
Confirm the service processes are running:
C:\Windows\System32\drivers\etc
192.168.56.12 www.elk.com
192.168.56.12 m.elk.com
Start nginx on the back-end web server:
[root@linux-host1 ~]# /usr/local/nginx/sbin/nginx
Confirm the nginx web page is reachable:
Configure Logstash to listen on a local port as the log input. The IP and port that rsyslog on the haproxy server sends to must be identical to the IP:port the Logstash server listens on. In this setup Logstash runs on Host1, while Host2 collects the haproxy access logs and forwards them to Logstash on Host1 for processing. The Logstash configuration is:
[root@linux-host1 conf.d]# cat /etc/logstash/conf.d/rsyslog.conf
input{
syslog {
type => "system-rsyslog-haproxy5612"
port => "5160" #監聽一個本地的端口
}}
output{
stdout{
codec => rubydebug
}}
[root@linux-host1 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/rsyslog.conf
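The syslog input can also be tested without haproxy by emitting a message from Python's standard logging module; a sketch assuming the listener above at 192.168.56.11:5160 (UDP):
#!/usr/bin/env python
#coding:utf-8
import logging
import logging.handlers
logger = logging.getLogger("rsyslog-test")
logger.setLevel(logging.INFO)
# UDP syslog sender pointed at the logstash syslog input
logger.addHandler(logging.handlers.SysLogHandler(address=("192.168.56.11", 5160)))
logger.info("test message for the logstash syslog input")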
Add local name resolution:
[root@linux-host1 ~]# tail -n2 /etc/hosts
192.168.56.12 www.elk.com
192.168.56.12 m.elk.com
[root@linux-host1 ~]# curl http://www.elk.com/nginxweb/index.html
[root@linux-host1 conf.d]# cat /etc/logstash/conf.d/rsyslog.conf
input{
syslog {
type => "ststem-rsyslog"
port => "516"
}}
output{
elasticsearch {
hosts => ["192.168.56.11:9200"]
index => "logstash-rsyslog-%{+YYYY.MM.dd}"
}
}
[root@linux-host6 conf.d]# systemctl restart logstash
Open the head plugin to confirm the index was created:
Dedicate one server to running the Redis service purely as a log cache, for scenarios where the web servers generate large volumes of logs. In the example below the server's memory is nearly exhausted; the cause turns out to be that Redis is holding a large amount of data that has not been read out, occupying most of the memory.
Overall architecture:
[root@linux-host2 ~]# cd /usr/local/src/
[root@linux-host2 src]# tar xvf redis-3.2.8.tar.gz
[root@linux-host2 src]# ln -sv /usr/local/src/redis-3.2.8 /usr/local/redis
‘/usr/local/redis’ -> ‘/usr/local/src/redis-3.2.8’
[root@linux-host2 src]# cd /usr/local/redis/deps
[root@linux-host2 redis]# yum install gcc
[root@linux-host2 deps]# make geohash-int hiredis jemalloc linenoise lua
[root@linux-host2 deps]# cd ..
[root@linux-host2 redis]# make
[root@linux-host2 redis]# vim redis.conf
[root@linux-host2 redis]# grep "^[a-Z]" redis.conf #the main settings changed
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
save ""
rdbcompression no #whether to compress RDB snapshots
rdbchecksum no #whether to checksum RDB snapshots
[root@linux-host2 redis]# ln -sv /usr/local/redis/src/redis-server /usr/bin/
‘/usr/bin/redis-server’ -> ‘/usr/local/redis/src/redis-server’
[root@linux-host2 redis]# ln -sv /usr/local/redis/src/redis-cli /usr/bin/
‘/usr/bin/redis-cli’ -> ‘/usr/local/redis/src/redis-cli’
For security reasons, a Redis connection password must be set in production:
[root@linux-host2 redis]# redis-cli
127.0.0.1:6379> config set requirepass 123456 #set dynamically; lost after a restart
OK
480 requirepass 123456 #make it permanent in redis.conf
[root@linux-host2 redis]# redis-server /usr/local/redis/redis.conf #start the service
[root@linux-host2 redis]# redis-cli
127.0.0.1:6379> ping
PONG
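The same check can be scripted with the redis-py package (pip install redis), which the queue-monitoring script later in this text also uses; host and password are this lab's values:
#!/usr/bin/env python
#coding:utf-8
import redis
r = redis.Redis(host="192.168.56.12", port=6379, db=0, password="123456")
print(r.ping())   # True once requirepass is configured correctly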
Write the Tomcat access logs collected by Logstash on the Tomcat server into the Redis server, then have another Logstash instance read the data back out of Redis and write it into the Elasticsearch servers.
Official documentation: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-redis.html
[root@linux-host2 tomcat]# cat /etc/logstash/conf.d/tomcat_tcp.conf
input {
file {
path => "/usr/local/tomcat/logs/tomcat_access_log.*.log"
type => "tomcat-accesslog-5612"
start_position => "beginning"
stat_interval => "2"
}
tcp {
port => 7800
mode => "server"
type => "tcplog-5612"
}
}
output {
if [type] == "tomcat-accesslog-5612" {
redis {
data_type => "list"
key => "tomcat-accesslog-5612"
host => "192.168.56.12"
port => "6379"
db => "0"
password => "123456"
}}
if [type] == "tcplog-5612" {
redis {
data_type => "list"
key => "tcplog-5612"
host => "192.168.56.12"
port => "6379"
db => "1"
password => "123456"
}}
}
[root@linux-host2 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tomcat_tcp.conf
[root@linux-host1 ~]# echo "pseudo-device 1" > /dev/tcp/192.168.56.12/7800
Configure a dedicated Logstash server that reads the specified keys from Redis and writes the data to Elasticsearch.
[root@linux-host1 conf.d]# cat /etc/logstash/conf.d/redis-to-els.conf
input {
redis {
data_type => "list"
key => "tomcat-accesslog-5612"
host => "192.168.56.12"
port => "6379"
db => "0"
password => "123456"
codec => "json"
}
redis {
data_type => "list"
key => "tcplog-5612"
host => "192.168.56.12"
port => "6379"
db => "1"
password => "123456"
}
}
output {
if [type] == "tomcat-accesslog-5612" {
elasticsearch {
hosts => ["192.168.56.11:9200"]
index => "logstash-tomcat5612-accesslog-%{+YYYY.MM.dd}"
}}
if [type] == "tcplog-5612" {
elasticsearch {
hosts => ["192.168.56.11:9200"]
index => "logstash-tcplog5612-%{+YYYY.MM.dd}"
}}
}
[root@linux-host1 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-to-els.conf
#Note: once testing looks good, start logstash as a normal service
Filebeat is a lightweight, single-purpose log shipper, meant to collect logs on servers that do not have Java installed. It can forward logs to Logstash, Elasticsearch, Redis, and similar destinations for further processing.
Download: https://www.elastic.co/downloads/beats/filebeat
Documentation: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html
First hit the web server to generate some logs, then confirm they are in JSON format, since later steps depend on it:
[root@linux-host2 ~]# ab -n100 -c100 http://192.168.56.16:8080/web
[root@linux-host2 ~]# tail /usr/local/tomcat/logs/localhost_access_log.2017-04-28.txt
{"clientip":"192.168.56.15","ClientUser":"-","authenticated":"-","AccessTime":"[28/Apr/2017:21:16:46 +0800]","method":"GET /webdir/ HTTP/1.0","status":"200","SendBytes":"12","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
{"clientip":"192.168.56.15","ClientUser":"-","authenticated":"-","AccessTime":"[28/Apr/2017:21:16:46 +0800]","method":"GET /webdir/ HTTP/1.0","status":"200","SendBytes":"12","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
[root@linux-host2 ~]# systemctl stop logstash #stop the logstash service (if it is installed)
[root@linux-host2 src]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.3.2-x86_64.rpm
[root@linux-host6 src]# yum install filebeat-5.3.2-x86_64.rpm -y
[root@linux-host2 ~]# cd /etc/filebeat/
[root@linux-host2 filebeat]# cp filebeat.yml filebeat.yml.bak #back up the original configuration file
[root@linux-host2 ~]# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.prospectors:
- input_type: log
paths:
- /var/log/messages
- /var/log/*.log
exclude_lines: ["^DBG","^$"] #lines to exclude from collection
#include_lines: ["^ERR", "^WARN"] #collect only these lines
document_type: system-log-5612 #type; a tag inserted into every log event
output.file:
path: "/tmp"
filename: "filebeat.txt"
[root@linux-host2 filebeat]# systemctl start filebeat
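A quick way to confirm the file output produces one JSON event per line; a sketch assuming the /tmp/filebeat.txt path configured above:
#!/usr/bin/env python
#coding:utf-8
import json
# each line in the output file is one JSON event; print the fields filebeat attached
with open("/tmp/filebeat.txt") as f:
    for line in f:
        event = json.loads(line)
        print(sorted(event.keys()))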
Filebeat can write data directly to a Redis server; in this step it writes to a single key in Redis. Filebeat can also ship to Elasticsearch, Logstash, and other destinations.
[root@linux-host2 ~]# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.prospectors:
- input_type: log
paths:
- /var/log/messages
- /var/log/*.log
exclude_lines: ["^DBG","^$"]
document_type: system-log-5612
output.redis:
hosts: ["192.168.56.12:6379"]
key: "system-log-5612" #爲了後期日誌處理,建議自定義key名稱
db: 1 #使用第幾個庫
timeout: 5 #超時時間
password: 123456 #redis密碼
注意選擇的db是否和filebeat寫入一致
[root@linux-host1 ~]# cat /etc/logstash/conf.d/redis-systemlog-es.conf
input {
redis {
host => "192.168.56.12"
port => "6379"
db => "1"
key => "system-log-5612"
data_type => "list"
}
}
output {
if [type] == "system-log-5612" {
elasticsearch {
hosts => ["192.168.56.11:9200"]
index => "system-log-5612"
}}
}
[root@linux-host1 ~]# systemctl restart logstash #restart the logstash service
In a real environment Redis can accumulate a large backlog when Logstash, for whatever reason, fails to pull the logs in time. The Redis server's memory then fills up, sometimes nearly to exhaustion, as in the following scenario:
Checking the length of the log queue shows a large number of entries piled up in Redis:
#!/usr/bin/env python
#coding:utf-8
#Author Zhang jie
import redis
def redis_conn():
    pool=redis.ConnectionPool(host="192.168.56.12",port=6379,db=0,password="123456")
    conn = redis.Redis(connection_pool=pool)
    data = conn.llen('tomcat-accesslog-5612')  # length of the pending log list
    print(data)
redis_conn()
In the diagram below, read from left to right. Requests to the ELK log platform first hit two nginx+keepalived nodes that provide a highly available load-balanced entry point at the keepalived VIP, so losing one nginx proxy does not interrupt access. nginx forwards the requests to Kibana, and Kibana fetches its data from Elasticsearch, which runs as a two-node cluster with data distributed across both servers. Redis buffers the data temporarily: when the web servers produce logs faster than they can be collected and stored, caching them in Redis (which can itself be a cluster) avoids losses, and the extraction Logstash drains it continuously, preferably during off-peak hours. A MySQL server persists selected data for the long term. The web servers' logs are collected by filebeat and sent to a forwarding-layer Logstash, which writes them into different Redis keys; the extraction-layer Logstash then pulls the data from Redis and writes it, by type, into different Elasticsearch indices; users finally view the collected logs through the nginx-proxied Kibana. As the diagram shows, Redis sits in the middle of the pipeline, and both sides depend on it running correctly.
官方文檔:https://www.elastic.co/guide/en/beats/filebeat/current/logstash-output.html
So far only the system logs are collected. Next, collect the Tomcat access log and the catalina log generated at startup, test multi-line matching, and switch the output to Logstash so that it can route events into different Redis keys by log type. When one filebeat instance collects several kinds of logs at once, say the system log plus the Tomcat access log, the immediate consequence is that each type has to be written to its own Redis key. Start by having Logstash listen on a port, with file output for testing. The configuration is:
[root@linux-host1 conf.d]# cat beats.conf
input {
beats {
port => 5044
}
}
#temporarily switch the output to a file for testing
output {
file {
path => "/tmp/filebeat.txt"
}
}
[root@linux-host1 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beats.conf -t
[root@linux-host1 conf.d]# ll
total 8
-rw-r--r-- 1 root root 139 May 29 17:39 beats.conf
-rw-r--r-- 1 root root 319 May 29 16:16 redis-systemlog-es.conf #keep this config; it is used later to verify multiple filebeat outputs, e.g. to redis and logstash at the same time
[root@linux-host1 conf.d]# systemctl restart logstash #restart the service
[root@linux-host2 ~]# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.prospectors:
- input_type: log
paths:
- /var/log/messages
- /var/log/*.log
exclude_lines: ["^DBG","^$"]
document_type: system-log-5612
output.redis:
hosts: ["192.168.56.12:6379"]
key: "system-log-5612"
db: 1
timeout: 5
password: 123456
output.logstash:
hosts: ["192.168.56.11:5044"] #logstash 服務器地址,能夠是多個
enabled: true #是否開啓輸出至logstash,默認即爲true
worker: 1 #工做線程數
compression_level: 3 #壓縮級別
#loadbalance: true #多個輸出的時候開啓負載
[root@linux-host2 ~]# systemctl restart filebeat
[root@linux-host2 filebeat]# echo "test" >> /var/log/messages
This verifies that filebeat can write to multiple targets at once.
This time the Tomcat access log is collected as well, i.e. both the server's system log and the Tomcat access log, each with its own log type, and everything is forwarded to Logstash for further processing:
[root@linux-host2 filebeat]# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.prospectors:
- input_type: log
paths:
- /var/log/messages
- /var/log/*.log
exclude_lines: ["^DBG","^$"]
document_type: system-log-5612
- input_type: log
paths:
- /usr/local/tomcat/logs/tomcat_access_log.*.log
document_type: tomcat-accesslog-5612
output.logstash:
hosts: ["192.168.56.11:5044","192.168.56.11:5045"] #多個logstash服務器
enabled: true
worker: 1
compression_level: 3
loadbalance: true
[root@linux-host2 ~]# systemctl restart filebeat
[root@linux-host1 conf.d]# cp beats.conf beats-5045.conf
[root@linux-host1 conf.d]# cat beats-5045.conf
input {
beats {
port => 5045 #listen on an additional port
codec => "json"
}
}
output {
file {
path => "/tmp/filebeat.txt"
}
}
[root@linux-host1 conf.d]# systemctl restart logstash
[root@linux-host2 filebeat]# echo "test" >> /var/log/messages
[root@linux-host2 filebeat]# ab -n10 -c5 http://192.168.56.12:8080/webdir/index.html
The output sections are identical; only the input port differs, 5044 in one and 5045 in the other.
[root@linux-host1 conf.d]# cat beats.conf
input {
beats {
port => 5044
codec => "json"
}
}
output {
if [type] == "system-log-5612" {
redis {
host => "192.168.56.12"
port => "6379"
db => "1"
key => "system-log-5612"
data_type => "list"
password => "123456"
}}
if [type] == "tomcat-accesslog-5612" {
redis {
host => "192.168.56.12"
port => "6379"
db => "0"
key => "tomcat-accesslog-5612"
data_type => "list"
password => "123456"
}}
}
[root@linux-host1 conf.d]# cat beats-5045.conf
input {
beats {
port => 5045
codec => "json"
}
}
output {
if [type] == "system-log-5612" {
redis {
host => "192.168.56.12"
port => "6379"
db => "1"
key => "system-log-5612"
data_type => "list"
password => "123456"
}}
if [type] == "tomcat-accesslog-5612" {
redis {
host => "192.168.56.12"
port => "6379"
db => "0"
key => "tomcat-accesslog-5612"
data_type => "list"
password => "123456"
}}
}
[root@linux-host2 filebeat]# echo "test1" >> /var/log/messages
[root@linux-host2 filebeat]# echo "test2" >> /var/log/messages
[root@linux-host2 filebeat]# ab -n10 -c5 http://192.168.56.12:8080/webdir/index.html
[root@linux-host2 conf.d]# cat redis-es.conf
input {
redis {
host => "192.168.56.12"
port => "6379"
db => "1"
key => "system-log-5612"
data_type => "list"
password => "123456"
}
redis {
host => "192.168.56.12"
port => "6379"
db => "0"
key => "tomcat-accesslog-5612"
data_type => "list"
password => "123456"
codec => "json" #對於json格式的日誌定義編碼格式
}
}
output {
if [type] == "system-log-5612" {
elasticsearch {
hosts => ["192.168.56.12:9200"]
index => "logstash-system-log-5612-%{+YYYY.MM.dd}"
}}
if [type] == "tomcat-accesslog-5612" {
elasticsearch {
hosts => ["192.168.56.12:9200"]
index => "logstash-tomcat-accesslog-5612-%{+YYYY.MM.dd}"
}}
}
[root@linux-host2 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-es.conf -t
[root@linux-host2 conf.d]# systemctl restart logstash
4.9.3.13: Verify in the head plugin that the data reached elasticsearch:
Add the system-log index:
Add the tomcat access-log index:
Haproxy is already installed on Host2, so it only needs to be configured there, plus a Kibana install:
[root@linux-host2 src]# rpm -ivh kibana-5.3.0-x86_64.rpm
[root@linux-host2 src]# grep "^[a-Z]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "127.0.0.1"
elasticsearch.url: "http://192.168.56.12:9200"
[root@linux-host2 src]# systemctl start kibana
[root@linux-host2 src]# systemctl enable kibana
[root@linux-host2 ~]# cat /etc/haproxy/haproxy.cfg
global
maxconn 100000
chroot /usr/local/haproxy
uid 99
gid 99
daemon
nbproc 1
pidfile /usr/local/haproxy/run/haproxy.pid
log 127.0.0.1 local6 info
defaults
option http-keep-alive
option forwardfor
maxconn 100000
mode http
timeout connect 300000ms
timeout client 300000ms
timeout server 300000ms
listen stats
mode http
bind 0.0.0.0:9999
stats enable
log global
stats uri /haproxy-status
stats auth haadmin:q1w2e3r4ys
#frontend web_port
frontend web_port
bind 0.0.0.0:80
mode http
option httplog
log global
option forwardfor
###################ACL Setting##########################
acl pc hdr_dom(host) -i www.elk.com
acl mobile hdr_dom(host) -i m.elk.com
acl kibana hdr_dom(host) -i www.kibana5612.com
###################USE ACL##############################
use_backend pc_host if pc
use_backend mobile_host if mobile
use_backend kibana_host if kibana
########################################################
backend pc_host
mode http
option httplog
balance source
server web1 192.168.56.11:80 check inter 2000 rise 3 fall 2 weight 1
backend mobile_host
mode http
option httplog
balance source
server web1 192.168.56.11:80 check inter 2000 rise 3 fall 2 weight 1
backend kibana_host
mode http
option httplog
balance source
server web1 127.0.0.1:5601 check inter 2000 rise 3 fall 2 weight 1
[root@linux-host2 ~]# systemctl reload haproxy
C:\Windows\System32\drivers\etc
192.168.56.11 www.kibana5611.com
192.168.56.12 www.kibana5612.com
Using nginx as a reverse proxy and adding user authentication effectively prevents random visitors from reaching the Kibana page.
[root@linux-host2 src]# systemctl disable haproxy
[root@linux-host2 src]# tar xf nginx-1.10.3.tar.gz
[root@linux-host2 nginx-1.10.3]# ./configure --prefix=/usr/local/nginx
[root@linux-host2 nginx-1.10.3]# make && make install
[root@linux-host2 nginx-1.10.3]# vim /usr/lib/systemd/system/nginx.service
[Unit]
Description=The nginx HTTP and reverse proxy server
After=network.target remote-fs.target nss-lookup.target
[Service]
Type=forking
PIDFile=/run/nginx.pid #must match the pid setting in the nginx configuration file
ExecStartPre=/usr/bin/rm -f /run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
KillSignal=SIGQUIT
TimeoutStopSec=5
KillMode=process
PrivateTmp=true
[Install]
WantedBy=multi-user.target
[root@linux-host2 nginx-1.10.3]# ln -sv /usr/local/nginx/sbin/nginx /usr/sbin/
[root@linux-host2 nginx-1.10.3]# useradd www -u 2000
[root@linux-host2 nginx-1.10.3]# chown www.www /usr/local/nginx/ -R
[root@linux-host2 nginx-1.10.3]# vim /usr/local/nginx/conf/nginx.conf
user www www;
worker_processes 1;
pid /run/nginx.pid; #change the pid file path; it must match the systemd unit
[root@linux-host2 nginx-1.10.3]# systemctl start nginx
[root@linux-host2 nginx-1.10.3]# systemctl enable nginx #can an unprivileged user start nginx?
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
[root@linux-host2 conf]# mkdir /usr/local/nginx/conf/conf.d/
[root@linux-host2 conf]# vim /usr/local/nginx/conf/nginx.conf
include /usr/local/nginx/conf/conf.d/*.conf;
[root@linux-host2 conf]# vim /usr/local/nginx/conf/conf.d/kibana5612.conf
upstream kibana_server {
server 127.0.0.1:5601 weight=1 max_fails=3 fail_timeout=60;
}
server {
listen 80;
server_name www.kibana5612.com;
location / {
proxy_pass http://kibana_server;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
[root@linux-host2 conf]# chown www.www /usr/local/nginx/ -R
[root@linux-host2 conf]# systemctl restart nginx
[root@linux-host2 conf]# ab -n100 -c10 http://192.168.56.12:8080/webdir/index.html
[root@linux-host2 conf]# yum install httpd-tools -y
[root@linux-host2 conf]# htpasswd -bc /usr/local/nginx/conf/htpasswd.users zhangjie 123456
Adding password for user zhangjie
[root@linux-host2 conf]# htpasswd -b /usr/local/nginx/conf/htpasswd.users zhangtao 123456
Adding password for user zhangtao
[root@linux-host2 conf]# cat /usr/local/nginx/conf/htpasswd.users
zhangjie:$apr1$x7K2F2rr$xq8tIKg3JcOUyOzSVuBpz1
zhangtao:$apr1$vBg99m3i$hV/ayYIsDTm950tonXEJ11
[root@linux-host2 conf]# vim /usr/local/nginx/conf/conf.d/kibana5612.conf
upstream kibana_server {
server 127.0.0.1:5601 weight=1 max_fails=3 fail_timeout=60;
}
server {
listen 80;
server_name www.kibana5612.com;
auth_basic "Restricted Access";
auth_basic_user_file /usr/local/nginx/conf/htpasswd.users;
location / {
proxy_pass http://kibana_server;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
[root@linux-host2 conf]# chown www.www /usr/local/nginx/ -R
[root@linux-host2 conf]# systemctl reload nginx
Reopen the domain nginx listens on in the browser; a username and password are now required to log in.
If you click cancel, an authentication-required message is shown instead.
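Basic auth can also be verified non-interactively; a curl-via-subprocess sketch, assuming www.kibana5612.com resolves to this host as configured in the hosts file above:
#!/usr/bin/env python
#coding:utf-8
import subprocess
# without credentials nginx should return 401, with valid credentials 200
subprocess.call('curl -s -o /dev/null -w "%{http_code}\\n" http://www.kibana5612.com/', shell=True)
subprocess.call('curl -s -o /dev/null -w "%{http_code}\\n" -u zhangjie:123456 http://www.kibana5612.com/', shell=True)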
Logstash 2 used http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz, but Logstash 5 switched to http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz; that is, versions 2 and 5 use different GeoIP database files:
[root@linux-host2 ~]# cd /etc/logstash/
[root@linux-host2 logstash]# wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
[root@linux-host2 logstash]# gunzip GeoLite2-City.tar.gz
[root@linux-host2 logstash]# tar xf GeoLite2-City.tar
[root@linux-host2 logstash]# cat conf.d/redis-es.conf
input {
redis {
host => "192.168.56.12"
port => "6379"
db => "1"
key => "system-log-5612"
data_type => "list"
password => "123456"
}
redis {
host => "192.168.56.12"
port => "6379"
db => "0"
key => "tomcat-accesslog-5612"
data_type => "list"
password => "123456"
codec => "json"
}
}
filter {
if [type] == "tomcat-accesslog-5612" {
geoip {
source => "clientip"
target => "geoip"
database => "/etc/logstash/GeoLite2-City_20170502/GeoLite2-City.mmdb"
add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
}
mutate {
convert => [ "[geoip][coordinates]", "float"]
}
}
}
output {
if [type] == "system-log-5612" {
elasticsearch {
hosts => ["192.168.56.12:9200"]
index => "logstash-system-log-5612-%{+YYYY.MM.dd}"
}}
if [type] == "tomcat-accesslog-5612" {
elasticsearch {
hosts => ["192.168.56.12:9200"]
index => "logstash-tomcat-accesslog-5612-%{+YYYY.MM.dd}"
}
# jdbc {
# connection_string => "jdbc:mysql://192.168.56.11/elk?user=elk&password=123456&useUnicode=true&characterEncoding=UTF8"
# statement => ["INSERT INTO elklog(host,clientip,status,AgentVersion) VALUES(?,?,?,?)", "host","clientip","status","AgentVersion"]
# }
}
}
[root@linux-host2 logstash]# systemctl restart logstash
[root@linux-host2 logs]# cat tets.log >> tomcat_access_log.2017-05-30.log
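Lookups against the downloaded GeoLite2 database can be tested outside Logstash with the geoip2 Python package (pip install geoip2); the database path matches the filter above, and the IP is just an example public address:
#!/usr/bin/env python
#coding:utf-8
import geoip2.database
reader = geoip2.database.Reader("/etc/logstash/GeoLite2-City_20170502/GeoLite2-City.mmdb")
resp = reader.city("8.8.8.8")   # substitute a clientip value from the access log
print("%s %s %s" % (resp.country.name, resp.location.longitude, resp.location.latitude))
reader.close()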
Writing to the database persists important fields, such as status code, client IP, and client browser version, so that statistics (for example, monthly reports) can be produced later.
[root@linux-host1 src]# tar xvf mysql-5.6.34-onekey-install.tar.gz
[root@linux-host1 src]# ./mysql-install.sh
[root@linux-host1 src]# /usr/local/mysql/bin/mysql_secure_installation
[root@linux-host1 src]# ln -s /var/lib/mysql/mysql.sock /tmp/mysql.sock
mysql> create database elk character set utf8 collate utf8_bin;
Query OK, 1 row affected (0.00 sec)
mysql> grant all privileges on elk.* to elk@"%" identified by '123456';
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
4.11.4: Configure the mysql-connector-java package for logstash:
MySQL Connector/J is the official MySQL JDBC driver. JDBC (Java Data Base Connectivity) is a Java API for executing SQL statements that provides uniform access to many relational databases; it consists of a set of classes and interfaces written in Java.
Official download: https://dev.mysql.com/downloads/connector/
[root@linux-host1 src]# mkdir -pv /usr/share/logstash/vendor/jar/jdbc
[root@linux-host1 src]# cp mysql-connector-java-5.1.42-bin.jar /usr/share/logstash/vendor/jar/jdbc/
[root@linux-host1 src]# chown logstash.logstash /usr/share/logstash/vendor/jar/ -R
Because of network conditions, the overseas gem sources are slow and unreliable from inside China and installs often fail, so for a while many people used Taobao's gem mirror https://ruby.taobao.org/. It still works but is no longer maintained, and its official page now recommends https://gems.ruby-china.org instead.
[root@linux-host1 src]# yum install gem
[root@linux-host1 src]# gem sources --add https://gems.ruby-china.org/ --remove https://rubygems.org/
https://gems.ruby-china.org/ added to sources
https://rubygems.org/ removed from sources
[root@linux-host1 src]# gem sources --list
*** CURRENT SOURCES ***
https://gems.ruby-china.org/
[root@linux-host1 src]# /usr/share/logstash/bin/logstash-plugin list #list all currently installed plugins
[root@linux-host1 src]# /usr/share/logstash/bin/logstash-plugin install logstash-output-jdbc
Set the default value of the time column to CURRENT_TIMESTAMP.
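The elklog table itself is never shown in the text; below is a sketch of one possible schema matching the INSERT statement in the jdbc output, created here with the pymysql package (pip install pymysql) — the column types are assumptions:
#!/usr/bin/env python
#coding:utf-8
import pymysql
# connect with the elk account granted earlier
conn = pymysql.connect(host="192.168.56.11", user="elk", password="123456", db="elk", charset="utf8")
cur = conn.cursor()
cur.execute("""
CREATE TABLE IF NOT EXISTS elklog (
    host         VARCHAR(128),
    clientip     VARCHAR(64),
    status       VARCHAR(16),
    AgentVersion VARCHAR(512),
    time         TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)""")
conn.commit()
conn.close()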
[root@linux-host2 ~]# cat /etc/logstash/conf.d/redis-es.conf
input {
redis {
host => "192.168.56.12"
port => "6379"
db => "1"
key => "system-log-5612"
data_type => "list"
password => "123456"
}
redis {
host => "192.168.56.12"
port => "6379"
db => "0"
key => "tomcat-accesslog-5612"
data_type => "list"
password => "123456"
codec => "json"
}
}
output {
if [type] == "system-log-5612" {
elasticsearch {
hosts => ["192.168.56.12:9200"]
index => "logstash-system-log-5612-%{+YYYY.MM.dd}"
}}
if [type] == "tomcat-accesslog-5612" {
elasticsearch {
hosts => ["192.168.56.12:9200"]
index => "logstash-tomcat-accesslog-5612-%{+YYYY.MM.dd}"
}
jdbc {
connection_string => "jdbc:mysql://192.168.56.11/elk?user=elk&password=123456&useUnicode=true&characterEncoding=UTF8"
statement => ["INSERT INTO elklog(host,clientip,status,AgentVersion) VALUES(?,?,?,?)", "host","clientip","status","AgentVersion"]
}}
}