ELK Service Basics

Official documentation

 

What is ELK?

 

  Loosely speaking, ELK is a combination of three open-source packages: Elasticsearch, Logstash, and Kibana, each serving a different purpose. The combination is also known as the ELK stack; the official domain is elastic.co. The main strengths of the ELK stack are:
Flexible processing: Elasticsearch is a real-time full-text index with powerful search capabilities.
Relatively simple configuration: Elasticsearch exposes everything through JSON APIs, Logstash uses template-based configuration, and Kibana's configuration file is simpler still.
Efficient retrieval: thanks to a solid design, even though every query runs in real time, second-level responses are possible on tens of billions of documents.
Linear cluster scaling: both Elasticsearch and Logstash scale out linearly.
Polished front end: Kibana's front end looks good and is easy to operate.

What is Elasticsearch?

  A highly scalable open-source full-text search and analytics engine. It provides real-time full-text search, runs distributed for high availability, exposes an API, and can process large volumes of log data such as Nginx, Tomcat, and system logs.

What is Logstash?

  Collects and forwards logs via plugins, supports filtering, and can parse both plain logs and custom JSON-formatted logs.

What is Kibana?

  Mainly queries data from Elasticsearch through its API and renders it as front-end visualizations.

 

Beats is more lightweight than Logstash and does not require a Java environment.

1. Elasticsearch deployment

Environment initialization

Virtual machines with a minimal install of CentOS 7.2 x86_64: 2 vCPUs, 4 GB of RAM or more, a 50 GB OS disk. Hostnames follow the pattern
linux-hostX.example.com, where host1 and host2 are the Elasticsearch servers. For a better demonstration, each gets an extra
50 GB data disk, formatted and mounted at /data.

1.1 Hostnames and disk mounts

# Set the hostnames
hostnamectl set-hostname linux-host1.example.com && reboot
hostnamectl set-hostname linux-host2.example.com && reboot

# Mount the data disk
mkdir /data
mkfs.xfs /dev/sdb 
blkid /dev/sdb
/dev/sdb: UUID="bb780805-efed-43ff-84cb-a0c59c6f4ef9" TYPE="xfs" 

vim /etc/fstab
UUID="bb780805-efed-43ff-84cb-a0c59c6f4ef9" /data xfs   defaults        0 0
mount -a
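The fstab entry above is derived mechanically from the `blkid` output. A small sketch of that derivation (the helper function is hypothetical; the UUID is the sample value from this page):

```python
# Hypothetical helper: turn one line of `blkid` output into an /etc/fstab entry.
import re

def fstab_entry(blkid_line, mountpoint="/data", opts="defaults"):
    """Extract UUID and filesystem type from a blkid line and build an fstab line."""
    uuid = re.search(r'UUID="([^"]+)"', blkid_line).group(1)
    fstype = re.search(r'TYPE="([^"]+)"', blkid_line).group(1)
    return f'UUID="{uuid}" {mountpoint} {fstype} {opts} 0 0'

line = '/dev/sdb: UUID="bb780805-efed-43ff-84cb-a0c59c6f4ef9" TYPE="xfs"'
print(fstab_entry(line))
```

Mounting by UUID rather than by /dev/sdb keeps the entry stable if the kernel renumbers the disks.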

# Configure local name resolution on each server
vim /etc/hosts
192.168.182.137 linux-host1.example.com
192.168.182.138 linux-host2.example.com

 

1.2 Disable the firewall and SELinux, and raise the file-descriptor limit

1.3 Configure the epel repository, install basic tools, and synchronize time

yum install -y net-tools vim lrzsz tree screen lsof tcpdump wget ntpdate
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# Add a cron job
echo "*/5 * * * * ntpdate time1.aliyun.com &>/dev/null && hwclock -w" >> /var/spool/cron/root
systemctl restart crond

1.4 Install Elasticsearch

Install Elasticsearch on host1 and host2.
Prepare the Java environment on both servers:
Option 1: install OpenJDK directly with yum
yum install java-1.8.0*
Option 2: yum localinstall the rpm package downloaded from the Oracle site
yum localinstall jdk-8u92-linux-x64.rpm
Option 3: download the binary tarball and set the profile environment variables yourself

tar xvf jdk-8u121-linux-x64.tar.gz -C /usr/local/
ln -sv /usr/local/jdk-8u121-linux-x64 /usr/local/jdk
vim /etc/profile

java -version

Install Elasticsearch:

yum install jdk-8u121-linux-x64.rpm elasticsearch-5.4.0.rpm 

1.5 Configure Elasticsearch

grep "^[a-zA-Z]" /etc/elasticsearch/elasticsearch.yml 
cluster.name: elk-cluster
node.name: elk-node1
path.data: /data/elkdata
path.logs: /data/logs
bootstrap.memory_lock: true
network.host: 192.168.152.138
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.152.138", "192.168.152.139"]

On the other node, only the node name and listen address need to change:

grep "^[a-zA-Z]" /etc/elasticsearch/elasticsearch.yml 
cluster.name: elk-cluster
node.name: elk-node2
path.data: /data/elkdata
path.logs: /data/logs
bootstrap.memory_lock: true
network.host: 192.168.152.139
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.152.138", "192.168.152.139"]

Create the data and log directories and set ownership:

mkdir /data/elkdata
mkdir /data/logs
chown -R elasticsearch.elasticsearch /data/

Enable the memory-lock limit in the service unit:

vim /usr/lib/systemd/system/elasticsearch.service
LimitMEMLOCK=infinity

Note: without this override, the service will fail to start because of the bootstrap.memory_lock: true setting.

Adjust the heap size; the default is 2 GB:

vim /etc/elasticsearch/jvm.options 
-Xms2g
-Xmx2g

Notes:

Set Xmx to no more than 50% of physical RAM, so enough physical RAM is left for the kernel filesystem cache.
Do not give Elasticsearch more than 32 GB of heap.
https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
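The two heap rules above can be captured in a tiny sizing helper (an illustrative sketch, not an official formula; the 31 GB cap stays safely under the 32 GB compressed-oops threshold):

```python
def recommended_heap_gb(ram_gb):
    """Half of physical RAM, capped at 31 GB so compressed object pointers stay enabled."""
    return min(ram_gb // 2, 31)

print(recommended_heap_gb(4))    # the 4 GB lab VMs used here -> 2
print(recommended_heap_gb(128))  # a large machine still caps at 31
```

For the lab VMs in this document the -Xms2g/-Xmx2g default is therefore already the right value.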

Start the service:

systemctl restart elasticsearch.service 
systemctl enable elasticsearch.service

# Check the status

curl -sXGET http://192.168.152.139:9200/_cluster/health?pretty=true

2. Deploying the head plugin for Elasticsearch

Plugins extend functionality. The vendor provides some (mostly paid), and community developers provide others; they enable monitoring and managing the state and configuration of an Elasticsearch cluster.

From Elasticsearch 5.x on, the head plugin can no longer be installed directly; it has to be started as a standalone service. Git repository: https://github.com/mobz/elasticsearch-head

# NPM (Node Package Manager) is the package manager and distribution tool bundled with NodeJS; it makes it easy for JavaScript developers to download, install, upload, and manage packages.

Install and deploy:

cd /usr/local/src
git clone git://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head/
yum install npm -y
npm install grunt --save
ll node_modules/grunt # confirm the files were generated
npm install # run the installation
npm run start & # start the service in the background

Edit the Elasticsearch configuration:

Enable cross-origin access support, then restart the Elasticsearch service.

vim /etc/elasticsearch/elasticsearch.yml
http.cors.enabled: true # append at the bottom of the file
http.cors.allow-origin: "*"

Restart the service:

systemctl restart elasticsearch
systemctl enable elasticsearch

In the head view, the thick-bordered shards are primaries; the thin-bordered ones are replicas, used for redundancy.

2.1 Running the head plugin in Docker

Install Docker:

yum install docker -y
systemctl start docker && systemctl enable docker

Pull and run the image (docker run fetches it if missing):

docker run -p 9100:9100 mobz/elasticsearch-head:5

If you already have the image archive, import it instead:

# docker load < elasticsearch-head-docker.tar.gz 

List images:

docker images

Start the container:

docker run -d -p 9100:9100 docker.io/mobz/elasticsearch-head:5

Monitoring script:

vim els-cluster-monitor.py 
#!/usr/bin/env python
#coding:utf-8

# Print 50 when the cluster is green, 100 otherwise (e.g. for a monitoring item).
import json
import subprocess

obj = subprocess.Popen(("curl -sXGET http://192.168.152.139:9200/_cluster/health?pretty=true"), shell=True, stdout=subprocess.PIPE)
data = obj.stdout.read()
status = json.loads(data).get("status")
if status == "green":
    print("50")
else:
    print("100")
Note:
To browse data through head, edit vendor.js inside the container (the overlay2 paths will differ per host):
/var/lib/docker/overlay2/840b5e6d4ef64ecfdccfad5aa6d061a43f0efb10dfdff245033e90ce9b524f06/diff/usr/src/app/_site/vendor.js
/var/lib/docker/overlay2/048d9106359b9e263e74246c56193a5852db6a5b99e4a0f9dd438e657ced78d3/diff/usr/src/app/_site/vendor.js

and change application/x-www-form-urlencoded to application/json.

 

3. Logstash deployment

Logstash environment preparation and installation:

Logstash is an open-source data collection engine that scales horizontally, and it has more plugins than any other ELK component. It can ingest data from many different sources and ship the unified output to one or several destinations.

Environment preparation:

Disable the firewall and SELinux, and install the Java environment.

sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
yum install jdk-8u121-linux-x64.rpm

Install Logstash:

yum install logstash-5.3.0.rpm -y

# Change ownership to the logstash user and group, or the service logs errors at startup

chown logstash.logstash /usr/share/logstash/data/queue -R

Test Logstash:

Test standard input and output:

/usr/share/logstash/bin/logstash -e 'input{ stdin{} } output{stdout{ codec=>rubydebug}}' # stdin in, stdout out
hello
{
    "@timestamp" => 2017-11-18T13:49:41.425Z,	#當前事件的發生時間,
      "@version" => "1",	#事件版本號,一個事件就是一個ruby對象
          "host" => "linux-host2.example.com",	#標記事件發生在哪裏
       "message" => "hello"		#消息的具體內容
}

# Don't worry about the timestamp; the browser will convert it for display.

# Write to a gzip-compressed file

/usr/share/logstash/bin/logstash -e 'input{ stdin{} } output{file{path=>"/tmp/test-%{+YYYY.MM.dd}.log.tar.gz" gzip=>true}}'

# Test output to Elasticsearch

/usr/share/logstash/bin/logstash -e 'input{ stdin{} } output{ elasticsearch {hosts => ["192.168.152.138:9200"] index => "logstash-test-%{+YYYY.MM.dd}"}}'

# Where indices are stored on disk

ll /data/elkdata/nodes/0/indices/
total 0
drwxr-xr-x. 8 elasticsearch elasticsearch 65 Nov 18 20:17 W8VO0wNfTDy9h37CYpu17g

# There are two ways to delete an index:

one through the elasticsearch-head interface, the other through the API.
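Deleting via the API is an HTTP DELETE against the index name. A minimal sketch, assuming the lab's host and a hypothetical index name (the request is built but deliberately not sent):

```python
# Build (but do not send) an HTTP DELETE for a given index.
import urllib.request

def delete_index_request(host, index):
    """Return an urllib Request equivalent to: curl -XDELETE http://host:9200/index"""
    return urllib.request.Request(f"http://{host}:9200/{index}", method="DELETE")

req = delete_index_request("192.168.152.138", "logstash-test-2017.11.18")
# urllib.request.urlopen(req)   # uncommenting this would actually delete the index
```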

 

Logstash configuration: collecting system logs

# Notes: files end in .conf, names are arbitrary, and one configuration file can collect several logs
vim /etc/logstash/conf.d/system.conf
input {
file {
   path => "/var/log/messages"
   type => "systemlog"		# log type
   start_position => "beginning"	# read from the beginning the first time; afterwards only newly appended lines
   stat_interval => "2"		# how often to poll the file, in seconds
 }
}

output {
   elasticsearch {	# output plugin name
    hosts => ["192.168.152.138:9200"]	
    index => "logstash-systemlog-%{+YYYY.MM.dd}"	# the logstash- prefix matters: the mapping templates used later for client-city map display expect index names starting with logstash
  }
}
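The `%{+YYYY.MM.dd}` suffix in the index option expands to the event's date, producing one index per day; conceptually:

```python
# How the daily index name is formed from a prefix and a date.
from datetime import date

def daily_index(prefix, day):
    """Mimic logstash's %{+YYYY.MM.dd} index-name expansion."""
    return f"{prefix}-{day.strftime('%Y.%m.%d')}"

print(daily_index("logstash-systemlog", date(2017, 11, 18)))  # -> logstash-systemlog-2017.11.18
```

Daily indices are what make the scheduled deletion at the end of this document possible: dropping old data is just deleting whole indices by date.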

Fix the permissions on /var/log/messages:

# logstash has no read permission on messages by default
chmod 644 /var/log/messages

Check the configuration file for errors:

# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/system.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs to console
Configuration OK
15:43:39.440 [LogStash::Runner] INFO  logstash.runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
# Start the service:
systemctl restart logstash

# After a successful start, the logstash-systemlog index appears:

# Add it to kibana:

4. Kibana deployment

Kibana can be installed on a separate server.

yum install kibana-5.4.0-x86_64.rpm -y

Edit the configuration file:
grep "^[a-zA-Z]" /etc/kibana/kibana.yml
server.port: 5601	# port
server.host: "192.168.152.138"	# listen address
elasticsearch.url: "http://192.168.152.139:9200"	# Elasticsearch URL

# Check kibana's status
http://192.168.152.138:5601/status

# Start kibana
systemctl restart kibana
systemctl enable kibana

# Kibana index pattern:
[logstash-test]-YYYY.MM.DD

# Display

5. Using if conditions for multiple types

cat /etc/logstash/conf.d/system.conf 
input {
file {
   path => "/var/log/messages"
   type => "systemlog"
   start_position => "beginning"
   stat_interval => "2"
 }


file {
   path => "/var/log/lastlog"
   type => "system-last"
   start_position => "beginning"
   stat_interval => "2"
}}

output {
   if [type] == "systemlog"{
   elasticsearch {
    hosts => ["192.168.152.138:9200"]
    index => "logstash-systemlog-%{+YYYY.MM.dd}"
  }
   file{
    path => "/tmp/last.log"
 }}  
  if [type] == "system-last" {
   elasticsearch {
    hosts => ["192.168.152.138:9200"]
    index => "logstash-lastmlog-%{+YYYY.MM.dd}"
 }}
}

# Check that the configuration is correct
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/system.conf -t
# Restart the service
systemctl restart logstash

 

View the nodes in elasticsearch-head:

Add to kibana:

 

6. Collecting Nginx access logs

Deploy the Nginx service:

Edit the configuration file and prepare a web page:

# Add to nginx.conf
vim conf/nginx.conf

# Log in JSON format
log_format access_json '{"@timestamp":"$time_iso8601",'
        '"host":"$server_addr",'
        '"clientip":"$remote_addr",'
        '"size":$body_bytes_sent,'
        '"responsetime":$request_time,'
        '"upstreamtime":"$upstream_response_time",'
        '"upstreamhost":"$upstream_addr",'
        '"http_host":"$host",'
        '"url":"$uri",'
        '"domain":"$host",'
        '"xff":"$http_x_forwarded_for",'
        '"referer":"$http_referer",'
        '"status":"$status"}';
access_log  /var/log/nginx/access.log access_json;

# Add a site
location /web{
    root html;
    index index.html index.htm;
}

# Create the directory
mkdir /usr/local/nginx/html/web

# Home page
echo 'Nginx webPage!' > /usr/local/nginx/html/web/index.html  

# Stop nginx completely first, otherwise the log format gets mixed up
/usr/local/nginx/sbin/nginx -s stop

# Ownership
chown nginx.nginx /var/log/nginx

# Start
/usr/local/nginx/sbin/nginx 

# Check the access log
[root@linux-host2 conf]# tail -f /var/log/nginx/access.log 
{"@timestamp":"2017-11-20T23:51:00+08:00","host":"192.168.152.139","clientip":"192.168.152.1","size":0,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.152.139","url":"/web/index.html","domain":"192.168.152.139","xff":"-","referer":"-","status":"304"}

Generate traffic:

# Simulate many requests
yum install httpd-tools -y 
# 1000 requests, 100 at a time, finishing in 10 batches
ab -n1000 -c100 http://192.168.152.139/web/index.html

Add the Logstash configuration:

vim /etc/logstash/conf.d/nginx-accesslog.conf   
input {
   file {
     path => "/var/log/nginx/access.log"
     type => "nginx-access-log"
     start_position => "beginning"
     stat_interval => "2"
   }
}

output {
   elasticsearch {
     hosts => ["192.168.152.139:9200"]
     index => "logstash-nginx-access-log-%{+YYYY.MM.dd}"
   }
}

# Check the configuration file:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx-accesslog.conf -t

# Restart logstash
systemctl restart logstash

 

Check that the index was added to ES:

 

 

Add to kibana:

 

7. Converting Tomcat logs to JSON and collecting them

Deploy Tomcat on the server:
install the Java environment and create a custom web page for testing.
Configure Java and deploy Tomcat:

yum install jdk-8u121-linux-x64.rpm
cd /usr/local/src
[root@linux-host1 src]# tar -xf apache-tomcat-8.0.27.tar.gz 
[root@linux-host1 src]# cp -rf apache-tomcat-8.0.27 /usr/local/
[root@linux-host1 src]# ln -sv /usr/local/apache-tomcat-8.0.27/ /usr/local/tomcat
"/usr/local/tomcat" -> "/usr/local/apache-tomcat-8.0.27/"
[root@linux-host1 webapps]# mkdir /usr/local/tomcat/webapps/webdir
[root@linux-host1 webapps]# echo "Tomcat Page" > /usr/local/tomcat/webapps/webdir/index.html
[root@linux-host1 webapps]# ../bin/catalina.sh start
Using CATALINA_BASE:   /usr/local/tomcat
Using CATALINA_HOME:   /usr/local/tomcat
Using CATALINA_TMPDIR: /usr/local/tomcat/temp
Using JRE_HOME:        /usr
Using CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
Tomcat started.
[root@linux-host1 webapps]# ss -lnt | grep 8080
LISTEN     0      100         :::8080                    :::*  

Configure Tomcat's server.xml:

[root@linux-host1 conf]# diff server.xml server.xml.bak 
136,137c136,137
<                prefix="tomcat_access_log" suffix=".log"
<              pattern="{"clientip":"%h","ClientUser":"%l","authenticated":"%u","AccessTime":"%t","method":"%r","status":"%s","SendBytes":"%b","Query?string":"%q","partner":"%{Refere}i","Agentversion":"%{User-Agent}i"}"/>
---
>                prefix="localhost_access_log" suffix=".txt"
>                pattern="%h %l %u %t "%r" %s %b" />

[root@linux-host1 conf]# ../bin/shutdown.sh
[root@linux-host1 conf]# ../bin/startup.sh
Using CATALINA_BASE:   /usr/local/tomcat
Using CATALINA_HOME:   /usr/local/tomcat
Using CATALINA_TMPDIR: /usr/local/tomcat/temp
Using JRE_HOME:        /usr
Using CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
Tomcat started.

Check the log:

[root@linux-host1 tomcat]# tail -f logs/tomcat_access_log.2017-11-21.log | jq
{
  "clientip": "192.168.152.1",
  "ClientUser": "-",
  "authenticated": "-",
  "AccessTime": "[21/Nov/2017:23:45:45 +0800]",
  "method": "GET /webdir2/ HTTP/1.1",
  "status": "304",
  "SendBytes": "-",
  "Query?string": "",
  "partner": "-",
  "Agentversion": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36"
}
{
  "clientip": "192.168.152.1",
  "ClientUser": "-",
  "authenticated": "-",
  "AccessTime": "[21/Nov/2017:23:45:45 +0800]",
  "method": "GET /webdir2/ HTTP/1.1",
  "status": "200",
  "SendBytes": "13",
  "Query?string": "",
  "partner": "-",
  "Agentversion": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36"
}

Add the Logstash configuration:

[root@linux-host2 ~]# vim /etc/logstash/conf.d/tomcat_access.conf 
input {
   file {
     path => "/usr/local/tomcat/logs/tomcat_access_log.*.log"
     type => "tomcat-accesslog"
     start_position => "beginning"
     stat_interval => "2"
   }
}

output {
   if [type] == "tomcat-accesslog" {
   elasticsearch {
     hosts => ["192.168.152.138:9200"]
     index => "logstash-tomcat152139-accesslog-%{+YYYY.MM.dd}"
   }}
}


# Check the configuration file:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tomcat_access.conf -t

Notes:

There must be no trailing space in path => "/usr/local/tomcat/logs/tomcat_access_log.*.log", or the index will never show up; a lesson learned the hard way.

The * in path matches every rotated log file; to tell at a glance which machine an index came from, include the last two octets of its IP address in the index name.
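The trailing-space pitfall is easy to demonstrate with Python's fnmatch, which uses the same shell-style globbing:

```python
# A trailing space makes the glob pattern literal-space-terminated, so nothing matches.
from fnmatch import fnmatch

name = "tomcat_access_log.2017-11-21.log"
print(fnmatch(name, "tomcat_access_log.*.log"))    # True: the rotated file matches
print(fnmatch(name, "tomcat_access_log.*.log "))   # False: note the trailing space
```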

Check ES:

Add to kibana:

 

Concurrency test:

[root@linux-host2 tomcat]# ab -n10000 -c100 http://192.168.152.139:8080/webdir/index.html
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.152.139 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        Apache-Coyote/1.1
Server Hostname:        192.168.152.139
Server Port:            8080

Document Path:          /webdir/index.html
Document Length:        12 bytes

Concurrency Level:      100
Time taken for tests:   17.607 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      2550000 bytes
HTML transferred:       120000 bytes
Requests per second:    567.96 [#/sec] (mean)
Time per request:       176.068 [ms] (mean)
Time per request:       1.761 [ms] (mean, across all concurrent requests)
Transfer rate:          141.44 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   22  24.1     11     158
Processing:    19  154 117.4    116    2218
Waiting:        1  141 113.7     95    2129
Total:         19  175 113.6    142    2226

Percentage of the requests served within a certain time (ms)
  50%    142
  66%    171
  75%    204
  80%    228
  90%    307
  95%    380
  98%    475
  99%    523
 100%   2226 (longest request)

8. Collecting Java logs

Use the multiline codec plugin for multi-line matching. It merges multiple lines into a single event, and its what option controls whether a matching line is merged with the lines before it or after it.
https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html

Example: deploying Logstash on the Elasticsearch server:

chown logstash.logstash /usr/share/logstash/data/queue -R
ll -d /usr/share/logstash/data/queue
cat /etc/logstash/conf.d/java.conf
input{
	stdin{
	codec=>multiline{
	pattern=>"^\["	# an event boundary is a line starting with [
	negate=>true # true inverts the match: lines that do NOT match the pattern are merged
	what=>"previous" # merge with the preceding lines; use "next" to merge with the following ones
	}}
}
filter{ # filtering; filters placed here apply to all logs, or put the logic inside a specific input to target one log
}
output{
	stdout{
	codec=>rubydebug
}}

Test the pattern interactively:

/usr/share/logstash/bin/logstash -e 'input { stdin { codec => multiline { pattern => "^\[" negate => true what => "previous" }}} output { stdout { codec => rubydebug}}'

Note: to match empty lines, use $.
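The merge rule (pattern `^\[`, negate true, what previous) can be sketched in a few lines of Python to show which input lines end up in which event:

```python
# A minimal re-implementation of this multiline rule: any line NOT starting
# with "[" is a continuation and is appended to the previous event.
import re

def merge_multiline(lines, pattern=r"^\["):
    events = []
    for line in lines:
        if re.match(pattern, line) or not events:
            events.append(line)            # a new event starts here
        else:
            events[-1] += "\n" + line      # continuation: join to the previous event
    return events

log = ["[2017-11-23T00:11:09] INFO start",
       "java.lang.Exception: boom",
       "    at Foo.bar(Foo.java:1)",
       "[2017-11-23T00:11:10] INFO done"]
print(merge_multiline(log))  # two events: the stack trace stays with its header line
```

A Java stack trace therefore arrives in Elasticsearch as one document instead of one document per indented line.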

Sample output:

Log format:

[root@linux-host1 ~]# tail /data/logs/elk-cluster.log 
[2017-11-23T00:11:09,559][INFO ][o.e.c.m.MetaDataMappingService] [elk-node1] [logstash-nginx-access-log-2017.11.22/N8AF_HmTSiqBiX7pNulkYw] create_mapping [elasticsearch-java-log]
[2017-11-23T00:11:10,777][INFO ][o.e.c.m.MetaDataCreateIndexService] [elk-node1] [elasticsearch-java-log-2017.11.22] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2017-11-23T00:11:11,881][INFO ][o.e.c.m.MetaDataMappingService] [elk-node1] [elasticsearch-java-log-2017.11.22/S5LpdLyDRCq3ozqVnJnyBg] create_mapping [elasticsearch-java-log]
[2017-11-23T00:11:12,418][INFO ][o.e.c.r.a.AllocationService] [elk-node1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[elasticsearch-java-log-2017.11.22][3]] ...]).

Production configuration file:

vim /etc/logstash/conf.d/java.conf 
input {
   file {
     path => "/data/logs/elk-cluster.log"
     type => "elasticsearch-java-log"
     start_position => "beginning"
     stat_interval => "2"
     codec => multiline
     { pattern => "^\["
     negate => true
     what => "previous" }
}}

output {
   if [type] == "elasticsearch-java-log" {
   elasticsearch {
     hosts => ["192.168.152.138:9200"]
     index => "elasticsearch-java-log-%{+YYYY.MM.dd}"
   }}
}

Validate the syntax:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/java.conf -t

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs to console
Configuration OK
00:06:47.228 [LogStash::Runner] INFO  logstash.runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

Restart the service:

systemctl restart logstash

 

Check ES status:

Add to kibana:

Kibana display:

9. Collecting TCP logs

If some logs were lost, this method can be used to backfill them.
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-tcp.html

# Test configuration

vim /etc/logstash/conf.d/tcp.conf
input {
	tcp {
		port => 5600
		mode => "server"
		type => "tcplog"
	}
} 

output {
	stdout {
		codec => rubydebug
	}
}

# Validate the configuration syntax

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tcp.conf -t

# Start in the foreground

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tcp.conf

Install the nc command on another server:

NetCat (nc) is known as the "Swiss Army knife" of networking tools: a simple, reliable utility that reads and writes data over TCP or UDP, with many other features besides.

yum install nc -y

# Send data to the TCP input defined above
echo "nc test" | nc 192.168.152.139 5600

Verify that Logstash received the data:

{
    "@timestamp" => 2017-11-23T15:36:50.938Z,
          "port" => 34082,
      "@version" => "1",
          "host" => "192.168.152.138",
       "message" => "tcpdata",
          "type" => "tcplog"
}

Send a file with nc:

nc 192.168.152.139 5600 < /etc/passwd

Send messages via a pseudo-device:

On Unix-like operating systems, a device node does not have to correspond to a physical device; device nodes without such a mapping are pseudo-devices, and the OS builds many features on top of them. The shell's /dev/tcp path is one of many pseudo-devices under /dev.

echo "僞設備" > /dev/tcp/192.168.152.139/5600
echo "2222" > /dev/tcp/192.168.152.139/5600

Production configuration:

vim /etc/logstash/conf.d/tomcat_tcp.conf 
input {
   file {
     path => "/usr/local/tomcat/logs/tomcat_access_log.*.log"
     type => "tomcat-accesslog"
     start_position => "beginning"
     stat_interval => "2"
   }
   tcp {
         port => 5600
         mode => "server"
         type => "tcplog"
   }
}

output {
   if [type] == "tomcat-accesslog" {
   elasticsearch {
     hosts => ["192.168.152.138:9200"]
     index => "logstash-tomcat152139-accesslog-%{+YYYY.MM.dd}"
   }}
   if [type] == "tcplog" {
   elasticsearch {
     hosts => ["192.168.152.138:9200"]
     index => "tcplog-test152139-%{+YYYY.MM.dd}"
   }}
}

 

Check ES:

Check kibana:

Send data:

 

 

11. Architecture planning

  Reading the diagram below from left to right: requests for the ELK logging platform first reach a highly available pair of Nginx+keepalived load balancers, accessed via the keepalived VIP, so a failed nginx proxy does not interrupt access. Nginx forwards requests to Kibana, and Kibana fetches its data from Elasticsearch, a two-node cluster that stores data on whichever node it lands on. A Redis server buffers data temporarily, so that when the web servers produce logs faster than they can be collected and stored, nothing is lost: logs are staged in Redis (which can itself be a cluster) and a Logstash server drains them continuously during off-peak hours. A MySQL server persists selected data. The web servers' logs are collected by Filebeat and sent to a separate Logstash, which writes them into Redis, completing collection. As the diagram shows, Redis sits at the very center of the pipeline and both sides depend on it running correctly, so we start by deploying Redis, then collect logs from the web servers into Redis, and then install Elasticsearch, Kibana, and the Logstash that pulls logs out of Redis.

 

12. Logstash collecting logs and writing them to Redis

Deploy Redis on a dedicated server as a log buffer, for scenarios where the web servers generate large volumes of logs. For example, when a server's memory is about to run out, you may find that Redis is holding a large amount of unread data. If memory use grows too high, add more Logstash servers to raise the read rate.

Install and configure Redis:

Redis installation reference link

ln -sv /usr/local/src/redis-4.0.6 /usr/local/redis
cp src/redis-server /usr/bin/
cp src/redis-cli /usr/bin/

bind 192.168.152.139
daemonize yes   # run in the background
# enable save "" and comment out the save lines to disable snapshotting entirely
save ""
#save 900 1
#save 300 10
#save 60 10000
# enable authentication
requirepass 123456 

Start:
redis-server /usr/local/redis/redis.conf

Test:
[root@linux-host2 redis-4.0.6]# redis-cli -h 192.168.152.139
192.168.152.139:6379> KEYS *
(error) NOAUTH Authentication required.
192.168.152.139:6379> auth 123456
OK
192.168.152.139:6379> KEYS
(error) ERR wrong number of arguments for 'keys' command
192.168.152.139:6379> KEYS *
(empty list or set)
192.168.152.139:6379> 

Configure Logstash to write logs to Redis:

Write the Tomcat access logs collected by Logstash on the Tomcat server into the Redis server, then have a separate Logstash pull the data out of Redis and write it into Elasticsearch.

Official documentation:
www.elastic.co/guide/en/logstash/current/plugins-outputs-redis.html

redis-cli -h 192.168.152.139 -a 123456
LLEN rsyslog-5612
LPOP rsyslog-5612 # pop one entry

Check the data:
redis-cli -h 192.168.152.139 -a 123456
# select database 1
SELECT 1
# list the keys
KEYS *

 

Logstash configuration:

input {
  redis {
	data_type => "list"
	host => "192.168.152.139"
	db => "1"
	port => "6379"
	key => "rsyslog-5612"
	password => "123456"
  }
}

output {
  elasticsearch {
    hosts => ["192.168.152.139:9200"]
	index => "redis-rsyslog-5612-%{+YYYY.MM.dd}"
  }
}
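In this pipeline the Redis list acts as a FIFO buffer: the shipping Logstash pushes events onto the key and the indexing Logstash pops them off the other end. A deque models the same flow (a conceptual sketch, not the redis client API):

```python
# Model the Redis list buffer between log shipper and indexer with a deque.
from collections import deque

buffer = deque()                      # stands in for the Redis list key "rsyslog-5612"

def rpush(event):                     # shipper side: append to the tail
    buffer.append(event)

def lpop():                           # indexer side: pop from the head
    return buffer.popleft() if buffer else None

rpush({"type": "rsyslog-5612", "message": "line 1"})
rpush({"type": "rsyslog-5612", "message": "line 2"})
print(lpop()["message"])  # -> line 1: events leave in arrival order
```

Because producers and consumers only touch opposite ends of the list, the indexer can fall behind during traffic spikes and catch up later without losing events, which is exactly the role Redis plays in the architecture above.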

 

To be added:

Collecting haproxy logs via rsyslog:
In CentOS 6 and earlier the daemon was called syslog; from CentOS 7 it is rsyslog. According to the project (2013 figures), rsyslog can forward on the order of a million log lines per second. Official site: http://www.rsyslog.com/. Check the installed version with the commands below:

Install:
yum install gcc gcc-c++ pcre pcre-devel openssl openssl-devel -y

make TARGET=linux2628 USER_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 PREFIX=/usr/local/haproxy

make install PREFIX=/usr/local/haproxy

# Check the version
/usr/local/haproxy/sbin/haproxy -v

Prepare the service unit:
vim /usr/lib/systemd/system/haproxy.service
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target

[Service]
EnvironmentFile=/etc/sysconfig/haproxy
ExecStart=/usr/sbin/haproxy-systemd-wrapper -f /etc/sysconfig/haproxy.cfg -p /run/haproxy.pid $OPTIONS
ExecReload=/bin/kill -USR2 $MAINPID

[Install]
WantedBy=multi-user.target


[root@linux-host2 haproxy-1.7.9]# cp /usr/local/src/haproxy-1.7.9/haproxy-systemd-wrapper /usr/sbin/
[root@linux-host2 haproxy-1.7.9]# cp /usr/local/src/haproxy-1.7.9/haproxy /usr/sbin/

vim /etc/sysconfig/haproxy # system-level options file
OPTIONS=""

mkdir /etc/haproxy

vim /etc/sysconfig/haproxy.cfg
global
maxconn 100000
chroot /usr/local/haproxy
uid 99
gid 99
daemon
nbproc 1
pidfile /usr/local/haproxy/run/haproxy.pid
log 127.0.0.1 local6 info

defaults
option http-keep-alive
option forwardfor
maxconn 100000
mode http
timeout connect 300000ms
timeout client	300000ms
timeout server 	300000ms

listen stats
mode http
bind 0.0.0.0:9999
stats enable
log global
stats uri	/haproxy-status
stats auth 	headmin:123456

#frontend web_port
frontend web_port
	bind 0.0.0.0:80
	mode http
	option httplog
	log global
	option forwardfor
	
###################ACL Setting###################
	acl pc 			hdr_dom(host) -i www.elk.com
	acl mobile		hdr_dom(host) -i m.elk.com
###################USE ACL ######################
	use_backend		pc_host		if pc
	use_backend		mobile_host	if mobile
#################################################

backend pc_host
	mode	http
	option	httplog
	balance	source
	server	web1 192.168.56.11:80 check inter 2000 rise 3 fall 2 weight 1 


backend mobile_host
	mode	http
	option 	httplog
	balance source
	server web1	192.168.56.11:80 check inter 2000 rise 3 fall 2 weight 1
	


vim /etc/rsyslog.conf    
$ModLoad imudp
$UDPServerRun 514

$ModLoad imtcp
$InputTCPServerRun 514

local6.* 	@@192.168.152.139:5160


Restart the rsyslog service:
systemctl restart rsyslog

input{
 syslog {
   type => "rsyslog-5612"
   port => "5160"
 }
}

output {
  stdout {
     codec => rubydebug
  }
}

###########################
input{
 syslog {
   type => "rsyslog-5612"
   port => "5160"
 }
}
output {
 if [type] == "rsyslog-5612"{
   elasticsearch {
     hosts => ["192.168.152.139:9200"]
	 index => "rsyslog-5612-%{+YYYY.MM.dd}"
   }}
}

Using Filebeat instead of Logstash to collect logs

Filebeat is a lightweight, single-purpose log collector for servers that do not have Java installed; it can forward logs to Logstash, Elasticsearch, Redis, and other destinations for further processing.
官網下載地址:https://www.elastic.co/downloads/beats/filebeat
官方文檔:https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html
 
Confirm the log format is JSON:
First hit the web server to generate some logs, then confirm they are JSON:
ab -n100 -c100 http://192.168.56.16:8080/web
 
Install:
yum -y install filebeat-5.4.0-x86_64.rpm
 
 
 
 
[root@linux-host2 src]# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log
    - /var/log/messages
  exclude_lines: ["^DBG","^$"]  # empty lines cause write errors downstream
  document_type: system-log-5612 # log type
output.file:
  path: "/tmp"
  name: "filebeat.txt"
[root@linux-host2 src]# systemctl restart filebeat
 
Test by echoing into the log:
echo "test" >> /var/log/messages
 
[root@linux-host2 src]# tail -f /tmp/filebeat
{"@timestamp":"2017-12-21T15:45:05.715Z","beat":{"hostname":"linux-host2.example.com","name":"linux-host2.example.com","version":"5.4.0"},"input_type":"log","message":"Dec 21 23:45:01 linux-host2 systemd: Starting Session 9 of user root.","offset":1680721,"source":"/var/log/messages","type":"system-log-5612"}
 
Logstash collects the logs and writes them into Redis:
 
Output to Redis:
output.redis:
  hosts: ["192.168.56.12:6379"]
  db: "1" # which database to use
  timeout: "5" # timeout in seconds
  password: "123456" # Redis password
  key: "system-log-5612" # a custom key is recommended for later processing
 
Check the data:
SELECT 3
KEYS *
LLEN system-log-5612
RPOP system-log-5612
 
 
Pull logs from Redis:
input {
  redis {
      data_type => "list"
      host => "172.20.8.13"
      db => "1"
      port => "6379"
      key => "system-log-0840"
      password => "123456"
  }
}
output {
  if [type] == "system-log-0840" {
    elasticsearch {
      hosts => ["172.20.8.12:9200"]
      index => "system-log-0840-%{+YYYY.MM.dd}"
    }
  }
}
 
Logstash typically processes a few hundred lines per second, while Redis can absorb over a million lines per second.

Monitoring the Redis queue length

 

In production, Redis may accumulate a large backlog because Logstash fails to drain it in time for one reason or another; this drives up the Redis server's memory usage, sometimes until memory is nearly exhausted:
checking the length of the log queue in Redis then reveals a large backlog:

Install the Python redis module:
yum install python-pip -y
pip install redis

Alert script:
#!/usr/bin/env python
#coding:utf-8
#Author
import redis

def redis_conn():
    pool = redis.ConnectionPool(host="192.168.56.12", port=6379, db=1, password="123456")
    conn = redis.Redis(connection_pool=pool)
    data = conn.llen('tomcat-accesslog-5612')
    print(data)

redis_conn()

Test the output path through Logstash

vim beats.conf
input{
    beats{
	    port => 5044
	}
}

output{
    stdout {
	    codec => rubydebug
	}
}

# Temporarily switch the output to a file for testing
output{
    file{
	    path => "/tmp/filebeat.txt"
	}
}


Change the Filebeat output from Redis to Logstash:
output.logstash:
  hosts: ["192.168.56.11:5044"] # logstash server address(es); multiple entries are allowed
  enabled: true # whether this output is enabled; defaults to true
  worker: 2 # number of worker threads
  compression_level: 3 # compression level
  loadbalance: true # load-balance across multiple outputs



Configure Logstash to receive the Beats input and store it in Redis:
vim beats.conf
input{
    beats{
	    port => 5044
	}
}

output{
  if [type] == "filebeat-system-log-5612"{
  redis {
      data_type => "list"
	  host => "192.168.56.12"
	  db => "3"
	  port => "6379"
	  key => "filebeat-system-log-5612-%{+YYYY.MM.dd}"
	  password => "123456"
  }}
}


Pull the data from Redis and write it into Elasticsearch:
vim redis-es.yaml
input {
  redis {
      data_type => "list"
	  host => "192.168.56.12"
	  db => "3"
	  port => "6379"
	  key => "filebeat-system1-log-5612"
	  password => "123456"
  }
}

output {
  if [type] == "filebeat-system1-log-5612" {
  elasticsearch {
    hosts => ["192.168.56.11:9200"]
	index => "filebeat-system1-log-5612-%{+YYYY.MM.dd}"
  }}
}

Filebeat collecting Tomcat logs

Add the following to the Filebeat configuration:
- input_type: log
  paths:
    - /usr/local/tomcat/logs/tomcat_access_log.*.log
  document_type: tomcat-accesslog-5612
 
 
grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/messages
    - /var/log/*.log
  exclude_lines: ["^DBG","^$"]
  document_type: filebeat-system-log-5612
- input_type: log
  paths:
    - /usr/local/tomcat/logs/tomcat_access_log.*.log
  document_type: tomcat-accesslog-5612
output.logstash:
  hosts: ["192.168.56.11:5044"]
  enabled: true
  worker: 2
  compression_level: 3
 
Logstash receives the Beats events and forwards them to Redis:
vim beats.conf
input{
    beats{
        port => 5044
    }
}
 
output{
  if [type] == "filebeat-system-log-5612"{
  redis {
      data_type => "list"
      host => "192.168.56.12"
      db => "3"
      port => "6379"
      key => "filebeat-system-log-5612-%{+YYYY.MM.dd}"
      password => "123456"
  }}
  if [type] == "tomcat-accesslog-5612" {
      redis {
      data_type => "list"
      host => "192.168.56.12"
      db => "4"
      port => "6379"
      key => "tomcat-accesslog-5612"
      password => "123456"
  }}
}
 
Verify in Redis with LPOP:
 
 
Pull the data from Redis and write it into Elasticsearch:
vim redis-es.yaml
input {
  redis {
      data_type => "list"
      host => "192.168.56.12"
      db => "3"
      port => "6379"
      key => "filebeat-system1-log-5612"
      password => "123456"
  }
  redis {
      data_type => "list"
      host => "192.168.56.12"
      db => "4"
      port => "6379"
      key => "tomcat-accesslog-5612"
      password => "123456"
  }
}
 
output {
  if [type] == "filebeat-system1-log-5612" {
  elasticsearch {
    hosts => ["192.168.56.11:9200"]
    index => "filebeat-system1-log-5612-%{+YYYY.MM.dd}"
  }}
  if [type] == "tomcat-accesslog-5612" {
  elasticsearch {
    hosts => ["192.168.56.12:9200"]
    index => "tomcat-accesslog-5612-%{+YYYY.MM.dd}"
  }}
}
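The conditional output above is just a dispatch from the event's type field to an index name; in plain code (hypothetical event dicts, fixed date for illustration):

```python
# Route an event to its daily index based on the "type" field, as the
# if-blocks in the logstash output do; unmatched types get no index.
def route(event, day="2017.12.21"):
    targets = {
        "filebeat-system1-log-5612": "filebeat-system1-log-5612",
        "tomcat-accesslog-5612": "tomcat-accesslog-5612",
    }
    prefix = targets.get(event.get("type"))
    return f"{prefix}-{day}" if prefix else None  # None: event is not indexed

print(route({"type": "tomcat-accesslog-5612"}))  # -> tomcat-accesslog-5612-2017.12.21
```

This is why every input in the pipeline must stamp a type (or document_type in Filebeat): an event whose type matches no branch silently goes nowhere.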
 
 
Add to kibana:

Adding a proxy

Add an haproxy front end:

##################ACL Setting#################
	acl pc		hdr_dom(host) -i www.elk.com
	acl mobile	hdr_dom(host) -i m.elk.com
	acl kibana	hdr_dom(host) -i www.kibana5612.com
##################USE ACL######################
	use_backend	pc_host			if pc
	use_backend mobile_host		if mobile
	use_backend kibana_host		if kibana
###############################################

backend kibana_host
	mode http
	option httplog
	balance source
	server web1 127.0.0.1:5601 check inter 2000 rise 3 fall 2 weight 1

Kibana configuration:
server.port: 5601
server.host: "127.0.0.1"
elasticsearch.url: "http://192.168.56.12:9200"

systemctl start kibana
systemctl enable kibana


Nginx reverse proxy with basic authentication:
vim nginx.conf 
include /usr/local/nginx/conf/conf.d/*.conf;

vim /usr/local/nginx/conf/conf.d/kibana5612.conf
upstream kibana_server {
	server 127.0.0.1:5601 weight=1 max_fails=3 fail_timeout=60;
}

server {
	listen 80;
	server_name www.kibana5611.com;
	location /{
	proxy_pass http://kibana_server;
	proxy_http_version 1.1;
	proxy_set_header Upgrade $http_upgrade;
	proxy_set_header Connection 'upgrade';
	proxy_set_header Host $host;
	proxy_cache_bypass $http_upgrade;
	}
}

yum install httpd-tools -y
# the first run needs -c to create the file
htpasswd -bc /usr/local/nginx/conf/htpasswd.users luchuangao 123456
# drop -c on later runs, or the existing file will be overwritten
htpasswd -b /usr/local/nginx/conf/htpasswd.users luchuangao 123456
# inspect: tail /usr/local/nginx/conf/htpasswd.users
...
# ownership
chown nginx.nginx /usr/local/nginx/conf/htpasswd.users
# reload the service
/usr/local/nginx/sbin/nginx -s reload
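For reference, an htpasswd line can also be generated without the httpd-tools package. The stdlib sketch below uses the `{SHA}` scheme (what `htpasswd -s` produces) rather than the MD5-based default of `-b` above, since Apache's apr1-MD5 algorithm is not in the Python standard library:

```python
# Build an htpasswd entry in the {SHA} format: user:{SHA}base64(sha1(password)).
import base64
import hashlib

def htpasswd_sha_line(user, password):
    """Equivalent of `htpasswd -nbs user password` (the {SHA} scheme)."""
    digest = base64.b64encode(hashlib.sha1(password.encode()).digest()).decode()
    return f"{user}:{{SHA}}{digest}"

print(htpasswd_sha_line("luchuangao", "123456"))
```

Note SHA1 is weak by modern standards; on a real deployment prefer `htpasswd -B` (bcrypt) from httpd-tools.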

Add the auth directives to the nginx configuration file:
vim /usr/local/nginx/conf/conf.d/kibana5612.conf
upstream kibana_server {
	server 127.0.0.1:5601 weight=1 max_fails=3 fail_timeout=60;
}

server {
	listen 80;
	server_name www.kibana5611.com;
	auth_basic "Restricted Access";
	auth_basic_user_file /usr/local/nginx/conf/htpasswd.users;
	location /{
	proxy_pass http://kibana_server;
	proxy_http_version 1.1;
	proxy_set_header Upgrade $http_upgrade;
	proxy_set_header Connection 'upgrade';
	proxy_set_header Host $host;
	proxy_cache_bypass $http_upgrade;
	}
}

 

Scheduled index deletion for ELK

http://www.iyunw.cn/archives/elk-mei-ri-qing-chu-30-tian-suo-yin-jiao-ben/
