The Complete ELK Setup and Usage Guide

ELK setup documentation

Environment preparation
OS: CentOS 6.8
IP: 192.168.137.52  node1
    192.168.137.48  node2

[root@node2 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.137.48 node2
192.168.137.52 node1
Perform the same steps on node1.

The ELK preparation environment is identical on both machines.
I. Installing Elasticsearch
[root@node2 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
Add the yum repository:
[root@node2 ~]# vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
1. Install Elasticsearch
[root@node2 ~]# yum install -y elasticsearch
2. Logstash
Download and install the GPG key:
[root@node2 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
Add the yum repository:
[root@node2 ~]# vim /etc/yum.repos.d/logstash.repo
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
3. Install Logstash
[root@node2 ~]# yum install -y logstash
4. Install Kibana
[root@node2 ~]# cd /usr/local/src
[root@node2 src]# wget https://download.elastic.co/kibana/kibana/kibana-4.3.1-linux-x64.tar.gz
[root@node2 src]# tar zxf kibana-4.3.1-linux-x64.tar.gz
[root@node2 src]# mv kibana-4.3.1-linux-x64 /usr/local/
[root@node2 src]# ln -s /usr/local/kibana-4.3.1-linux-x64/ /usr/local/kibana
5. Install Redis, nginx, and Java
[root@node2 src]# yum install epel-release -y
[root@node2 src]# yum install -y redis nginx java
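Elasticsearch and Logstash both run on the JVM, so it is worth confirming the Java runtime before continuing:
[root@node2 src]# java -version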
II. Configuring and Managing Elasticsearch
Configure Elasticsearch on node1 and set ownership of the data directory:
[root@node1 ~]# grep -n '^[a-Z]' /etc/elasticsearch/elasticsearch.yml
17:cluster.name: check-cluster #nodes with the same cluster name join the same cluster
23:node.name: node1 #the node's hostname
33:path.data: /data/es-data #data directory
37:path.logs: /var/log/elasticsearch/ #log directory
43:bootstrap.memory_lock: true #lock memory so it is not swapped out
54:network.host: 0.0.0.0 #IPs allowed to access
58:http.port: 9200 #port
[root@node1 ~]# mkdir -p /data/es-data
[root@node1 ~]# chown elasticsearch.elasticsearch /data/es-data/
[root@node1 ~]# /etc/init.d/elasticsearch start
[root@node1 ~]# chkconfig elasticsearch on
[root@node1 ~]# netstat -lntup|grep 9200
tcp 0 0 :::9200 :::* LISTEN 2443/java
Visit the node's IP on port 9200 and the node information is displayed; if not, check whether the firewall lets port 9200 through. For example:
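A minimal sketch of opening the port with iptables on CentOS 6 (assuming the stock iptables service):
[root@node1 ~]# iptables -I INPUT -p tcp --dport 9200 -j ACCEPT
[root@node1 ~]# service iptables save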
Interact with Elasticsearch through its RESTful API. Check the current index and shard status (a plugin shown later will visualize this):
[root@node1 ~]# curl -i -XGET 'http://192.168.137.52:9200/_count?pretty' -d '{
"query": {
"match_all": {}
}
}'

HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 95
{
"count" : 0, #索引0個
"_shards" : { #分區0個
"total" : 0,
"successful" : 0, #成功0個
"failed" : 0 #失敗0個
}
}
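The same RESTful API also reports cluster health; for example:
[root@node1 ~]# curl -XGET 'http://192.168.137.52:9200/_cluster/health?pretty'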
Use the head plugin to display index and shard status:
[root@node1 ~]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
In the plugin, create an index-demo/test index by submitting a request at
http://192.168.137.52:9200/_plugin/head/
Click the compound-query tab, enter index-demo/test on the second line, fill the blank body area with the document to index, and submit:
Send a GET request (other request types work too) to query the document created above: put its id after the type in the index field, select GET, and submit the request.
View the created index under basic query: click "Basic Query", then "Search", and the information is displayed.
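The same document operations work from the shell instead of the head UI; a sketch, where the document body is just an example:
[root@node1 ~]# curl -XPOST 'http://192.168.137.52:9200/index-demo/test' -d '{"user":"check","mesg":"hello world"}'
[root@node1 ~]# curl -XGET 'http://192.168.137.52:9200/index-demo/test/_search?q=user:check&pretty'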
III. Managing Elasticsearch on node2
Copy node1's configuration file to node2, change node.name in it, and set ownership of the data directory.
The cluster.name in the configuration files must be identical: when nodes start, they use multicast by default to find the other nodes of the cluster.
[root@node1 ~]# scp /etc/elasticsearch/elasticsearch.yml 192.168.137.48:/etc/elasticsearch/elasticsearch.yml
[root@node2 ~]# grep -n '^[a-Z]' /etc/elasticsearch/elasticsearch.yml
17:cluster.name: check-cluster
23:node.name: node2
33:path.data: /data/es-data
37:path.logs: /var/log/elasticsearch/
43:bootstrap.memory_lock: true
54:network.host: 0.0.0.0
58:http.port: 9200
[root@node2 ~]# mkdir -p /data/es-data
[root@node2 ~]# chown elasticsearch.elasticsearch /data/es-data/
Start Elasticsearch:
[root@node2 ~]# /etc/init.d/elasticsearch start
[root@node2 ~]# chkconfig elasticsearch on
Add the following on both nodes to use unicast discovery (multicast was tried but did not take effect):
[root@node1 ~]# grep -n "^discovery" /etc/elasticsearch/elasticsearch.yml
68:discovery.zen.ping.unicast.hosts: ["node1", "node2"]
[root@node2 ~]# grep -n "^discovery" /etc/elasticsearch/elasticsearch.yml
68:discovery.zen.ping.unicast.hosts: ["node1", "node2"]
[root@node2 ~]# /etc/init.d/elasticsearch restart
[root@node1 ~]# /etc/init.d/elasticsearch restart
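After both nodes restart, the cluster membership can also be confirmed from the command line:
[root@node1 ~]# curl -XGET 'http://192.168.137.52:9200/_cat/nodes?v'
Both node1 and node2 should be listed.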
View the shard information in the browser. By default an index is split into five primary shards (the number is adjustable). In the screenshot, shards with a green border are primary shards and the unbordered ones are replicas. If a primary shard is lost, a replica is promoted to primary, providing high availability; primaries and replicas can also be load-balanced to speed up queries. If both a primary and all of its replicas are lost, the index is lost for good.
http://192.168.137.52:9200/_plugin/head/
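The defaults of five primary shards and one replica apply per index and can be overridden at index-creation time; a sketch, using a hypothetical index named my-index:
[root@node1 ~]# curl -XPUT 'http://192.168.137.52:9200/my-index' -d '{
"settings": { "number_of_shards": 3, "number_of_replicas": 1 }
}'
Note that number_of_shards cannot be changed after the index exists, while number_of_replicas can be updated at any time.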
Use the kopf plugin to monitor Elasticsearch:
http://192.168.137.52:9200/_plugin/kopf/#!/cluster
[root@node1 ~]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
Click "nodes"; the page shows each node's load, CPU usage, JVM heap usage, disk usage, and uptime:
http://192.168.137.52:9200/_plugin/kopf/#!/nodes
Besides this, the kopf plugin also provides a REST console and more. A similar plugin is bigdesk, but bigdesk does not support 2.1 yet! It would be installed as follows:
[root@node1 ~]# /usr/share/elasticsearch/bin/plugin install hlstudio/bigdesk
http://192.168.137.52:9200/_plugin/bigdesk/
IV. Configuring Logstash
Start a Logstash instance. -e executes the configuration given on the command line; input/stdin (standard input) is a plugin, and output/stdout is standard output:
[root@node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Settings: Default filter workers: 1
Logstash startup completed
check #typed input
2018-05-22T12:43:12.064Z node1 check #echoed output
www.baidu.com #typed input
2018-05-22T12:43:27.696Z node1 www.baidu.com #echoed output
Use rubydebug for detailed output; a codec is an encoder/decoder:
[root@node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug} }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Settings: Default filter workers: 1
Logstash startup completed
check #typed input
{
"message" => "check",
"@version" => "1",
"@timestamp" => "2018-05-22T12:50:07.161Z",
"host" => "node1"
}
Each piece of output above is called an event; related lines can also be merged into a single event (for example, the consecutive lines of one log entry count as one event)!
Use Logstash to write messages into Elasticsearch:
[root@node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.137.52:9200"] } }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Settings: Default filter workers: 1
Logstash startup completed
maliang
check
baidu.com
www.baidu.com
View the index Logstash created in Elasticsearch.
Write to Elasticsearch and at the same time output a copy locally, i.e. keep a local text file; this removes the need to back Elasticsearch up to a remote site on a schedule. Keeping plain text files has three big advantages: 1) text is the simplest format, 2) text can be reprocessed later, 3) text achieves the best compression ratio.
[root@node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.137.52:9200"] } stdout{ codec => rubydebug } }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Settings: Default filter workers: 1
Logstash startup completed
www.google.com
{
"message" => "www.google.com",
"@version" => "1",
"@timestamp" => "2018-05-22T13:03:48.940Z",
"host" => "node1"
}
www.elastic.co
{
"message" => "www.elastic.co",
"@version" => "1",
"@timestamp" => "2018-05-22T13:04:06.880Z",
"host" => "node1"
}
#Start Logstash with a configuration file; one copy of each event is written to Elasticsearch
[root@node1 ~]# vim normal.conf
[root@node1 ~]# cat normal.conf
input { stdin { } }
output {
elasticsearch { hosts => ["localhost:9200"] }
stdout { codec => rubydebug }
}

[root@node1 ~]# /opt/logstash/bin/logstash -f normal.conf
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Settings: Default filter workers: 1
Logstash startup completed
123 #typed input
{
"message" => "123",
"@version" => "1",
"@timestamp" => "2018-05-22T13:33:38.994Z",
"host" => "node1"
}
Learning the conf format
Input plugin configuration, using file as the example; several inputs can be defined:
input {
file {
path => "/var/log/messages"
type => "syslog"
}

file {
path => "/var/log/apache/access.log"
type => "apache"
}
}
Several ways to specify files to collect: an array, glob matching, or multiple path entries.
path => ["/var/log/messages","/var/log/*.log"]
path => ["/data/mysql/mysql.log"]
Setting a boolean value:
ssl_enable => true
File size units:
my_bytes => "1113" # 1113 bytes
my_bytes => "10MiB" # 10485760 bytes
my_bytes => "100kib" # 102400 bytes
my_bytes => "180 mb" # 180000000 bytes
JSON codec:
codec => "json"
Hash values:
match => {
"field1" => "value1"
"field2" => "value2"
...
}
Port:
port => 33
Password:
my_password => "password"
A conf for collecting system logs
Write the conf file with the input and output plugins:
[root@node1 ~]# cat system.conf
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
}
output {
elasticsearch {
hosts => ["192.168.137.52:9200"]
index => "system-%{+YYYY.MM.dd}"
}
}
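Start it the same way as the earlier examples:
[root@node1 ~]# /opt/logstash/bin/logstash -f system.conf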
Click "Data Browse" in the head plugin to see the entries.
Collecting Elasticsearch error logs
Here the system log from above and this error log (a Java application log) are collected together. An if conditional writes the two log types to separate indices. The type key (it is literally named type and cannot be renamed) must not duplicate the name of any field of the log format; in other words, the log must not contain a field named type.
[root@node1 ~]# cat all.conf
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
file {
path => "/var/log/elasticsearch/check-cluster.log"
type => "es-error"
start_position => "beginning"
}
}
output {

if [type] == "system" {
    elasticsearch {
        hosts => ["192.168.137.52:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}

if [type] == "es-error" {
    elasticsearch {
        hosts => ["192.168.137.52:9200"]
        index => "es-error-%{+YYYY.MM.dd}"
    }
}

}正則表達式


Collecting a multi-line error into one event, with an example
Lines beginning with "at org..." all belong to one stack trace, but they appear on separate lines, which makes the log awkward to read, so they need to be merged into a single event. Enter the multiline codec plugin.
The official documentation gives:
input {
stdin {
codec => multiline {
pattern => "pattern, a regexp"
negate => "true" or "false"
what => "previous" or "next"
}
}
}
pattern: the regular expression deciding which lines are merged
negate: match the pattern positively or negatively
what: merge into the previous line or the next one
Test on stdin/stdout to prove that multiple lines are collected into one event:
[root@node1 ~]# cat muliline.conf
input {
stdin {
codec => multiline {
pattern => "^["
negate => true
what => "previous"
}
}
}
output {
stdout {
codec => "rubydebug"
}
}
[root@node1 ~]# /opt/logstash/bin/logstash -f muliline.conf
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Settings: Default filter workers: 1
Logstash startup completed
[1
[2
{
"@timestamp" => "2018-05-22T15:53:07.756Z",
"message" => "[1",
"@version" => "1",
"host" => "node1"
}
[3
{
"@timestamp" => "2018-05-22T15:53:14.942Z",
"message" => "[2",
"@version" => "1",
"host" => "node1"
}
Now send this multiline result into the es-error index of all.conf:
[root@node1 ~]# cat all.conf
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}

file {
    path => "/var/log/elasticsearch/check-cluster.log"
    type => "es-error"
    start_position => "beginning"
    codec => multiline {
        pattern => "^\["
        negate => true
        what => "previous"
    }
}

}
output {

if [type] == "system" {
    elasticsearch {
        hosts => ["192.168.137.52:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}

if [type] == "es-error" {
    elasticsearch {
        hosts => ["192.168.137.52:9200"]
        index => "es-error-%{+YYYY.MM.dd}"
    }
}

}
[root@node1 ~]# /opt/logstash/bin/logstash -f all.conf
V. Getting Familiar with Kibana
Edit the Kibana configuration file to make it effective:
[root@node1 ~]# vim /usr/local/kibana/config/kibana.yml
[root@node1 ~]# grep '^[a-Z]' /usr/local/kibana/config/kibana.yml
server.port: 5601 #port
server.host: "0.0.0.0" #address to bind for external access
elasticsearch.url: "http://192.168.137.52:9200" #URL used to reach Elasticsearch
kibana.index: ".kibana" #the .kibana index is created in Elasticsearch
[root@node1 ~]# screen
[root@node1 ~]# /usr/local/kibana/bin/kibana
Detach from the screen session with Ctrl+a, d.
Open 192.168.137.52:5601 in a browser.
To verify that the Logstash multiline plugin took effect, add a logstash-* index in Kibana. If Discover shows nothing after the index is added, delete it, re-create it, and untick the two leading checkboxes.
The default fields are visible; select Discover to view the events.
Collecting nginx, syslog, and TCP logs with Logstash
Collecting nginx access logs
The json codec is used here to split the log into fields as key-value pairs; this makes the format clearer and easier to search, and it also lowers CPU load.
Change the log format in the nginx configuration file to JSON:
[root@node1 ~]# sed -n '19,37p' /etc/nginx/nginx.conf
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
log_format json '{ "@timestamp": "$time_local", '
'"@fields": { '
'"remote_addr": "$remote_addr", '
'"remote_user": "$remote_user", '
'"body_bytes_sent": "$body_bytes_sent", '
'"request_time": "$request_time", '
'"status": "$status", '
'"request": "$request", '
'"request_method": "$request_method", '
'"http_referrer": "$http_referer", '
'"body_bytes_sent":"$body_bytes_sent", '
'"http_x_forwarded_for": "$http_x_forwarded_for", '
'"http_user_agent": "$http_user_agent" } }';

access_log /var/log/nginx/access_json.log json;
[root@node1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@node1 ~]# service nginx restart
[root@node1 ~]# netstat -ntupl |grep nginx
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 4091/nginx
tcp 0 0 :::80 :::* LISTEN 4091/nginx
The log entries are now emitted in the JSON format defined above.
Enter 192.168.137.52 in a browser and refresh a few times; access-log entries appear.
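Instead of refreshing a browser, a small curl loop can generate test traffic; a sketch:
[root@node1 ~]# for i in `seq 1 20`; do curl -s http://192.168.137.52/ > /dev/null; done
[root@node1 ~]# tail -1 /var/log/nginx/access_json.log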
Use Logstash to collect the nginx access log by extending all.conf:
[root@node1 ~]# cat all.conf
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}

file {
    path => "/var/log/elasticsearch/check-cluster.log"
    type => "es-error"
    start_position => "beginning"
    codec => multiline {
        pattern => "^\["
        negate => true
        what => "previous"
    }
}
   file {
    path => "/var/log/nginx/access_json.log"
    codec => json
    start_position => "beginning"
    type => "nginx-log"
}

}
output {

if [type] == "system" {
    elasticsearch {
        hosts => ["192.168.137.52:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}

if [type] == "es-error" {
    elasticsearch {
        hosts => ["192.168.137.52:9200"]
        index => "es-error-%{+YYYY.MM.dd}"
    }
}
 if [type] == "nginx-log" {
    elasticsearch {
        hosts => ["192.168.137.52:9200"]
        index => "nginx-log-%{+YYYY.MM.dd}"
    }
}

}
Add nginx-log to Kibana and display it.
Collecting syslog
Earlier the system log /var/log/messages was collected with the file input, but in production the syslog input plugin is used to receive logs directly. Modify the rsyslog configuration so log messages are forwarded to port 514:
[root@node1 ~]# vim /etc/rsyslog.conf
*.* @@192.168.137.52:514
[root@node1 ~]# /etc/init.d/rsyslog restart
[root@node1 ~]# cat all.conf
input {
syslog {
type => "system-syslog"
host => "192.168.137.52"
port => "514"
}
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}

file {
    path => "/var/log/elasticsearch/check-cluster.log"
    type => "es-error"
    start_position => "beginning"
    codec => multiline {
        pattern => "^\["
        negate => true
        what => "previous"
    }
}
   file {
    path => "/var/log/nginx/access_json.log"
    codec => json
    start_position => "beginning"
    type => "nginx-log"
}

}
output {

if [type] == "system" {
    elasticsearch {
        hosts => ["192.168.137.52:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}

if [type] == "es-error" {
    elasticsearch {
        hosts => ["192.168.137.52:9200"]
        index => "es-error-%{+YYYY.MM.dd}"
    }
}
 if [type] == "nginx-log" {
    elasticsearch {
        hosts => ["192.168.137.52:9200"]
        index => "nginx-log-%{+YYYY.MM.dd}"
    }
}
   if [type] == "system-syslog" {
    elasticsearch {
        hosts => ["192.168.137.52:9200"]
        index => "system-syslog-%{+YYYY.MM.dd}"
    }
}

}
[root@node1 ~]# /opt/logstash/bin/logstash -f all.conf
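To verify the syslog path end to end, the standard logger utility can write a test message through rsyslog, which forwards it to port 514:
[root@node1 ~]# logger "syslog test message from node1"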
The newly added system-syslog index is now visible in the Elasticsearch head plugin.
Using Redis to broker Logstash messages. Modify the Redis configuration file and start Redis:
[root@node1 ~]# vim /etc/redis.conf
bind 192.168.137.52
daemonize yes
[root@node1 ~]# /etc/init.d/redis start
[root@node1 ~]# netstat -ntupl|grep redis
tcp 0 0 192.168.137.52:6379 0.0.0.0:* LISTEN 5031/redis-server
Write redis-out.conf:
[root@node1 ~]# cat redis-out.conf
input{
stdin{
}
}
output{
redis{
host => "192.168.137.52"
port => "6379"
db => "6"
data_type => "list" # 數據類型爲list
key => "demo"
}
}
Start Logstash with the configuration file and type some input:
[root@node1 ~]# /opt/logstash/bin/logstash -f redis-out.conf
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Settings: Default filter workers: 1
Logstash startup completed
check
www.uc123.com
Connect to Redis with redis-cli and view the stored input:
[root@node1 ~]# redis-cli -h 192.168.137.52
192.168.137.52:6379> select 6
OK
192.168.137.52:6379[6]> keys *
1) "demo"
192.168.137.52:6379[6]> lindex demo -2
"{\"message\":\"check\",\"@version\":\"1\",\"@timestamp\":\"2018-05-24T00:15:25.758Z\",\"host\":\"node1\"}"
192.168.137.52:6379[6]> lindex demo -1
"{\"message\":\"www.uc123.com\",\"@version\":\"1\",\"@timestamp\":\"2018-05-24T00:15:31.878Z\",\"host\":\"node1\"}"
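lindex reads one element at a time; lrange dumps a whole range of the list in one call:
192.168.137.52:6379[6]> lrange demo 0 -1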
To prepare for the next step, where a redis input plugin ships these messages to Elasticsearch, write some more data into Redis:
[root@node1 ~]# /opt/logstash/bin/logstash -f redis-out.conf
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Settings: Default filter workers: 1
Logstash startup completed
check
www.uc123.com
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
v
w
x
y
z
Check the length of the key named demo in Redis:
192.168.137.52:6379[6]> llen demo
(integer) 28
Ship messages from Redis to Elasticsearch by writing redis-in.conf:
[root@node1 ~]# cat redis-in.conf
input{
redis {
host => "192.168.137.52"
port => "6379"
db => "6"
data_type => "list"
key => "demo"
}
}
output{
elasticsearch {
hosts => ["192.168.137.52:9200"]
index => "redis-demo-%{+YYYY.MM.dd}"
}
}
Start Logstash with this configuration file:
[root@node1 ~]# /opt/logstash/bin/logstash -f redis-in.conf
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Settings: Default filter workers: 1
Logstash startup completed
Keep checking the length of the demo key (it is consumed quickly, so check fast):
192.168.137.52:6379[6]> llen demo
(integer) 24
192.168.137.52:6379[6]> llen demo
(integer) 24
192.168.137.52:6379[6]> llen demo
(integer) 23
192.168.137.52:6379[6]> llen demo
(integer) 3
192.168.137.52:6379[6]> llen demo
(integer) 3
192.168.137.52:6379[6]> llen demo
(integer) 3
192.168.137.52:6379[6]> llen demo
(integer) 3
192.168.137.52:6379[6]> llen demo
(integer) 0
192.168.137.52:6379[6]> llen demo
(integer) 0
在elasticsearch中查看增長了redis-demo
ELK搭建以及使用大全
將all.conf的內容改成經由redis,編寫shipper.conf做爲redis收集logstash配置文件
[root@node1 ~]# cp all.conf shipper.conf

[root@node1 ~]# vim shipper.conf
input {
syslog {
type => "system-syslog"
host => "192.168.137.52"
port => "514"
}
tcp {
type => "tcp-6666"
host => "192.168.137.52"
port => "6666"
}
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
file {
path => "/var/log/nginx/access_json.log"
codec => json
start_position => "beginning"
type => "nginx-log"
}
file {
path => "/var/log/elasticsearch/check-cluster.log"
type => "es-error"
start_position => "beginning"
codec => multiline {
pattern => "^["
negate => true
what => "previous"
}
}
}
output {
if [type] == "system" {
redis {
host => "192.168.137.52"
port => "6379"
db => "6"
data_type => "list"
key => "system"
}
}
if [type] == "es-error" {
redis {
host => "192.168.137.52"
port => "6379"
db => "6"
data_type => "list"
key => "es-error"
}
}
if [type] == "nginx-log" {
redis {
host => "192.168.137.52"
port => "6379"
db => "6"
data_type => "list"
key => "nginx-log"
}
}
if [type] == "system-syslog" {
redis {
host => "192.168.137.52"
port => "6379"
db => "6"
data_type => "list"
key => "system-syslog"
}
}
if [type] == "tcp-6666" {
redis {
host => "192.168.137.52"
port => "6379"
db => "6"
data_type => "list"
key => "tcp-6666"
}
}
}
View the keys in Redis:
[root@node1 ~]# redis-cli -h 192.168.137.52
192.168.137.52:6379> select 6
OK
192.168.137.52:6379[6]> keys *
1) "system-syslog"
2) "es-error"
3) "system"
Write indexer.conf as the configuration that ships from Redis into Elasticsearch:
[root@node1 ~]# cat indexer.conf
input {
redis {
type => "system-syslog"
host => "192.168.137.52"
port => "6379"
db => "6"
data_type => "list"
key => "system-syslog"
}
redis {
type => "tcp-6666"
host => "192.168.137.52"
port => "6379"
db => "6"
data_type => "list"
key => "tcp-6666"
}
redis {
type => "system"
host => "192.168.137.52"
port => "6379"
db => "6"
data_type => "list"
key => "system"
}
redis {
type => "nginx-log"
host => "192.168.137.52"
port => "6379"
db => "6"
data_type => "list"
key => "nginx-log"
}
redis {
type => "es-error"
host => "192.168.137.52"
port => "6379"
db => "6"
data_type => "list"
key => "es-error"
}
}
output {
if [type] == "system" {
elasticsearch {
hosts => "192.168.137.52"
index => "system-%{+YYYY.MM.dd}"
}
}
if [type] == "es-error" {
elasticsearch {
hosts => "192.168.137.52"
index => "es-error-%{+YYYY.MM.dd}"
}
}
if [type] == "nginx-log" {
elasticsearch {
hosts => "192.168.137.52"
index => "nginx-log-%{+YYYY.MM.dd}"
}
}
if [type] == "system-syslog" {
elasticsearch {
hosts => "192.168.137.52"
index => "system-syslog-%{+YYYY.MM.dd}"
}
}
if [type] == "tcp-6666" {
elasticsearch {
hosts => "192.168.137.52"
index => "tcp-6666-%{+YYYY.MM.dd}"
}
}
}
Start shipper.conf:
[root@node1 ~]# /opt/logstash/bin/logstash -f shipper.conf
Settings: Default filter workers: 1
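The tcp-6666 input can be exercised with nc (assuming the nc package is installed):
[root@node1 ~]# echo "tcp input test" | nc 192.168.137.52 6666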
Because the log volume is small, everything is shipped to Elasticsearch quickly and the keys empty out, so write more data into the logs:
[root@node1 ~]# for n in `seq 10000`; do echo $n >> /var/log/elasticsearch/check-cluster.log; done
[root@node1 ~]# for n in `seq 10000`; do echo $n >> /var/log/nginx/access_json.log; done
[root@node1 ~]# for n in `seq 10000`; do echo $n >> /var/log/messages; done
Check the key lengths and watch them grow:
192.168.137.52:6379[6]> llen system
(integer) 24546
192.168.137.52:6379[6]> llen system
(integer) 30001
Start indexer.conf:
[root@node1 ~]# /opt/logstash/bin/logstash -f indexer.conf
#Check the key lengths and watch them shrink
192.168.137.52:6379[6]> llen system
(integer) 29990
192.168.137.52:6379[6]> llen system
(integer) 29958
192.168.137.52:6379[6]> llen system
(integer) 29732
Learning Logstash filter plugins: grok
There are many filter plugins; here we study grok, which uses regular expressions to split a log line into fields. In production, Apache logs cannot be emitted as JSON, so grok matching is the only option; MySQL slow-query logs likewise cannot be split any other way and must be parsed with grok regular expressions.
Many ready-made grok patterns are available on GitHub and can be referenced directly:
https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns
The installed Logstash also ships with grok patterns that can be referenced directly, at the following path:
[root@node1 patterns]# pwd
/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.2/patterns
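The pattern definitions are plain text, so the named patterns used below can be looked up directly; for example:
[root@node1 patterns]# grep -E '^(IP|WORD|NUMBER|URIPATHPARAM) ' grok-patterns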
Write grok.conf following the official documentation:
[root@node1 ~]# cat grok.conf
input {
stdin {}
}
filter {
grok {
match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
}
}
output {
stdout {
codec => "rubydebug"
}
}
Start Logstash and feed it the sample input from the official documentation; the line is split into fields as shown below.
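The official grok example input is the line below; typed into the running Logstash, it is split into client, method, request, bytes, and duration fields:
55.3.244.1 GET /index.html 15824 0.043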

Collecting MySQL slow-query logs with Logstash
Enable the slow-query log:
mysql> set global slow_query_log=ON;
mysql> set global long_query_time=2;
Check the settings:
mysql> show variables like "%slow%";
Import a slow log from a production MySQL; a sample entry looks like this:

# Time: 2018-05-24T14:28:07.192372Z
# User@Host: root[root] @ localhost []  Id:     3
# Query_time: 20.000612  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 0
SET timestamp=1527172087;
select sleep(20);
Handle the multi-line entries with multiline and write mysql-slow.conf:
[root@node1 ~]# cat mysql-slow.conf
input{
file {
path => "/var/lib/mysql/node1-slow.log"
type => "mysql-slow-log"
start_position => "beginning"
codec => multiline {
pattern => "^# User@Host:"
negate => true
what => "previous"
}
}
}
filter {
grok {
match => { "message" =>"SELECT SLEEP" }
add_tag => [ "sleep_drop" ]
tag_on_failure => [] # prevent default _grokparsefailure tag on real records
}
if "sleep_drop" in [tags] {
drop {}
}
grok {
match => [ "message", "(?m)^# User@Host: %{USER:user}[[^]]+] @ (?:(?<clienthost>\S) )?[(?:%{IP:clientip})?]\s+Id: %{NUMBER:row_id:int}\s# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s(?:use %{DATA:database};\s)?SET timestamp=%{NUMBER:timestamp};\s(?<query>(?<action>\w+)\s+.)\n#\s*" ]
}
date {
match => [ "timestamp", "UNIX" ]
remove_field => [ "timestamp" ]
}
}
output {
stdout{
codec => "rubydebug"
}
}
Run the configuration file and inspect the grok match result:
[root@node1 ~]# /opt/logstash/bin/logstash -f mysql-slow.conf
Taking ELK to production: log types and collection methods
System logs: rsyslog -> logstash syslog input
Access logs: nginx -> logstash, codec json
Error logs: file -> logstash file input + multiline
Run logs: file -> logstash, codec json
Device logs: syslog -> logstash syslog input
Debug logs: file -> logstash, json or multiline
Log standardization: 1) fixed, standardized paths 2) prefer JSON format
Collection rollout order: system logs first -> error logs -> run logs -> access logs.
Note: Logstash configuration files must not contain special characters.
