1. Introduction to the ELK log-analysis stack
1) Elasticsearch
1.1) What is Elasticsearch
Elasticsearch is a search server built on Lucene. It provides a distributed, multi-tenant full-text search engine with a RESTful web interface. Written in Java and released as open source under the Apache license, it is the second most popular enterprise search engine. It is designed for cloud environments and delivers near-real-time search that is stable, reliable, fast, and easy to install and use.
1.2) Key Elasticsearch terms
- NRT
Elasticsearch is a near-real-time search platform: there is a slight delay, normally about one second, between indexing a document and it becoming searchable.
- Cluster
A cluster is one or more nodes that store your data. One of the nodes acts as the master, which can be chosen by election, and the cluster provides federated indexing and search across all nodes. Each cluster has a unique name, elasticsearch by default. The cluster name matters: every node joins its cluster by that name, so be sure to use different cluster names in different environments. A cluster can consist of just one node, but configuring Elasticsearch in cluster mode is strongly recommended.
- Node
A node is a single server that is part of the cluster; it stores data and takes part in the cluster's indexing and search. Like a cluster, a node is identified by a name, which by default is a random string assigned at startup; you can of course define your own. The name matters too, since it is how you recognize which server corresponds to which node in the cluster.
A node joins a cluster by specifying the cluster name. By default every node is configured to join a cluster named elasticsearch, so if you start several nodes and they can discover one another, they will automatically form a cluster called elasticsearch.
- Index
An index is a collection of documents with somewhat similar characteristics, for example an nginx-log index or a syslog index. An index is identified by a name, which must be all lowercase; that name is used when indexing, searching, updating, and deleting its documents.
An index corresponds to a database in a relational database.
- Type
Within an index you can define one or more types. Whether a type is a logical category or a partition is entirely up to you; usually a type is defined for documents that share a common set of fields. For example, all data generated by ttlsa operations might be stored in a single index named logstash-ttlsa, with separate types defined for user data, post data, and comment data.
A type corresponds to a table in a relational database.
- Document
A document is the basic unit of information that can be indexed, and it is expressed in JSON.
Within a type you can store as many documents as you need.
Although a document physically resides in one index, it must in fact be indexed into an index and assigned a type.
A document corresponds to a row in a relational database.
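To make the index/type/document analogy concrete, a document is indexed with a single REST call. This is just a sketch assuming a cluster is already listening on 192.168.56.11:9200; the index name index-demo and type test are the same ones created later in this tutorial via the head plugin, and the field values are made up:

```shell
# Index one JSON document into index "index-demo", type "test", with id 1.
curl -XPUT 'http://192.168.56.11:9200/index-demo/test/1' -d '{
  "user": "chuck",
  "message": "hello elasticsearch"
}'
```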
- Shards and replicas
In practice an index may store more data than a single node's hardware can handle. For example, an index of a billion documents taking 1 TB of space may not fit on one node's disk, or searches against a single node may simply be too slow. To solve this, Elasticsearch can split an index into multiple shards; you define the number of shards when the index is created. Each shard is itself a fully functional, independent index that can live on any node in the cluster.
The two main reasons for sharding:
a) horizontal scaling, to increase storage capacity;
b) distributed, parallel operations across shards, to improve performance and throughput.
How shards are distributed and how search results are aggregated across them is handled entirely by Elasticsearch and is transparent to the user.
Network problems and other failures can strike at any time, so for robustness you are strongly advised to have a failover mechanism in place in case a shard or node becomes unavailable.
For this purpose Elasticsearch lets you make one or more copies of an index's shards, called replica shards or simply replicas.
Replicas also exist for two main reasons:
a) high availability in case a shard or node fails; for this reason a replica must live on a different node than its primary shard;
b) better performance and higher throughput, since searches can run on all replicas in parallel.
In short, every index can be split into multiple shards, and an index can have zero or more replicas. Once replicated, each index has primary shards and replica shards (copied from the primaries). The number of shards and replicas can be defined per index at creation time. After the index is created you can change the number of replicas at any time, but you cannot change the number of shards.
By default Elasticsearch gives each index 5 primary shards and 1 replica, which means the cluster needs at least 2 nodes: the index will have 5 primary shards and 5 replica shards (one full copy), 10 shards in total. Each Elasticsearch shard is a Lucene index, and a single Lucene index has a maximum document count (see LUCENE-5843) of 2147483519 (MAX_VALUE - 128). Shard sizes can be monitored via _cat/shards.
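Shard and replica counts are set through the index-settings API. A sketch, assuming a cluster at 192.168.56.11:9200 and using the hypothetical index name nginx-log:

```shell
# Create an index with 3 primary shards and 2 replicas.
curl -XPUT 'http://192.168.56.11:9200/nginx-log' -d '{
  "settings": { "number_of_shards": 3, "number_of_replicas": 2 }
}'

# The replica count can be changed at any time; the shard count cannot.
curl -XPUT 'http://192.168.56.11:9200/nginx-log/_settings' -d '{
  "number_of_replicas": 1
}'
```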
2) Logstash
2.1) Introduction to Logstash
Logstash is written in JRuby, has a simple message-based architecture, and runs on the Java Virtual Machine (JVM). Rather than shipping separate agent and server programs, a single Logstash binary can be configured as an agent and combined with other open-source software to fulfil different roles.
2.2) Logstash's four main components
Shipper: sends events to Logstash; usually a remote agent only needs to run this component;
Broker and Indexer: receive and index events;
Search and Storage: allows events to be searched and stored;
Web Interface: a web-based display front end.
Because these components can be deployed independently within the Logstash architecture, it scales out well.
2.3) Logstash host roles
Agent host: acts as the event shipper, sending the various logs to the central host; it only needs to run the Logstash agent program.
Central host: runs any combination of the Broker, Indexer, Search and Storage, and Web Interface components to receive, process, and store the log data.
3) Kibana
Kibana is a fully open-source web front end for Elasticsearch: it lets you search, view, and visualize the logs that Logstash has collected and stored in Elasticsearch, so all your logs can be explored from one place.
2. Why use ELK for log analysis (operations pain points)
- Developers cannot log in to production servers to inspect logs in detail;
- every system produces its own logs, so the log data is scattered and hard to search;
- the volume of log data is large, queries are slow, and the data is not fresh enough.
3. Deploying the ELK log-analysis environment
1) Machines

Two virtual machines:
hostname: linux-node1 and linux-node2
IP addresses: 192.168.56.11 and 192.168.56.12
2) System environment (identical on both hosts)

[root@linux-node2 ~]# cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)
[root@linux-node2 ~]# uname -a
Linux linux-node2 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@linux-node2 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.11 linux-node1.oldboyedu.com linux-node1
192.168.56.12 linux-node2.oldboyedu.com linux-node2
3) ELK preparation (identical on both hosts)

1) Install elasticsearch
Download and install the GPG key:
[root@linux-node2 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
Add the yum repository:
[root@linux-node2 ~]# vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Install elasticsearch:
[root@linux-node2 ~]# yum install -y elasticsearch

2) Install logstash
Download and install the GPG key:
[root@linux-node2 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
Add the yum repository:
[root@linux-node2 ~]# vim /etc/yum.repos.d/logstash.repo
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Install logstash:
[root@linux-node2 ~]# yum install -y logstash

3) Install kibana
[root@linux-node1 ~]# cd /usr/local/src
[root@linux-node1 src]# wget https://download.elastic.co/kibana/kibana/kibana-4.3.1-linux-x64.tar.gz
[root@linux-node1 src]# tar -zvxf kibana-4.3.1-linux-x64.tar.gz
[root@linux-node1 src]# mv kibana-4.3.1-linux-x64 /usr/local/
[root@linux-node1 src]# ln -s /usr/local/kibana-4.3.1-linux-x64/ /usr/local/kibana
Install Redis, nginx, and java:
[root@linux-node1 ~]# yum install -y redis nginx java
4) Configuring and managing elasticsearch
4.1) Managing elasticsearch on linux-node1

Edit the elasticsearch configuration file and set ownership on the data directory:
[root@linux-node1 src]# grep -n '^[a-Z]' /etc/elasticsearch/elasticsearch.yml
17:cluster.name: chuck-cluster            # nodes with the same cluster name join the same cluster
23:node.name: linux-node1                 # the node's hostname
33:path.data: /data/es-data               # data directory
37:path.logs: /var/log/elasticsearch/     # log directory
43:bootstrap.mlockall: true               # lock the process memory so it is never swapped out
54:network.host: 0.0.0.0                  # addresses allowed to connect
58:http.port: 9200                        # HTTP port
[root@linux-node1 ~]# mkdir -p /data/es-data
[root@linux-node1 src]# chown elasticsearch.elasticsearch /data/es-data/
Start elasticsearch:
[root@linux-node1 src]# systemctl start elasticsearch
[root@linux-node1 src]# systemctl enable elasticsearch
ln -s '/usr/lib/systemd/system/elasticsearch.service' '/etc/systemd/system/multi-user.target.wants/elasticsearch.service'
[root@linux-node1 src]# systemctl status elasticsearch
elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled)
   Active: active (running) since Thu 2016-01-14 09:30:25 CST; 14s ago
     Docs: http://www.elastic.co
 Main PID: 37954 (java)
   CGroup: /system.slice/elasticsearch.service
           └─37954 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConc...
Jan 14 09:30:25 linux-node1 systemd[1]: Starting Elasticsearch...
Jan 14 09:30:25 linux-node1 systemd[1]: Started Elasticsearch.
[root@linux-node1 src]# netstat -lntup|grep 9200
tcp6       0      0 :::9200      :::*      LISTEN      37954/java
Visiting port 9200 in a browser displays the node information.
Interacting with elasticsearch:

1) The two ways to interact
a) Java API:
   node client
   Transport client
b) RESTful API, with clients for JavaScript, .NET, PHP, Perl, Python, Ruby, and more.
2) Interacting through the RESTful API
Check the current index and shard state (a plugin will visualize this shortly):
[root@linux-node1 src]# curl -i -XGET 'http://192.168.56.11:9200/_count?pretty' -d '{
"query": {
    "match_all": {}
}
}'
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 95
{
  "count" : 0,          # 0 documents
  "_shards" : {         # shard summary
    "total" : 0,
    "successful" : 0,   # 0 successful
    "failed" : 0        # 0 failed
  }
}
Use the head plugin to display indices and shards:
[root@linux-node1 src]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head

In the head plugin, create an index-demo/test index and submit the request.
Send a GET request (other request types work too) to query the document id created above.
Use the basic query tab to inspect the newly created index.
4.2) Managing elasticsearch on linux-node2
Copy the configuration file from linux-node1 to linux-node2, edit it, and set ownership on the data directory. The cluster.name in the configuration must be identical on both nodes; when a node starts it uses multicast by default to find the other members of its cluster:

[root@linux-node1 src]# scp /etc/elasticsearch/elasticsearch.yml 192.168.56.12:/etc/elasticsearch/elasticsearch.yml
[root@linux-node2 elasticsearch]# sed -i '23s#node.name: linux-node1#node.name: linux-node2#g' elasticsearch.yml
[root@linux-node2 elasticsearch]# mkdir -p /data/es-data
[root@linux-node2 elasticsearch]# chown elasticsearch.elasticsearch /data/es-data/
Start elasticsearch:
[root@linux-node2 elasticsearch]# systemctl enable elasticsearch.service
ln -s '/usr/lib/systemd/system/elasticsearch.service' '/etc/systemd/system/multi-user.target.wants/elasticsearch.service'
[root@linux-node2 elasticsearch]# systemctl start elasticsearch.service
[root@linux-node2 elasticsearch]# systemctl status elasticsearch.service
elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled)
   Active: active (running) since Thu 2016-01-14 02:56:35 CST; 4s ago
     Docs: http://www.elastic.co
  Process: 38519 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
 Main PID: 38520 (java)
   CGroup: /system.slice/elasticsearch.service
           └─38520 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConc...
Jan 14 02:56:35 linux-node2 systemd[1]: Starting Elasticsearch...
Jan 14 02:56:35 linux-node2 systemd[1]: Started Elasticsearch.
Add the following to the elasticsearch configuration so that the nodes discover each other by unicast (multicast was tried but did not take effect):
[root@linux-node1 ~]# grep -n "^discovery" /etc/elasticsearch/elasticsearch.yml
79:discovery.zen.ping.unicast.hosts: ["linux-node1", "linux-node2"]
[root@linux-node1 ~]# systemctl restart elasticsearch.service
View the shard information in the browser: each index is split into 5 shards by default (the number is tunable). In the screenshot, the shards outlined in green are primaries and the unmarked ones are replicas. If a primary shard is lost, its replica is promoted to primary, which provides high availability; primary and replica shards can also be load-balanced to speed up queries. If both the primary and all its replicas are lost, however, the index is lost for good.
4.3) Monitoring elasticsearch with the kopf plugin

[root@linux-node1 bin]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf

The screenshot below shows each node's load, CPU usage, JVM heap usage, disk usage, and uptime.
Beyond that, kopf also offers a REST API and more. A similar plugin is bigdesk, but bigdesk does not yet support 2.1! It would be installed as follows:

# /usr/share/elasticsearch/bin/plugin install lukas-vlcek/bigdesk
4.4) Inter-node multicast discovery and shards
When a node starts, it multicasts to discover other nodes, and when it finds one with the same cluster name it joins that cluster automatically. You can connect to any node, not just the master; the node you connect to merely aggregates and presents the cluster information.
The number of shards can be chosen when an index is created, but once set it cannot be changed. If both the primary and replica shards are lost, the data is gone and cannot be recovered, so useless indices can simply be deleted. Old or rarely used indices should be removed periodically, otherwise they exhaust cluster resources, eat disk space, and slow down searches. If you do not want to delete an index yet, you can close it from the plugin so that it no longer consumes memory.
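Deleting or closing an index is a single REST call. A sketch, assuming a cluster at 192.168.56.11:9200 and the hypothetical index name system-2016.01.01:

```shell
# Delete an old index outright:
curl -XDELETE 'http://192.168.56.11:9200/system-2016.01.01'

# Or close it so it stops using memory but can be reopened later:
curl -XPOST 'http://192.168.56.11:9200/system-2016.01.01/_close'
```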
5) Configuring Logstash
5.1) Learning Logstash step by step
Start a logstash instance. Here -e means run the configuration given on the command line; input/stdin and output/stdout are plugins for standard input and standard output:

[root@linux-node1 bin]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'
Settings: Default filter workers: 1
Logstash startup completed
chuck                                                    ==> input
2016-01-14T06:01:07.184Z linux-node1 chuck               ==> output
www.chuck-blog.com                                       ==> input
2016-01-14T06:01:18.581Z linux-node1 www.chuck-blog.com  ==> output
Use rubydebug for detailed output; a codec is an encoder/decoder:

[root@linux-node1 bin]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug} }'
Settings: Default filter workers: 1
Logstash startup completed
chuck                  ==> input
{
       "message" => "chuck",
      "@version" => "1",
    "@timestamp" => "2016-01-14T06:07:50.117Z",
          "host" => "linux-node1"
}                      ==> rubydebug output
Each piece of output above is called an event; several related lines of output can also be merged into one event (for example, the consecutive lines of a single multi-line log entry).
Use logstash to write events into elasticsearch:

[root@linux-node1 bin]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.56.11:9200"] } }'
Settings: Default filter workers: 1
Logstash startup completed
maliang
chuck
chuck-blog.com
www.chuck-blog.com
Check elasticsearch for the new index that logstash created.
You can write to elasticsearch and at the same time emit a copy locally; keeping a plain-text copy on disk also means you no longer need a scheduled job to back elasticsearch up to a remote host. Keeping text files has three big advantages: 1) text is the simplest format; 2) text can be reprocessed later; 3) text compresses best.

[root@linux-node1 bin]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.56.11:9200"] } stdout{ codec => rubydebug } }'
Settings: Default filter workers: 1
Logstash startup completed
www.shibo.com
{
       "message" => "www.shibo.com",
      "@version" => "1",
    "@timestamp" => "2016-01-14T06:27:49.014Z",
          "host" => "linux-node1"
}
www.huihui.co
{
       "message" => "www.huihui.co",
      "@version" => "1",
    "@timestamp" => "2016-01-14T06:27:58.058Z",
          "host" => "linux-node1"
}
Start logstash from a configuration file; this too writes a copy into elasticsearch:

[root@linux-node1 ~]# cat normal.conf
input { stdin { } }
output {
    elasticsearch { hosts => ["localhost:9200"] }
    stdout { codec => rubydebug }
}
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f normal.conf
Settings: Default filter workers: 1
Logstash startup completed
123
{
       "message" => "123",
      "@version" => "1",
    "@timestamp" => "2016-01-14T06:51:13.411Z",
          "host" => "linux-node1"
}
5.2) Learning the conf file format
- Input plugin configuration, using file as an example; several inputs can be defined:

input {
    file {
        path => "/var/log/messages"
        type => "syslog"
    }
    file {
        path => "/var/log/apache/access.log"
        type => "apache"
    }
}
- Several ways to specify which files to collect: paths can be given as arrays, can use * globbing, and multiple path entries can be written:

path => ["/var/log/messages", "/var/log/*.log"]
path => ["/data/mysql/mysql.log"]
- Setting a boolean value:

ssl_enable => true
- File size units:

my_bytes => "1113"    # 1113 bytes
my_bytes => "10MiB"   # 10485760 bytes
my_bytes => "100kib"  # 102400 bytes
my_bytes => "180 mb"  # 180000000 bytes
- JSON codec:

codec => "json"
- Hash values:

match => {
    "field1" => "value1"
    "field2" => "value2"
    ...
}
- Ports:

port => 33
- Passwords:

my_password => "password"
5.3) The file input plugin
Options of the file input plugin:
sincedb_path: path of the file in which logstash records its read position
start_position: beginning or end; where to start collecting, default end (from the tail)
add_field: add a field to each event
discover_interval: how often to look for new files matching the path, default 15 seconds
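A sketch of a file input combining these options; the sincedb path and the log_source field value are just examples:

```conf
input {
    file {
        path => "/var/log/messages"
        start_position => "beginning"
        sincedb_path => "/var/lib/logstash/sincedb-messages"
        add_field => { "log_source" => "linux-node1" }
        discover_interval => 15
    }
}
```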
5.4) The file output plugin
5.5) Writing a conf file with input and output plugins
- A conf that collects the system log:

[root@linux-node1 ~]# cat system.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.56.11:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f system.conf
- Collecting elasticsearch's error log
Here the system log above and this error log (a Java program's log) are handled together: an if conditional writes the two log types to different indices. The type field (it must literally be named type) must not clash with any field in the log format itself; in other words, the log must not already contain a field named type.
[root@linux-node1 ~]# cat all.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/elasticsearch/chuck-cluster.log"
        type => "es-error"
        start_position => "beginning"
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.56.11:9200"]
            index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error" {
        elasticsearch {
            hosts => ["192.168.56.11:9200"]
            index => "es-error-%{+YYYY.MM.dd}"
        }
    }
}
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f all.conf
5.6) Collecting a multi-line error into a single event
An example: lines beginning with "at org" all belong to the same Java stack trace, yet they appear on separate lines, which makes the log awkward to read, so they need to be merged into one event.
- The multiline codec plugin:

input {
    stdin {
        codec => multiline {
            pattern => "pattern, a regexp"
            negate => "true" or "false"
            what => "previous" or "next"
        }
    }
}

pattern: the regular expression that decides when lines are merged
negate: whether the pattern is matched positively or inverted
what: merge with the previous line or the next line
Test on standard input and output to prove that multiple lines are collected into one event:

[root@linux-node1 ~]# cat muliline.conf
input {
    stdin {
        codec => multiline {
            pattern => "^\["
            negate => true
            what => "previous"
        }
    }
}
output {
    stdout {
        codec => "rubydebug"
    }
}
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f muliline.conf
Settings: Default filter workers: 1
Logstash startup completed
[1
[2
{
    "@timestamp" => "2016-01-15T06:46:10.712Z",
       "message" => "[1",
      "@version" => "1",
          "host" => "linux-node1"
}
chuck
chuck-blog.com
123456
[3
{
    "@timestamp" => "2016-01-15T06:46:16.306Z",
       "message" => "[2\nchuck\nchuck-blog\nchuck-blog.com\n123456",
      "@version" => "1",
          "tags" => [
        [0] "multiline"
    ],
          "host" => "linux-node1"
}
Now fold the result of this experiment into the es-error index in all.conf:

[root@linux-node1 ~]# cat all.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/elasticsearch/chuck-cluster.log"
        type => "es-error"
        start_position => "beginning"
        codec => multiline {
            pattern => "^\["
            negate => true
            what => "previous"
        }
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.56.11:9200"]
            index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error" {
        elasticsearch {
            hosts => ["192.168.56.11:9200"]
            index => "es-error-%{+YYYY.MM.dd}"
        }
    }
}
6) Getting to know Kibana
6.1) Edit the kibana configuration file

[root@linux-node1 ~]# grep '^[a-Z]' /usr/local/kibana/config/kibana.yml
server.port: 5601                                # kibana port
server.host: "0.0.0.0"                           # address to serve on
elasticsearch.url: "http://192.168.56.11:9200"   # address of elasticsearch
kibana.index: ".kibana"                          # the .kibana index created in elasticsearch
Open a screen session and start kibana inside it:

[root@linux-node1 ~]# screen
[root@linux-node1 ~]# /usr/local/kibana/bin/kibana
Note: detach from the screen session with Ctrl+a then d.
6.2) Verify that the multiline codec works for the error log
Add an es-error index in kibana.
The default fields are visible.
Choose Discover to view the events and confirm that the multiline codec on the error log took effect.
7) Collecting nginx, syslog, and tcp logs with Logstash
7.1) Collecting the nginx access log
Here the json codec splits the log into fields as key-value pairs, which makes the log format clearer and easier to search, and also lowers CPU load.
Change the log format in the nginx configuration file to JSON:
[root@linux-node1 ~]# sed -n '15,33p' /etc/nginx/nginx.conf
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    log_format json '{ "@timestamp": "$time_local", '
                    '"@fields": { '
                    '"remote_addr": "$remote_addr", '
                    '"remote_user": "$remote_user", '
                    '"body_bytes_sent": "$body_bytes_sent", '
                    '"request_time": "$request_time", '
                    '"status": "$status", '
                    '"request": "$request", '
                    '"request_method": "$request_method", '
                    '"http_referrer": "$http_referer", '
                    '"http_x_forwarded_for": "$http_x_forwarded_for", '
                    '"http_user_agent": "$http_user_agent" } }';
    # access_log  /var/log/nginx/access.log  main;
    access_log  /var/log/nginx/access_json.log  json;
Start nginx:

[root@linux-node1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@linux-node1 ~]# nginx
[root@linux-node1 ~]# netstat -lntup|grep 80
tcp     0    0 0.0.0.0:80    0.0.0.0:*    LISTEN    43738/nginx: master
tcp6    0    0 :::80         :::*         LISTEN    43738/nginx: master
The resulting log lines look like the following.
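With the JSON log format above, each access-log line is a single JSON object whose fields are directly addressable. A sketch of parsing such a line (the sample values are made up):

```python
import json

# A sample line in the JSON log format defined above (values are made up).
line = ('{ "@timestamp": "18/Jan/2016:10:00:00 +0800", '
        '"@fields": { "remote_addr": "192.168.56.1", "status": "200", '
        '"request": "GET / HTTP/1.1" } }')

event = json.loads(line)
print(event["@fields"]["status"])       # prints: 200
print(event["@fields"]["remote_addr"])  # prints: 192.168.56.1
```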
Use logstash to collect the nginx access log, again folding it into all.conf.
Add the nginx-log index in kibana and display it.
7.2) Collecting the system syslog
Earlier we collected the system log /var/log/messages with the file input, but in a real production environment you should use the syslog input to receive the logs directly.
Edit the rsyslog configuration to forward log messages to port 514:

[root@linux-node1 ~]# vim /etc/rsyslog.conf
90 *.* @@192.168.56.11:514
Add system-syslog to all.conf and start it:

[root@linux-node1 ~]# cat all.conf
input {
    syslog {
        type => "system-syslog"
        host => "192.168.56.11"
        port => "514"
    }
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/nginx/access_json.log"
        codec => json
        start_position => "beginning"
        type => "nginx-log"
    }
    file {
        path => "/var/log/elasticsearch/chuck-cluster.log"
        type => "es-error"
        start_position => "beginning"
        codec => multiline {
            pattern => "^\["
            negate => true
            what => "previous"
        }
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.56.11:9200"]
            index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error" {
        elasticsearch {
            hosts => ["192.168.56.11:9200"]
            index => "es-error-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx-log" {
        elasticsearch {
            hosts => ["192.168.56.11:9200"]
            index => "nginx-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "system-syslog" {
        elasticsearch {
            hosts => ["192.168.56.11:9200"]
            index => "system-syslog-%{+YYYY.MM.dd}"
        }
    }
}
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f all.conf
The new system-syslog index now shows up in the elasticsearch head plugin.
7.3) Collecting tcp logs
Write tcp.conf:

[root@linux-node1 ~]# cat tcp.conf
input {
    tcp {
        host => "192.168.56.11"
        port => "6666"
    }
}
output {
    stdout {
        codec => "rubydebug"
    }
}

Write data to port 6666 with nc:

[root@linux-node1 ~]# nc 192.168.56.11 6666 </var/log/yum.log

Or write to the tcp pseudo-device:

[root@linux-node1 ~]# echo "chuck" >/dev/tcp/192.168.56.11/6666
8) Decoupling Logstash with a message queue
8.1) The message-queue architecture, illustrated
In the diagram above, the data flows as follows:
- the data source writes data into a Logstash input plugin;
- an output plugin writes the messages into the message queue;
- a Logstash indexing instance reads the messages back out of the queue with an input plugin;
- a filter plugin processes them and an output plugin writes them into elasticsearch. If grok regular-expression matching is not suitable in your production environment, you can instead write a Python script that reads from the message queue and writes to elasticsearch.
8.2) Advantages of this architecture
- loose coupling;
- shippers no longer need a direct network path to elasticsearch;
- the architecture is easy to evolve and extend;
- the queue can be rabbitmq, zeromq, and so on, or redis or kafka (which keeps messages rather than deleting them, but is more heavyweight).
9) Bringing redis into the architecture
9.1) Collecting logstash output into redis
Edit the redis configuration file and start redis:

[root@linux-node1 ~]# vim /etc/redis.conf
37 daemonize yes
65 bind 192.168.56.11
[root@linux-node1 ~]# systemctl start redis
[root@linux-node1 ~]# netstat -lntup|grep 6379
tcp    0    0 192.168.56.11:6379    0.0.0.0:*    LISTEN    45270/redis-server
Write redis-out.conf:

[root@linux-node1 ~]# cat redis-out.conf
input{
    stdin{
    }
}
output{
    redis{
        host => "192.168.56.11"
        port => "6379"
        db => "6"
        data_type => "list"    # store the messages in a redis list
        key => "demo"
    }
}
Start the configuration file and type some input:

[root@linux-node1 ~]# /opt/logstash/bin/logstash -f redis-out.conf
Settings: Default filter workers: 1
Logstash startup completed
chuck
chuck-blog
Connect to redis with redis-cli and inspect the input:

[root@linux-node1 ~]# redis-cli -h 192.168.56.11
192.168.56.11:6379> info    # show server information
# Server
redis_version:2.8.19
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:c0359e7aa3798aa2
redis_mode:standalone
os:Linux 3.10.0-229.el7.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.8.3
process_id:45270
run_id:83f428b96e87b7354249fe42bd19ee8a8643c94e
tcp_port:6379
uptime_in_seconds:1111
uptime_in_days:0
hz:10
lru_clock:10271973
config_file:/etc/redis.conf
# Clients
connected_clients:2
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:832048
used_memory_human:812.55K
used_memory_rss:5193728
used_memory_peak:832048
used_memory_peak_human:812.55K
used_memory_lua:35840
mem_fragmentation_ratio:6.24
mem_allocator:jemalloc-3.6.0
# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1453112484
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
# Stats
total_connections_received:2
total_commands_processed:2
instantaneous_ops_per_sec:0
total_net_input_bytes:164
total_net_output_bytes:9
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:9722
# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
# CPU
used_cpu_sys:1.95
used_cpu_user:0.40
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
# Keyspace
db6:keys=1,expires=0,avg_ttl=0
192.168.56.11:6379> select 6    # switch to db6
OK
192.168.56.11:6379[6]> keys *   # list keys; the demo key is there
1) "demo"
192.168.56.11:6379[6]> LINDEX demo -2    # view a message
"{\"message\":\"chuck\",\"@version\":\"1\",\"@timestamp\":\"2016-01-18T10:21:23.583Z\",\"host\":\"linux-node1\"}"
192.168.56.11:6379[6]> LINDEX demo -1    # view a message
"{\"message\":\"chuck-blog\",\"@version\":\"1\",\"@timestamp\":\"2016-01-18T10:25:54.523Z\",\"host\":\"linux-node1\"}"
To give the input plugin something to forward to elasticsearch in the next step, write some more data into redis:

[root@linux-node1 ~]# /opt/logstash/bin/logstash -f redis-out.conf
Settings: Default filter workers: 1
Logstash startup completed
chuck
chuck-blog
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
v
w
x
y
z

Check the length of the demo key in redis:

192.168.56.11:6379[6]> llen demo
(integer) 28
9.2) Sending the messages from redis into elasticsearch
Write redis-in.conf:

[root@linux-node1 ~]# cat redis-in.conf
input{
    redis {
        host => "192.168.56.11"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "demo"
    }
}
output{
    elasticsearch {
        hosts => ["192.168.56.11:9200"]
        index => "redis-demo-%{+YYYY.MM.dd}"
    }
}
Start the configuration file:

[root@linux-node1 ~]# /opt/logstash/bin/logstash -f redis-in.conf
Settings: Default filter workers: 1
Logstash startup completed

Keep re-checking the length of the demo key (it drains quickly, so refresh fast):

192.168.56.11:6379[6]> llen demo
(integer) 28
192.168.56.11:6379[6]> llen demo
(integer) 28
192.168.56.11:6379[6]> llen demo
(integer) 19    # the redis messages are being written into elasticsearch
192.168.56.11:6379[6]> llen demo
(integer) 7     # the redis messages are being written into elasticsearch
192.168.56.11:6379[6]> llen demo
(integer) 0
在elasticsearch中查看增長了redis-demo
9.3) 將all.conf的內容改成經由redis
編寫shipper.conf做爲redis收集logstash配置文件
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
|
[root@linux-node1 ~]
# cp all.conf shipper.conf
[root@linux-node1 ~]
# vim shipper.conf
input {
syslog {
type
=>
"system-syslog"
host =>
"192.168.56.11"
port =>
"514"
}
tcp {
type
=>
"tcp-6666"
host =>
"192.168.56.11"
port =>
"6666"
}
file
{
path =>
"/var/log/messages"
type
=>
"system"
start_position =>
"beginning"
}
file
{
path =>
"/var/log/nginx/access_json.log"
codec => json
start_position =>
"beginning"
type
=>
"nginx-log"
}
file
{
path =>
"/var/log/elasticsearch/chuck-cluster.log"
type
=>
"es-error"
start_position =>
"beginning"
codec => multiline {
pattern =>
"^\["
negate =>
true
what =>
"previous"
}
}
}
output {
if
[
type
] ==
"system"
{
redis {
host =>
"192.168.56.11"
port =>
"6379"
db =>
"6"
data_type =>
"list"
key =>
"system"
}
}
if
[
type
] ==
"es-error"
{
redis {
host =>
"192.168.56.11"
port =>
"6379"
db =>
"6"
data_type =>
"list"
key =>
"es-error"
}
}
if
[
type
] ==
"nginx-log"
{
redis {
host =>
"192.168.56.11"
port =>
"6379"
db =>
"6"
data_type =>
"list"
key =>
"nginx-log"
}
}
if
[
type
] ==
"system-syslog"
{
redis {
host =>
"192.168.56.11"
port =>
"6379"
db =>
"6"
data_type =>
"list"
key =>
"system-syslog"
}
}
if
[
type
] ==
"tcp-6666"
{
redis {
host =>
"192.168.56.11"
port =>
"6379"
db =>
"6"
data_type =>
"list"
key =>
"tcp-6666"
}
}
}
|
在redis中查看keys
1
2
3
4
5
6
|
192.168.56.11:6379[6]>
select
6
OK
192.168.56.11:6379[6]> keys *
1)
"system"
2)
"nginx-log"
3)
"tcp-6666"
|
Write indexer.conf as the logstash configuration that reads from redis and sends to elasticsearch:

[root@linux-node1 ~]# cat indexer.conf
input {
    redis {
        type => "system-syslog"
        host => "192.168.56.11"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "system-syslog"
    }
    redis {
        type => "tcp-6666"
        host => "192.168.56.11"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "tcp-6666"
    }
    redis {
        type => "system"
        host => "192.168.56.11"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "system"
    }
    redis {
        type => "nginx-log"
        host => "192.168.56.11"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "nginx-log"
    }
    redis {
        type => "es-error"
        host => "192.168.56.11"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "es-error"
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => "192.168.56.11"
            index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error" {
        elasticsearch {
            hosts => "192.168.56.11"
            index => "es-error-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx-log" {
        elasticsearch {
            hosts => "192.168.56.11"
            index => "nginx-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "system-syslog" {
        elasticsearch {
            hosts => "192.168.56.11"
            index => "system-syslog-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "tcp-6666" {
        elasticsearch {
            hosts => "192.168.56.11"
            index => "tcp-6666-%{+YYYY.MM.dd}"
        }
    }
}
Start shipper.conf:

[root@linux-node1 ~]# /opt/logstash/bin/logstash -f shipper.conf
Settings: Default filter workers: 1

Because the log volume is small, everything would otherwise be shipped to elasticsearch almost immediately and the keys would empty out, so generate more log data:

[root@linux-node1 ~]# for n in `seq 10000` ;do echo $n >>/var/log/elasticsearch/chuck-cluster.log;done
[root@linux-node1 ~]# for n in `seq 10000` ;do echo $n >>/var/log/nginx/access_json.log;done
[root@linux-node1 ~]# for n in `seq 10000` ;do echo $n >>/var/log/messages;done
Watch the key lengths grow:

(integer) 2481
192.168.56.11:6379[6]> llen system
(integer) 2613
192.168.56.11:6379[6]> llen system
(integer) 2795
192.168.56.11:6379[6]> llen system
(integer) 2960
Start indexer.conf:

[root@linux-node1 ~]# /opt/logstash/bin/logstash -f indexer.conf
Settings: Default filter workers: 1
Logstash startup completed
Watch the key lengths shrink:

192.168.56.11:6379[6]> llen nginx-log
(integer) 9680
192.168.56.11:6379[6]> llen nginx-log
(integer) 9661
192.168.56.11:6379[6]> llen nginx-log
(integer) 9661
192.168.56.11:6379[6]> llen system
(integer) 9591
192.168.56.11:6379[6]> llen system
(integer) 9572
192.168.56.11:6379[6]> llen system
(integer) 9562

View the nginx-log index in kibana.
10) Learning logstash filter plugins
10.1) Getting to know grok
We have covered input and output plugins; now for filter plugins.
There are many filter plugins; here we study grok, which uses regular expressions to split a log line into fields. In real production use, apache logs cannot be emitted as JSON, so grok matching is the only way to split them; the mysql slow-query log likewise cannot be split any other way, so grok regular expressions must be used.
Many ready-made grok templates on github can be referenced directly: https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns
An installed logstash also ships with grok matching rules that can be referenced directly, at this path:

[root@linux-node1 patterns]# pwd
/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.2/patterns
10.2) A grok.conf written from the official documentation

[root@linux-node1 ~]# cat grok.conf
input {
    stdin {}
}
filter {
    grok {
        match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
    }
}
output {
    stdout {
        codec => "rubydebug"
    }
}
Start logstash, feed it the sample input from the official documentation, and the split result is displayed as shown below.
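What grok does here can be sketched in plain Python with named regex groups; the sample line 55.3.244.1 GET /index.html 15824 0.043 is the one from the official grok documentation, and the regexes are rough stand-ins for the named grok patterns:

```python
import re

# Rough Python equivalents of the grok patterns used above.
pattern = re.compile(
    r'(?P<client>\d{1,3}(?:\.\d{1,3}){3}) '  # %{IP:client}
    r'(?P<method>\w+) '                       # %{WORD:method}
    r'(?P<request>\S+) '                      # %{URIPATHPARAM:request}
    r'(?P<bytes>\d+) '                        # %{NUMBER:bytes}
    r'(?P<duration>[\d.]+)'                   # %{NUMBER:duration}
)

line = "55.3.244.1 GET /index.html 15824 0.043"
fields = pattern.match(line).groupdict()
print(fields["client"], fields["method"], fields["duration"])  # prints: 55.3.244.1 GET 0.043
```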
10.3) Collecting the mysql slow-query log with logstash
Import a slow log from a production mysql server; the format looks like this:

# Time: 160108 15:46:14
# User@Host: dev_select_user[dev_select_user] @ [192.168.97.86] Id: 714519
# Query_time: 1.638396 Lock_time: 0.000163 Rows_sent: 40 Rows_examined: 939155
SET timestamp=1452239174;
SELECT DATE(create_time) as day,HOUR(create_time) as h,round(avg(low_price),2) as low_price
FROM t_actual_ad_num_log WHERE create_time>='2016-01-07' and ad_num<=10
GROUP BY DATE(create_time),HOUR(create_time);
Handle it with multiline and write mysql-slow.conf:

[root@linux-node1 ~]# cat mysql-slow.conf
input{
    file {
        path => "/root/slow.log"
        type => "mysql-slow-log"
        start_position => "beginning"
        codec => multiline {
            pattern => "^# User@Host:"
            negate => true
            what => "previous"
        }
    }
}
filter {
    # drop sleep events
    grok {
        match => { "message" => "SELECT SLEEP" }
        add_tag => [ "sleep_drop" ]
        tag_on_failure => []    # prevent the default _grokparsefailure tag on real records
    }
    if "sleep_drop" in [tags] {
        drop {}
    }
    grok {
        match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s+Id: %{NUMBER:row_id:int}\s*# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)\n#\s*" ]
    }
    date {
        match => [ "timestamp", "UNIX" ]
        remove_field => [ "timestamp" ]
    }
}
output {
    stdout{
        codec => "rubydebug"
    }
}
Run the configuration file and inspect the grok match result.
11) Taking ELK into production
11.1) Log classification

system logs       rsyslog    logstash syslog input
access logs       nginx      logstash json codec
error logs        file       logstash file input + multiline
application logs  file       logstash json codec
device logs       syslog     logstash syslog input
debug logs        file       logstash json or multiline

11.2) Log standardization
- fix the log paths and standardize them
- use JSON for the log format wherever possible