Add a pie chart that aggregates the status codes from the Nginx access logs.
https://www.elastic.co/guide/en/logstash/2.3/plugins-inputs-syslog.html
Note:
[root@elk01-node2 ~]# vim /etc/rsyslog.conf
# Change line 90 to the following:
90 *.* @@10.0.0.204:514
# The first *: log facility (type); the second *: log level.
# After the change, restart rsyslog:
[root@elk01-node2 ~]# systemctl restart rsyslog
[root@elk01-node2 conf.d]# cat rsyslog.conf
input {
    syslog {
        type => "rsyslog"
        port => 514
    }
}
filter {
}
output {
    stdout {
        codec => "rubydebug"
    }
}
Start Logstash in the foreground, then test.
[root@elk01-node2 conf.d]# /opt/logstash/bin/logstash -f ./rsyslog.conf
Settings: Default pipeline workers: 4
Pipeline main started
{
           "message" => "[system] Activating service name='org.freedesktop.problems' (using servicehelper)\n",
          "@version" => "1",
        "@timestamp" => "2017-05-04T19:31:44.000Z",
              "type" => "rsyslog",
              "host" => "10.0.0.204",
          "priority" => 29,
         "timestamp" => "May 4 15:31:44",
         "logsource" => "elk01-node2",
           "program" => "dbus",
               "pid" => "900",
          "severity" => 5,
          "facility" => 3,
    "facility_label" => "system",
    "severity_label" => "Notice"
}
...........
Test log entries can be generated with the logger command.
[root@elk01-node2 conf.d]# logger wangfei
.......
{
           "message" => "wangfei\n",
          "@version" => "1",
        "@timestamp" => "2017-05-04T19:43:19.000Z",
              "type" => "rsyslog",
              "host" => "10.0.0.204",
          "priority" => 13,
         "timestamp" => "May 4 15:43:19",
         "logsource" => "elk01-node2",
           "program" => "root",
          "severity" => 5,
          "facility" => 1,
    "facility_label" => "user-level",
    "severity_label" => "Notice"
}
[root@elk01-node2 conf.d]# cat rsyslog.conf
input {
    syslog {
        type => "rsyslog"
        port => 514
    }
}
filter {
}
output {
    #stdout {
    #    codec => "rubydebug"
    #}
    if [type] == "rsyslog" {
        elasticsearch {
            hosts => ["10.0.0.204:9200"]
            # Build the index by month, not by day: a daily index would create far
            # too many indices, and system logs do not need a new index every day.
            index => "rsyslog-%{+YYYY.MM}"
        }
    }
}
PS: remember to generate test logs with logger first; only then will the newly created index show up in ES.
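To see why the config above indexes by month rather than by day, here is a minimal Python sketch (an illustration only; it assumes Logstash's `%{+YYYY.MM}` date code corresponds to strftime's `%Y.%m`) counting how many distinct indices each naming scheme produces over a year:

```python
from datetime import date, timedelta

# One entry per day of 2017
days = [date(2017, 1, 1) + timedelta(n) for n in range(365)]

# Monthly naming, as in index => "rsyslog-%{+YYYY.MM}"
monthly = {d.strftime("rsyslog-%Y.%m") for d in days}
# Daily naming, as in index => "rsyslog-%{+YYYY.MM.dd}"
daily = {d.strftime("rsyslog-%Y.%m.%d") for d in days}

print(len(monthly), len(daily))  # 12 indices per year vs 365
```

For a low-volume source such as system logs, 12 indices a year is far easier on the cluster than 365.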
Omitted.
https://www.elastic.co/guide/en/logstash/2.3/plugins-inputs-tcp.html
Applicable scenarios:
[root@elk01-node2 conf.d]# cat tcp.conf
input {
    tcp {
        port => 6666        # listening port; pick your own
        type => "tcp"
        mode => "server"    # mode; "server" is the default
    }
}
filter {
}
output {
    stdout {
        codec => rubydebug
    }
}
Use the nc command to send the logs to the Logstash TCP server.
Method 1: pipe
[root@web01-node1 ~]# echo wf | nc 10.0.0.204 6666
Method 2: input redirection from a file
[root@web01-node1 ~]# nc 10.0.0.204 6666 < /etc/sysctl.conf
Method 3: the /dev/tcp pseudo-device
[root@web01-node1 ~]# echo "wangfei" > /dev/tcp/10.0.0.204/6666
Purpose: split the collected log content into separate fields.
For example, Nginx can write its logs in JSON format, but Apache cannot, so the grok filter is needed to parse them.
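To illustrate why JSON-formatted logs are so convenient: each line deserializes straight into fields, with no pattern matching at all. A minimal Python sketch (the log line is a made-up example, not Nginx's actual default JSON layout):

```python
import json

# A hypothetical Nginx access-log line written in JSON format
line = '{"remote_addr": "10.0.0.1", "status": 200, "request": "GET / HTTP/1.1"}'

event = json.loads(line)   # the fields come for free, no regex needed
print(event["status"])     # 200
```

Apache's plain-text combined log format offers no such shortcut, which is exactly what grok is for.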
Note:
https://www.elastic.co/guide/en/logstash/2.3/plugins-filters-grok.html
[root@elk01-node2 conf.d]# cat grok.conf
input {
    stdin {}
}
filter {
    grok {
        match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
Enter the following test log line:
55.3.244.1 GET /index.html 15824 0.043
Result:
{
       "message" => "55.3.244.1 GET /index.html 15824 0.043",
      "@version" => "1",
    "@timestamp" => "2017-05-04T23:59:15.836Z",
          "host" => "elk01-node2.damaiche.org-204",
        "client" => "55.3.244.1",
        "method" => "GET",
       "request" => "/index.html",
         "bytes" => "15824",
      "duration" => "0.043"
}
{
       "message" => "",
      "@version" => "1",
    "@timestamp" => "2017-05-04T23:59:16.000Z",
          "host" => "elk01-node2.damaiche.org-204",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}
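Conceptually, a grok pattern such as `%{IP:client} %{WORD:method} ...` behaves like a regular expression with named capture groups. A rough Python analogue (the sub-patterns below are simplified stand-ins, not grok's real IP/WORD/URIPATHPARAM/NUMBER definitions):

```python
import re

# Simplified stand-ins for grok's IP, WORD, URIPATHPARAM and NUMBER patterns
pattern = re.compile(
    r"(?P<client>\d{1,3}(?:\.\d{1,3}){3}) "
    r"(?P<method>\w+) "
    r"(?P<request>\S+) "
    r"(?P<bytes>\d+) "
    r"(?P<duration>[\d.]+)"
)

m = pattern.match("55.3.244.1 GET /index.html 15824 0.043")
print(m.groupdict())
# A line that does not match yields None -- the analogue of _grokparsefailure
```

This also makes the `_grokparsefailure` tag above easy to understand: the empty second line simply did not match the pattern.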
Print to the foreground:
[root@elk01-node2 conf.d]# cat grok.conf
input {
    file {
        path => "/var/log/httpd/access_log"
        start_position => "beginning"
    }
}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
Generate test logs:
[root@elk01-node2 httpd]# ab -n 10 http://10.0.0.204:8081/
Store into ES:
[root@elk01-node2 conf.d]# cat grok.conf
input {
    file {
        path => "/var/log/httpd/access_log"
        start_position => "beginning"
    }
}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}
output {
    #stdout {
    #    codec => rubydebug
    #}
    elasticsearch {
        hosts => ["10.0.0.204:9200"]
        index => "httpd-log-%{+YYYY.MM.dd}"
    }
}
Start it in the foreground, then generate test logs.
Result:
https://www.elastic.co/guide/en/logstash/2.3/deploying-and-scaling.html
Without a message queue in the architecture, the moment the ES system goes down, the whole pipeline is finished.
data - logstash - es
The decoupled architecture adds an MQ: even if ES goes down, the data still sits in Redis, so nothing is lost.
data - logstash - mq - logstash - es
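The decoupling can be sketched with a plain in-memory queue standing in for the Redis list (a hypothetical simulation, not Logstash code): the shipper keeps pushing even while the indexer side is down, and nothing is lost.

```python
from collections import deque

broker = deque()   # stands in for the Redis list used as the MQ

def shipper(lines):
    """Collector side: push events onto the broker (like RPUSH <key> <event>)."""
    for line in lines:
        broker.append(line)

def indexer(store):
    """Indexer side: drain the broker into storage (like LPOP in a loop)."""
    while broker:
        store.append(broker.popleft())

es = []
shipper(["line1", "line2", "line3"])  # events accumulate even if ES is down
indexer(es)                           # once ES is back, the backlog is drained
print(len(broker), len(es))           # 0 3
```

The queue absorbs the outage; the data waits in the broker instead of being dropped.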
Redis as input:
https://www.elastic.co/guide/en/logstash/2.3/plugins-inputs-redis.html#plugins-inputs-redis
Redis as output:
https://www.elastic.co/guide/en/logstash/2.3/plugins-outputs-redis.html
Notes:
web01-node1.damaiche.org-203: logstash (collects data)
elk01-node2.damaiche.org-204: logstash (indexer) + kibana + es + redis
On machine elk01-node2.damaiche.org-204:
yum -y install redis
Edit the configuration file:
vim /etc/redis.conf
 61 bind 10.0.0.204 127.0.0.1
128 daemonize yes
Start Redis:
systemctl restart redis
Test that Redis accepts writes normally:
redis-cli -h 10.0.0.204 -p 6379
10.0.0.204:6379> set name hehe
OK
10.0.0.204:6379> get name
"hehe"
On machine web01-node1.damaiche.org-203:
[root@web01-node1 conf.d]# cat redis.conf
input {
    stdin {}
    file {
        path => "/var/log/httpd/access_log"
        start_position => "beginning"
    }
}
output {
    redis {
        host => ['10.0.0.204']
        port => '6379'
        db => "6"
        key => "apache_access_log"
        data_type => "list"
    }
}
Log in to Redis and inspect:
10.0.0.204:6379> info    # show detailed Redis info
......
# Keyspace
db0:keys=1,expires=0,avg_ttl=0
db6:keys=2,expires=0,avg_ttl=0
10.0.0.204:6379> select 6    # switch to db 6
OK
10.0.0.204:6379[6]> keys *    # list all keys; never do this in production
1) "apache_access_log"
2) "demo"
10.0.0.204:6379[6]> type apache_access_log    # key type
list
10.0.0.204:6379[6]> llen apache_access_log    # list length
(integer) 10
10.0.0.204:6379[6]> lindex apache_access_log -1    # view the last element; useful when the
                                                   # program misbehaves and you want to see
                                                   # the most recent log entry
"{\"message\":\"10.0.0.204 - - [04/May/2017:21:07:17 -0400] \\\"GET / HTTP/1.0\\\" 403 4897 \\\"-\\\" \\\"ApacheBench/2.3\\\"\",\"@version\":\"1\",\"@timestamp\":\"2017-05-05T01:07:17.422Z\",\"path\":\"/var/log/httpd/access_log\",\"host\":\"elk01-node2.damaiche.org-204\"}"
Start it in the foreground.
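As the lindex output shows, what is stored in the Redis list is the Logstash event serialized as JSON, so it can be decoded for inspection. A small Python sketch (the string below is a shortened version of the stored event above):

```python
import json

# Shortened copy of the value returned by `lindex apache_access_log -1`
raw = ('{"message":"10.0.0.204 - - [04/May/2017:21:07:17 -0400] '
       '\\"GET / HTTP/1.0\\" 403 4897 \\"-\\" \\"ApacheBench/2.3\\"",'
       '"@version":"1","path":"/var/log/httpd/access_log"}')

event = json.loads(raw)
print(event["path"])   # /var/log/httpd/access_log
```

This is exactly what the indexer does on the other side: pop the element, decode the JSON, and hand the event to the filter/output stages.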
On machine elk01-node2.damaiche.org-204:
Test that data can be read from Redis normally (do not write to ES yet):
[root@elk01-node2 conf.d]# cat indexer.conf
input {
    redis {
        host => '10.0.0.204'
        port => '6379'
        db => "6"
        key => "apache_access_log"
        data_type => "list"
    }
}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
Generate data:
[root@elk01-node2 conf.d]# ab -n 100 -c 1 http://10.0.0.203:8081/
Now check the length of the apache_access_log key in Redis:
10.0.0.204:6379[6]> llen apache_access_log
(integer) 100
Start the finished indexer.conf in the foreground, then check the length of apache_access_log again:
10.0.0.204:6379[6]> llen apache_access_log
(integer) 0
A length of 0 means all the data has been taken out of Redis.
Now write the data into ES:
[root@elk01-node2 conf.d]# cat indexer.conf
input {
    redis {
        host => '10.0.0.204'
        port => '6379'
        db => "6"
        key => "apache_access_log"
        data_type => "list"
    }
}
filter {
    # When collecting several log files, there must be a type check here; otherwise
    # every log line is run through this match. grok is slow enough as it is, and
    # without the check it can grind the machine to a halt.
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}
output {
    elasticsearch {
        hosts => ["10.0.0.204:9200"]
        index => "apache-access-log-%{+YYYY.MM.dd}"
    }
}
Result:
Access logs: Apache access logs, Nginx access logs, Tomcat (file > filter)
Error logs: Java logs.
System logs: /var/log/*, syslog, rsyslog
Runtime logs: written by the application (JSON format)
Network logs: firewalls, switches, routers, and so on
Standardization: what format are the logs in (JSON?); how are they named; where are they stored (/data/logs/?); how are they rotated (by day? by hour? with what tool? a script plus crontab?)
Tooling: how should Logstash be used to implement the collection scheme?
Logs to collect: Nginx access logs, Apache access logs, ES logs, and the system messages log.
Roles:
10.0.0.203 web01-node1.damaiche.org-203: logstash (collects data)
10.0.0.204 elk01-node2.damaiche.org-204: logstash (indexer) + kibana + es + redis
[root@web01-node1 conf.d]# cat shipper.conf
input {
    file {
        path => "/var/log/nginx/access.log"
        start_position => "beginning"
        type => "nginx-access01-log"
    }
    file {
        path => "/var/log/httpd/access_log"
        start_position => "beginning"
        type => "apache-access01-log"
    }
    file {
        path => "/var/log/elasticsearch/myes.log"
        start_position => "beginning"
        type => "myes01-log"
        codec => multiline {
            pattern => "^\["
            negate => "true"
            what => "previous"
        }
    }
    file {
        path => "/var/log/messages"
        start_position => "beginning"
        type => "messages01"
    }
}
output {
    if [type] == "nginx-access01-log" {
        redis {
            host => "10.0.0.204"
            port => "6379"
            db => "3"
            data_type => "list"
            key => "nginx-access01-log"
        }
    }
    if [type] == "apache-access01-log" {
        redis {
            host => "10.0.0.204"
            port => "6379"
            db => "3"
            data_type => "list"
            key => "apache-access01-log"
        }
    }
    if [type] == "myes01-log" {
        redis {
            host => "10.0.0.204"
            port => "6379"
            db => "3"
            data_type => "list"
            key => "myes01-log"
        }
    }
    if [type] == "messages01" {
        redis {
            host => "10.0.0.204"
            port => "6379"
            db => "3"
            data_type => "list"
            key => "messages01"
        }
    }
}
[root@web01-node1 conf.d]# /etc/init.d/logstash start
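The `multiline` codec on the ES log file (`pattern => "^\["`, `negate => "true"`, `what => "previous"`) folds any line that does not start with `[` into the previous event, which is how a Java stack trace stays attached to the log line that produced it. A rough Python rendition of that rule (illustrative only, not the plugin's actual code; the log lines are made up):

```python
import re

def merge_multiline(lines, pattern=r"^\["):
    # negate=true, what=previous: a line NOT matching the pattern is
    # appended to the previous event instead of starting a new one
    events, start = [], re.compile(pattern)
    for line in lines:
        if start.match(line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

logs = [
    "[2017-05-04 21:00:01,000][INFO ][node] starting ...",
    "[2017-05-04 21:00:02,000][WARN ][cluster] exception",
    "java.lang.NullPointerException",     # continuation line
    "    at org.elasticsearch.Foo.bar",   # continuation line
]
print(len(merge_multiline(logs)))  # 2 events, the trace folded into the second
```

Without this codec, every line of a stack trace would arrive in ES as a separate, meaningless event.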
Note:
Log in to Redis to verify that each key was created as expected, and check the keys' lengths.
info
....
# Keyspace
db0:keys=1,expires=0,avg_ttl=0
db3:keys=4,expires=0,avg_ttl=0
db6:keys=5,expires=0,avg_ttl=0
10.0.0.204:6379> select 3
OK
10.0.0.204:6379[3]> keys *
1) "messages01"
2) "myes01-log"
3) "apache-access01-log"
4) "nginx-access01-log"
10.0.0.204:6379[3]> llen messages01
(integer) 20596
10.0.0.204:6379[3]> llen myes01-log
(integer) 2336
10.0.0.204:6379[3]> llen apache-access01-log
(integer) 92657
10.0.0.204:6379[3]> llen nginx-access01-log
(integer) 100820
[root@elk01-node2 conf.d]# cat indexer.conf
input {
    redis {
        host => "10.0.0.204"
        port => "6379"
        db => "3"
        data_type => "list"
        key => "nginx-access01-log"
    }
    redis {
        host => "10.0.0.204"
        port => "6379"
        db => "3"
        data_type => "list"
        key => "apache-access01-log"
    }
    redis {
        host => "10.0.0.204"
        port => "6379"
        db => "3"
        data_type => "list"
        key => "myes01-log"
    }
    redis {
        host => "10.0.0.204"
        port => "6379"
        db => "3"
        data_type => "list"
        key => "messages01"
    }
}
filter {
    if [type] == "apache-access01-log" {
        grok {
            match => { "message" => "%{COMBINEDAPACHELOG}" }
        }
    }
}
output {
    if [type] == "nginx-access01-log" {
        elasticsearch {
            hosts => ["10.0.0.204:9200"]
            index => "nginx-access01-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "apache-access01-log" {
        elasticsearch {
            hosts => ["10.0.0.204:9200"]
            index => "apache-access01-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "myes01-log" {
        elasticsearch {
            hosts => ["10.0.0.204:9200"]
            index => "myes01-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "messages01" {
        elasticsearch {
            hosts => ["10.0.0.204:9200"]
            index => "messages01-%{+YYYY.MM}"
        }
    }
}
If Redis lists are used as the message queue for the ELK stack, the length of every list key must be monitored, with an alert threshold chosen to suit the environment, for example firing once a list grows past 100,000 entries.
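A minimal sketch of such a check (an illustration, not production monitoring code): it works against any client object that exposes `llen`, so with redis-py you would pass a real `redis.StrictRedis(...)` instance instead of the stub; the key names and lengths mirror the `llen` output above, and the 100,000 threshold follows the note.

```python
THRESHOLD = 100_000  # alert once a list exceeds ~100k entries, per the note above

def check_queue_lengths(client, keys, threshold=THRESHOLD):
    """Return the list keys whose length exceeds the threshold."""
    return [k for k in keys if client.llen(k) > threshold]

class StubRedis:
    # Stand-in for a real Redis client; only llen is needed for this check
    def __init__(self, lengths):
        self.lengths = lengths
    def llen(self, key):
        return self.lengths.get(key, 0)

client = StubRedis({"nginx-access01-log": 100820, "messages01": 20596})
print(check_queue_lengths(client, ["nginx-access01-log", "messages01"]))
# ['nginx-access01-log']
```

A growing backlog usually means the indexer has stopped consuming or cannot keep up, which is exactly the failure this check is meant to catch early.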