This article describes how to use Logstash to collect MySQL slow query logs, push them to Elasticsearch under a custom index, and finally display them on the web with Kibana.
Environment:
OS version: CentOS 6.6 64-bit
MySQL versions: MySQL 5.6.17 and MySQL 5.1.36
Logstash version: logstash-2.0.0.tar.gz
Elasticsearch version: elasticsearch-2.1.0.tar.gz
Kibana version: Kibana 4.2.1
Java version: 1.8.0_45
Part 1: MySQL 5.1.36
1: Configure the slow query log on MySQL 5.1.36. For testing, every query that takes longer than 0.1s is recorded in the slow query log.
mysql> show variables like '%slow%';
mysql> show variables like '%long%';
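If the slow log is not already enabled, the thresholds can be set at runtime. A minimal sketch, assuming a MySQL version that accepts fractional long_query_time (5.1.21 and later) and the /mydata/slow-query.log path used throughout this article:

mysql> set global slow_query_log = 1;                              -- enable the slow query log
mysql> set global slow_query_log_file = '/mydata/slow-query.log';  -- the file Logstash will watch
mysql> set global long_query_time = 0.1;                           -- log anything slower than 0.1s

Note that a global change to long_query_time only applies to connections opened after the change.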
2: Configure Logstash
# cat /usr/local/logstash/etc/logstash.conf
input {
    file {
        type => "mysql-slow"
        path => "/mydata/slow-query.log"
        codec => multiline {
            pattern => "^# User@Host:"
            negate => true
            what => "previous"
        }
    }
}
# The input section sets the log type to mysql-slow, points at the slow log path, and uses the
# multiline codec to merge each multi-line log entry into a single event. The negate option is a
# switch that inverts the pattern, selecting lines that do NOT match it.
filter {
    # drop sleep events
    grok {
        match => { "message" => "SELECT SLEEP" }
        add_tag => [ "sleep_drop" ]
        tag_on_failure => [] # prevent default _grokparsefailure tag on real records
    }
    if "sleep_drop" in [tags] {
        drop {}
    }
    # The block above filters out SQL statements that are just SELECT SLEEP calls.
    grok {
        match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s*# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)\n# Time:.*$" ]
    }
    date {
        match => [ "timestamp", "UNIX" ]
        remove_field => [ "timestamp" ]
    }
}
# The grok block defines the regex that carves up the slow log entries; it is easy to get dizzy reading it!
output {
    stdout {
        codec => rubydebug
    }
    elasticsearch {
        hosts => "192.168.1.226:9200"
        index => "mysql-server81-%{+YYYY.MM.dd}"
    }
}
# The output section defines two outputs: besides printing to the screen, events are also shipped
# to Elasticsearch under a custom index name.
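Before pointing Logstash at the live file, the long grok pattern is easier to debug interactively. The following is a sketch of a testing workflow of my own (the /tmp/slowlog-grok-test.conf file name is made up, not from the original setup): it reads entries pasted on stdin through the same multiline codec and prints the parsed fields with rubydebug.

# cat /tmp/slowlog-grok-test.conf
input {
    stdin {
        codec => multiline {
            pattern => "^# User@Host:"
            negate => true
            what => "previous"
        }
    }
}
filter {
    grok {
        # same pattern as in logstash.conf above
        match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s*# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)\n# Time:.*$" ]
    }
}
output {
    stdout { codec => rubydebug }
}
# /usr/local/logstash/bin/logstash -f /tmp/slowlog-grok-test.conf

Paste at least two consecutive slow-log entries: with what => "previous", the multiline codec only flushes an event once the next "# User@Host:" line arrives, and the pattern itself expects the trailing "# Time:" line of the following entry.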
3: Start it up and test
# /usr/local/logstash/bin/logstash -f /usr/local/logstash/etc/logstash.conf
# tail -f /mydata/slow-query.log
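To generate a slow entry for the test, run any statement that exceeds long_query_time. SELECT SLEEP(...) will not work here because the filter above deliberately drops it; a full scan of a reasonably large table does, for example the users_test table from the sample output below (assuming a comparable table exists on your instance):

mysql> select * from users_test;   -- full scan, ~1M rows per the sample below

The query should then show up both in the tail output and, via rubydebug, on the Logstash console.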
Part 2: MySQL 5.6.17
Because the MySQL 5.6.17 slow log adds an Id field, the grok regex has to be adjusted.
MySQL 5.1.36 slow log:
# tail -f /mydata/slow-query.log
# Time: 151202 17:29:24
# User@Host: root[root] @ [192.168.1.156]
# Query_time: 6.578696 Lock_time: 0.000039 Rows_sent: 999424 Rows_examined: 999424
SET timestamp=1449048564;
select * from users_test;
MySQL 5.6.17 slow log: compared with the 5.1.36 output, there is an extra Id: 84589 field.
# tail -f /mydata/slow-query.log
# Time: 151202 16:09:54
# User@Host: root[root] @ [192.168.1.156] Id: 84589
# Query_time: 7.089324 Lock_time: 0.000112 Rows_sent: 1 Rows_examined: 33554432
SET timestamp=1449043794;
select count(*) from t1;
As an aside, Percona Server 5.5.34 was also tested earlier; its slow query log adds four more fields: Thread_id, Schema, Last_errno, and Killed.
# tail -f /mydata5.5/slow-query.log
# User@Host: root[root] @ [192.168.1.228]
# Thread_id: 1164217 Schema: mgr Last_errno: 0 Killed: 0
# Query_time: 0.371185 Lock_time: 0.000056 Rows_sent: 0 Rows_examined: 0 Rows_affected: 2 Rows_read: 0
# Bytes_sent: 11
SET timestamp=1449105655;
REPLACE INTO edgemgr_dbcache(id, type, data, expire_time) VALUES(UNHEX('ec124ee5766c4a31819719c645dab895'), 'sermap', '{\"storages\":{\"sg1-s1\":[{\"download_port\":9083,\"p2p_port\":9035,\"rtmp_port\":9035,\"addr\":\"{\\\"l\\\":{\\\"https://192.168.1.227:9184/storage\\\":\\\"\\\"},\\\"m\\\":{},\\\"i\\\":{\\\"https://192.168.1.227:9184/storage\\\":\\\"\\\"}}\",\"cpu\":6,\"mem\":100,\"bandwidth\":0,\"disk\":0,\"dead\":0}]},\"lives\":{}}', '2016-01-02 09:20:55');
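The article stops at observing the extra Percona fields, but if you wanted to parse them too, a grok block along these lines might work (an untested sketch; the field names thread_id, schema, last_errno, and killed are my own choices):

grok {
    # hypothetical pattern for the Percona Server 5.5 header lines shown above
    match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s*# Thread_id: %{NUMBER:thread_id:int}\s+Schema: %{DATA:schema}\s+Last_errno: %{NUMBER:last_errno:int}\s+Killed: %{NUMBER:killed:int}\s*# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}.*SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)$" ]
}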
So for 5.6.17, all that is needed is to change the grok block in logstash.conf as follows and restart the Logstash process.
grok {
    match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s*Id: %{NUMBER:id:int}\s+# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)\n# Time:.*$" ]
}
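If a single Logstash instance has to consume logs from both MySQL versions, an alternative (my own variation, not from the article) is to make the Id portion optional so one pattern matches both formats:

grok {
    # (?:Id: ...)? makes the Id field optional, so 5.1 and 5.6 entries both match
    match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s*(?:Id: %{NUMBER:id:int}\s+)?# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)\n# Time:.*$" ]
}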
Kibana log output:
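To browse the events in Kibana 4, create an index pattern that matches the custom index name from the output section, for example mysql-server81-* under Settings, Indices; the slow-query events then show up on the Discover tab.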