In the previous article, "Quickly Building an SQL Auto-Review System for Small and Medium Teams", we automated SQL review and execution. It not only improved efficiency but also earned recognition from colleagues, which felt great. Collecting and handling slow queries, however, still eats up far too much of our time and energy. How can we be more efficient there as well? This article explains how to use ELK to collect MySQL slow logs.
ELK was originally short for three open-source tools: Elasticsearch (ES for short below), Logstash, and Kibana. The three projects were later brought under one company, which added components such as X-Pack and Beats and renamed the suite Elastic Stack. It has become today's most popular open-source logging solution. Despite the new name, people still like to call it ELK, and the ELK discussed here refers to a log system built on these open-source tools.
Our solution for collecting MySQL slow logs is as follows: filebeat tails the slow log on each DB server and ships it to kafka; logstash consumes from kafka, parses the entries, and writes them to elasticsearch; Kibana then handles querying and visualization.
The MySQL versions we mainly run are 5.5, 5.6, and 5.7. A careful comparison shows that each version's slow query log is slightly different, as follows:
The 5.5 slow query log:
```
# Time: 180810  8:45:12
# User@Host: select[select] @  [10.63.253.59]
# Query_time: 1.064555  Lock_time: 0.000054 Rows_sent: 1  Rows_examined: 319707
SET timestamp=1533861912;
SELECT COUNT(*) FROM hs_forum_thread t WHERE t.`fid`='50' AND t.`displayorder`>='0';
```
The 5.6 slow query log:
```
# Time: 160928 18:36:08
# User@Host: root[root] @ localhost []  Id:  4922
# Query_time: 5.207662  Lock_time: 0.000085 Rows_sent: 1  Rows_examined: 526068
use db_name;
SET timestamp=1475058968;
select count(*) from redeem_item_consume where id<=526083;
```
The 5.7 slow query log:
```
# Time: 2018-07-09T10:04:14.666231Z
# User@Host: bbs_code[bbs_code] @  [10.82.9.220]  Id: 9304381
# Query_time: 5.274805  Lock_time: 0.000052 Rows_sent: 0  Rows_examined: 2
SET timestamp=1531130654;
SELECT * FROM pre_common_session WHERE sid='Ba1cSC' OR lastactivity<1531129749;
```
How the slow logs compare across versions:

- The `# Time:` line format changed in 5.7 to an ISO 8601 timestamp.
- In 5.5 the `# User@Host:` line carries no `Id` field; 5.6 and 5.7 include one.
- The `use db` statement is not present in every slow log entry.
- A single `# Time:` line may be followed by multiple slow query statements, for example:

```
# Time: 160918  2:00:03
# User@Host: dba_monitor[dba_monitor] @  [10.63.144.82]  Id:   968
# Query_time: 0.007479  Lock_time: 0.000181 Rows_sent: 172  Rows_examined: 344
SET timestamp=1474135203;
SELECT table_schema as 'DB',table_name as 'TABLE',CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 *1024 ), 2), '') as 'TOTAL',TABLE_COMMENT FROM information_schema.TABLES ORDER BY data_length + index_length DESC;
# User@Host: dba_monitor[dba_monitor] @  [10.63.144.82]  Id:   969
# Query_time: 0.003303  Lock_time: 0.000395 Rows_sent: 233  Rows_examined: 233
SET timestamp=1474135203;
select TABLE_SCHEMA,TABLE_NAME,COLUMN_NAME,ORDINAL_POSITION,COLUMN_TYPE,ifnull(COLUMN_COMMENT,0) from COLUMNS where table_schema not in ('mysql','information_schema','performance_schema','test');
```
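As an aside, if the slow log isn't already enabled on a server, a minimal runtime setup might look like the sketch below. The threshold value is an illustrative assumption; the file path matches the one filebeat tails later in this article.

```sql
-- Illustrative only: turn on slow query logging at runtime.
SET GLOBAL slow_query_log = ON;
SET GLOBAL slow_query_log_file = '/home/opt/data/slow/mysql_slow.log';  -- path filebeat reads below
SET GLOBAL long_query_time = 1;  -- assumed threshold: log statements slower than 1 second
```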
Having analyzed how each version's slow query entries are structured, we can now start collecting the data. How exactly should it be collected?

- Drop the Time line: a line beginning with `# Time:` may not be present at all, and the SQL execution time can be derived from the `SET timestamp` value, so we filter out and discard Time lines.
- Assemble complete entries: merge everything from a line beginning with `# User@Host:` through the line ending with the SQL statement into one complete slow log entry.
- Identify the DB: the `use db` line does not appear in every slow log entry, so it cannot be relied on to determine which DB a statement ran against, and no other field in the slow log records the DB. We therefore recommend embedding the db name in the account when it is created; for example, our accounts are named projectName_dbName, so the account name alone tells you which DB it is.
- Identify the host: the log itself does not record the host either, so we have filebeat inject it by setting filebeat's `name` field to the server IP; the `beat.name` field on each event then identifies the host the SQL ran on.

The complete filebeat configuration file is as follows:
```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /home/opt/data/slow/mysql_slow.log
  exclude_lines: ['^\# Time']
  multiline.pattern: '^\# Time|^\# User'
  multiline.negate: true
  multiline.match: after
  tail_files: true
name: 10.82.9.89
output.kafka:
  hosts: ["10.82.9.202:9092","10.82.9.203:9092","10.82.9.204:9092"]
  topic: mysql_slowlog_v2
```
Key points in this configuration:

- exclude_lines drops lines beginning with `# Time`.
- multiline.pattern matches lines beginning with `# Time` or `# User`; Time lines must be matched here first, so that they start their own event instead of being glued onto the previous SQL, and are then filtered out by exclude_lines.
- multiline.negate: true together with multiline.match: after appends every non-matching line after the matching line, assembling one complete slow log entry.

The event finally written to kafka looks like this:

```json
{"@timestamp":"2018-08-07T09:36:00.140Z","beat":{"hostname":"db-7eb166d3","name":"10.63.144.71","version":"5.4.0"},"input_type":"log","message":"# User@Host: select[select] @ [10.63.144.16] Id: 23460596\n# Query_time: 0.155956 Lock_time: 0.000079 Rows_sent: 112 Rows_examined: 366458\nSET timestamp=1533634557;\nSELECT DISTINCT(uid) FROM common_member WHERE hideforum=-1 AND uid != 0;","offset":1753219021,"source":"/data/slow/mysql_slow.log","type":"log"}
```
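Before wiring up logstash, it can be worth confirming that events like the one above are actually landing in kafka. A quick look with the stock console consumer (script name and path vary by installation) might be:

```sh
# Read a few events from the slow-log topic to eyeball the JSON
kafka-console-consumer.sh \
  --bootstrap-server 10.82.9.202:9092 \
  --topic mysql_slowlog_v2 \
  --from-beginning --max-messages 5
```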
The complete logstash configuration file is as follows:
```
input {
    kafka {
        bootstrap_servers => "10.82.9.202:9092,10.82.9.203:9092,10.82.9.204:9092"
        topics => ["mysql_slowlog_v2"]
    }
}

filter {
    json {
        source => "message"
    }

    grok {
        # Id present, use present
        match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s+Id:\s%{NUMBER:id:int}\n# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\nuse\s(?<dbname>\w+);\nSET\s+timestamp=%{NUMBER:timestamp_mysql:int};\n(?<query>.*)" ]

        # Id present, no use
        match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s+Id:\s%{NUMBER:id:int}\n# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\nSET\s+timestamp=%{NUMBER:timestamp_mysql:int};\n(?<query>.*)" ]

        # Id absent, use present
        match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\n# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\nuse\s(?<dbname>\w+);\nSET\s+timestamp=%{NUMBER:timestamp_mysql:int};\n(?<query>.*)" ]

        # Id absent, no use
        match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\n# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\nSET\s+timestamp=%{NUMBER:timestamp_mysql:int};\n(?<query>.*)" ]
    }

    date {
        match => ["timestamp_mysql","UNIX"]
        target => "@timestamp"
    }
}

output {
    elasticsearch {
        hosts => ["10.82.9.208:9200","10.82.9.217:9200"]
        index => "mysql-slowlog-%{+YYYY.MM.dd}"
    }
}
```
A few notes on this configuration:

- The json block parses the JSON event shipped by filebeat, restoring fields such as message and beat.name.
- The grok block extracts User, Host, Query_time, Lock_time, timestamp, and so on from the slow log text. Following our earlier classification of the slow log variants, it holds a different regular expression for each; when several patterns are present, logstash tries them top to bottom and stops at the first one that matches.
- The date block makes the timestamp_mysql field extracted from the SQL the event's time field; the time-sorted data you see in Kibana relies on this timestamp.
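As a sanity check on these patterns, here is a rough Python re-implementation of the second grok expression (Id present, no use), run against the sample event from the kafka output above. It is an illustrative aside for offline testing, not part of the pipeline:

```python
import re

# Approximate Python equivalent of the "Id present, no use" grok pattern.
SLOW_RE = re.compile(
    r"^# User@Host: (?P<user>[\w.-]+)\[[^\]]+\] @ (?:(?P<clienthost>\S*) )?"
    r"\[(?P<clientip>[\d.]*)\]\s+Id:\s+(?P<id>\d+)\n"
    r"# Query_time: (?P<query_time>[\d.]+)\s+Lock_time: (?P<lock_time>[\d.]+)\s+"
    r"Rows_sent: (?P<rows_sent>\d+)\s+Rows_examined: (?P<rows_examined>\d+)\n"
    r"SET\s+timestamp=(?P<timestamp_mysql>\d+);\n(?P<query>.*)",
    re.DOTALL,
)

sample = (
    "# User@Host: select[select] @ [10.63.144.16] Id: 23460596\n"
    "# Query_time: 0.155956 Lock_time: 0.000079 Rows_sent: 112 Rows_examined: 366458\n"
    "SET timestamp=1533634557;\n"
    "SELECT DISTINCT(uid) FROM common_member WHERE hideforum=-1 AND uid != 0;"
)

match = SLOW_RE.match(sample)
if match:
    # Prints user, clientip, id, query_time, ... extracted as strings
    print(match.groupdict())
```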
Open Kibana, add an Index of mysql-slowlog-*, select @timestamp as the time field, and create the Index Pattern.
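If the index pattern doesn't show up, a direct check that logstash has created the daily indices (host taken from the output block above) might look like:

```sh
# List slow-log indices and their document counts
curl '10.82.9.208:9200/_cat/indices/mysql-slowlog-*?v'
```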
On the Discover page you can see at a glance how the number of slow logs changes over time, filter easily with the Fields on the left, and search slow logs conveniently in the search box. For example, to find slow logs that took more than 2 seconds, just type query_time: > 2 into the search box and press Enter.
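For reference, the equivalent query against the ES API directly (Kibana Dev Tools syntax, hosts as configured above) would be something like:

```
GET /mysql-slowlog-*/_search
{
  "query": {
    "range": { "query_time": { "gt": 2 } }
  }
}
```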
Click the arrow at the left of any log entry to view the details of that specific entry.
If you want a dashboard summarizing the overall slow log situation, such as a top 10 SQL list, that can also be configured easily through the web interface.
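Something similar can be sketched against the ES API as well, for instance a top 10 by statement text, assuming ES's default dynamic mapping has created a query.keyword sub-field:

```
GET /mysql-slowlog-*/_search
{
  "size": 0,
  "aggs": {
    "top_sql": {
      "terms": { "field": "query.keyword", "size": 10 }
    }
  }
}
```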