This environment continues from the previous article, "ELK Quick Start - Basic Deployment".
1) Write the logstash configuration file
[root@linux-elk1 ~]# vim /etc/logstash/conf.d/system-log.conf
input {
    file {
        path => "/var/log/messages"
        type => "systemlog"
        start_position => "beginning"
        stat_interval => "3"
    }
    file {
        path => "/var/log/secure"
        type => "securelog"
        start_position => "beginning"
        stat_interval => "3"
    }
}
output {
    if [type] == "systemlog" {
        elasticsearch {
            hosts => ["192.168.1.31:9200"]
            index => "system-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "securelog" {
        elasticsearch {
            hosts => ["192.168.1.31:9200"]
            index => "secure-log-%{+YYYY.MM.dd}"
        }
    }
}
2) Grant read permission on the log files and restart logstash
[root@linux-elk1 ~]# chmod 644 /var/log/secure
[root@linux-elk1 ~]# chmod 644 /var/log/messages
[root@linux-elk1 ~]# systemctl restart logstash
3) Write some data into the collected files, so the data shows up right away in the elasticsearch and kibana web interfaces.
[root@linux-elk1 ~]# echo "test" >> /var/log/secure
[root@linux-elk1 ~]# echo "test" >> /var/log/messages
4) Add the system-log index pattern in the kibana interface
5) Add the secure-log index pattern in the kibana interface
6) View the logs in kibana
Collect the Tomcat server's access log and error log for real-time statistics, searched and displayed in the kibana page. Every Tomcat server needs logstash installed to collect its logs and ship them to elasticsearch for analysis, with kibana presenting them at the front end.
Note: here Tomcat is installed on the linux-elk2 node.
1) Download and install tomcat
[root@linux-elk2 ~]# cd /usr/local/
[root@linux-elk2 local]# wget http://mirrors.tuna.tsinghua.edu.cn/apache/tomcat/tomcat-9/v9.0.21/bin/apache-tomcat-9.0.21.tar.gz
[root@linux-elk2 local]# tar xvzf apache-tomcat-9.0.21.tar.gz
[root@linux-elk2 local]# ln -s /usr/local/apache-tomcat-9.0.21 /usr/local/tomcat
2) Prepare a test page
[root@linux-elk2 local]# cd /usr/local/tomcat/webapps/
[root@linux-elk2 webapps]# mkdir webdir
[root@linux-elk2 webapps]# echo "<h1>Welcome to Tomcat</h1>" > /usr/local/tomcat/webapps/webdir/index.html
3) Convert the tomcat access log to JSON
[root@linux-elk2 tomcat]# vim /usr/local/tomcat/conf/server.xml
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log" suffix=".txt"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>
4) Start tomcat and make some test requests to generate log entries
[root@linux-elk2 tomcat]# /usr/local/tomcat/bin/startup.sh
[root@linux-elk2 tomcat]# ss -nlt |grep 8080
LISTEN     0      100       :::8080       :::*
[root@linux-elk2 tomcat]# ab -n100 -c100 http://192.168.1.32:8080/webdir/
[root@linux-elk2 ~]# tailf /usr/local/tomcat/logs/localhost_access_log.2019-07-05.log
{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"200","SendBytes":"27","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"200","SendBytes":"27","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}
5) Verify the log is valid JSON, for example with http://www.kjson.com/
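The check can also be done locally instead of pasting into a website; a minimal sketch, assuming python3 is installed, using one of the access-log lines from above:

```shell
# One access-log line copied from the output above
line='{"clientip":"192.168.1.32","ClientUser":"-","authenticated":"-","AccessTime":"[05/Jul/2019:16:39:18 +0800]","method":"GET /webdir/ HTTP/1.0","status":"200","SendBytes":"27","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}'
# json.tool exits non-zero on invalid JSON, so this only prints on success
printf '%s' "$line" | python3 -m json.tool > /dev/null && echo "valid JSON"
```

On a correctly formatted line this prints "valid JSON"; a broken pattern in server.xml would surface here immediately.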
Note: to collect tomcat logs from other servers, logstash must be installed on every server being collected from. Here tomcat was deployed on the linux-elk2 node, where logstash is already installed.
1) Configure logstash
[root@linux-elk2 ~]# vim /etc/logstash/conf.d/tomcat.conf
input {
    file {
        path => "/usr/local/tomcat/logs/localhost_access_log.*.log"
        type => "tomcat-access-log"
        start_position => "beginning"
        stat_interval => "2"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.1.31:9200"]
        index => "logstash-tomcat-132-accesslog-%{+YYYY.MM.dd}"
    }
    file {
        path => "/tmp/logstash-tomcat-132-accesslog-%{+YYYY.MM.dd}"
    }
}
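For reference, the `%{+YYYY.MM.dd}` suffix in both outputs is a Logstash sprintf date pattern rendered from each event's @timestamp (stored in UTC), so a new index and a new dump file are created per day. A rough, hypothetical shell equivalent of the index name Logstash would build today:

```shell
# Illustration only: the daily index name as a shell string
index="logstash-tomcat-132-accesslog-$(date -u +%Y.%m.%d)"
echo "$index"    # e.g. logstash-tomcat-132-accesslog-2019.07.05
```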
2) Check the configuration file syntax and restart logstash
[root@linux-elk2 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tomcat.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-07-05 17:04:34.583 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[root@linux-elk2 ~]# /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd
[root@linux-elk2 ~]# systemctl start logstash
3) Fix permissions; otherwise the data cannot be seen in the elasticsearch and kibana interfaces
[root@linux-elk2 ~]# ll /usr/local/tomcat/logs/ -d
drwxr-xr-x 2 root root 197 Jul  5 16:36 /usr/local/tomcat/logs/
[root@linux-elk2 ~]# ll /usr/local/tomcat/logs/
total 64
-rw-r----- 1 root root 14228 Jul  5 16:36 catalina.2019-07-05.log
-rw-r----- 1 root root 14228 Jul  5 16:36 catalina.out
-rw-r----- 1 root root     0 Jul  5 16:25 host-manager.2019-07-05.log
-rw-r----- 1 root root  1074 Jul  5 16:36 localhost.2019-07-05.log
-rw-r----- 1 root root 26762 Jul  5 17:23 localhost_access_log.2019-07-05.log
-rw-r----- 1 root root     0 Jul  5 16:25 manager.2019-07-05.log
[root@linux-elk2 ~]# chown logstash.logstash /usr/local/tomcat/logs/ -R
[root@linux-elk2 ~]# ll /usr/local/tomcat/logs/
total 64
-rw-r----- 1 logstash logstash 14228 Jul  5 16:36 catalina.2019-07-05.log
-rw-r----- 1 logstash logstash 14228 Jul  5 16:36 catalina.out
-rw-r----- 1 logstash logstash     0 Jul  5 16:25 host-manager.2019-07-05.log
-rw-r----- 1 logstash logstash  1074 Jul  5 16:36 localhost.2019-07-05.log
-rw-r----- 1 logstash logstash 26762 Jul  5 17:23 localhost_access_log.2019-07-05.log
-rw-r----- 1 logstash logstash     0 Jul  5 16:25 manager.2019-07-05.log
4) Visit the elasticsearch plugin interface to verify the data
Data browsing
5) Add the index pattern in kibana
6) Verify the data in kibana
Use the codec multiline plugin for multi-line matching. This plugin merges multiple lines into a single event, and its what option specifies whether a matched line is merged with the preceding lines or with the following ones: https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html
Syntax:
input {
    stdin {
        codec => multiline {
            pattern => "^\["      # merge lines until a line starting with "[" is encountered
            negate => true        # true: act on lines that match; false: act on lines that do not
            what => "previous"    # merge with the preceding lines; "next" merges with the following lines
        }
    }
}
Test input and output on the command line:
[root@linux-elk2 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin { codec => multiline { pattern => "^\[" negate => true what => "previous" } } } output { stdout { codec => rubydebug }}'
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-07-08 15:28:04.938 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-07-08 15:28:04.968 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.8.1"}
[INFO ] 2019-07-08 15:28:19.167 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2019-07-08 15:28:19.918 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0xc8dd9a1 run>"}
The stdin plugin is now waiting for input:
111111
222222
aaaaaa
[44444
{
    "@timestamp" => 2019-07-08T07:34:48.063Z,
          "tags" => [
        [0] "multiline"
    ],
      "@version" => "1",
       "message" => "[12\n111111\n222222\naaaaaa",
          "host" => "linux-elk2.exmaple.com"
}
444444
aaaaaa
[77777
{
    "@timestamp" => 2019-07-08T07:35:51.522Z,
          "tags" => [
        [0] "multiline"
    ],
      "@version" => "1",
       "message" => "[44444\n444444\naaaaaa",
          "host" => "linux-elk2.exmaple.com"
}
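The merge behavior shown above (negate => true, what => "previous") can be emulated in plain awk for illustration: a line starting with "[" flushes the buffered event and opens a new one, and every other line is appended to the buffer. This is only a sketch of the logic, not the plugin itself:

```shell
# Feed sample lines through an awk re-implementation of
# multiline { pattern => "^\[" negate => true what => "previous" }
printf '%s\n' '[12' '111111' '222222' 'aaaaaa' '[44444' '444444' |
awk '
  /^\[/ { if (buf != "") print buf      # a "[" line flushes the previous event
          buf = $0; next }
        { buf = buf "\\n" $0 }          # any other line merges into the buffer
  END   { if (buf != "") print buf }    # flush the final event
'
# output:
# [12\n111111\n222222\naaaaaa
# [44444\n444444
```

Note how the second event stays buffered until end of input, which mirrors why logstash only emits a multiline event once the next "[" line (or a timeout) arrives.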
Example: collect the ELK cluster's own logs
1) Observe the log file: every entry in the elk cluster log starts with "[".
[root@linux-elk2 ~]# tailf /elk/logs/ELK-Cluster.log
[2019-07-08T11:26:37,774][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elk-node2] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2019-07-08T11:26:47,664][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elk-node2] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2019-07-08T11:33:55,150][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elk-node2] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2019-07-08T11:33:55,197][INFO ][o.e.c.m.MetaDataMappingService] [elk-node2] [.kibana_1/yRee-8HYS8KiVwnuADXAbA] update_mapping [doc]
[2019-07-08T11:33:55,822][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elk-node2] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2019-07-08T11:33:55,905][INFO ][o.e.c.m.MetaDataMappingService] [elk-node2] [.kibana_1/yRee-8HYS8KiVwnuADXAbA] update_mapping [doc]
[2019-07-08T11:33:57,026][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elk-node2] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2019-07-08T11:43:20,262][WARN ][o.e.m.j.JvmGcMonitorService] [elk-node2] [gc][young][8759][66] duration [1.3s], collections [1]/[1.7s], total [1.3s]/[4s], memory [176mb]->[111.6mb]/[1.9gb], all_pools {[young] [64.8mb]->[706.4kb]/[66.5mb]}{[survivor] [3.3mb]->[3mb]/[8.3mb]}{[old] [107.8mb]->[107.8mb]/[1.9gb]}
[2019-07-08T11:43:20,388][WARN ][o.e.m.j.JvmGcMonitorService] [elk-node2] [gc][8759] overhead, spent [1.3s] collecting in the last [1.7s]
[2019-07-08T11:44:42,955][INFO ][o.e.x.m.p.NativeController] [elk-node2] Native controller process has stopped - no new native processes can be started
2) Configure logstash
[root@linux-elk2 ~]# vim /etc/logstash/conf.d/java.conf
input {
    file {
        path => "/elk/logs/ELK-Cluster.log"
        type => "java-elk-cluster-log"
        start_position => "beginning"
        stat_interval => "2"
        codec => multiline {
            pattern => "^\["      # regex match on lines beginning with "["
            negate => "true"      # act on a successful match; false would act on a failed match
            what => "previous"    # merge with the preceding content; "next" would merge with the following
        }
    }
}
output {
    if [type] == "java-elk-cluster-log" {
        elasticsearch {
            hosts => ["192.168.1.31:9200"]
            index => "java-elk-cluster-log-%{+YYYY.MM.dd}"
        }
    }
}
3) Check the configuration file syntax and restart logstash
[root@linux-elk2 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/java.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-07-08 15:49:51.996 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[INFO ] 2019-07-08 15:50:04.438 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[root@linux-elk2 ~]# systemctl restart logstash
4) Visit the elasticsearch interface to verify the data
5) Add the index pattern in kibana
6) Verify the data in kibana
Collect nginx access logs in JSON format. For this test, nginx and logstash are installed on a new server.
1) Install nginx and prepare a test page
[root@node01 ~]# yum -y install nginx
[root@node01 ~]# echo "<h1>welcome to nginx server</h1>" > /usr/share/nginx/html/index.html
[root@node01 ~]# systemctl start nginx
[root@node01 ~]# curl localhost
<h1>welcome to nginx server</h1>
2) Convert the nginx log to JSON format
[root@node01 ~]# vim /etc/nginx/nginx.conf
    log_format access_json '{"@timestamp":"$time_iso8601",'
        '"host":"$server_addr",'
        '"clientip":"$remote_addr",'
        '"size":$body_bytes_sent,'
        '"responsetime":$request_time,'
        '"upstreamtime":"$upstream_response_time",'
        '"upstreamhost":"$upstream_addr",'
        '"http_host":"$host",'
        '"url":"$uri",'
        '"domain":"$host",'
        '"xff":"$http_x_forwarded_for",'
        '"referer":"$http_referer",'
        '"status":"$status"}';
    access_log /var/log/nginx/access.log access_json;
[root@node01 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@node01 ~]# systemctl restart nginx
3) Make one request and confirm the log is in JSON format
[root@node01 ~]# tail /var/log/nginx/access.log
{"@timestamp":"2019-07-09T11:21:28+08:00","host":"192.168.1.30","clientip":"192.168.1.144","size":33,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.1.30","url":"/index.html","domain":"192.168.1.30","xff":"-","referer":"-","status":"200"}
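Beyond eyeballing the line, the same local validation used for the tomcat log works here; a sketch assuming python3 is available (jq would work just as well if installed), using the access-log line from above:

```shell
# The access-log line recorded above
line='{"@timestamp":"2019-07-09T11:21:28+08:00","host":"192.168.1.30","clientip":"192.168.1.144","size":33,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.1.30","url":"/index.html","domain":"192.168.1.30","xff":"-","referer":"-","status":"200"}'
# Parse it as JSON and pull out two fields
printf '%s' "$line" | python3 -c 'import json,sys; e=json.load(sys.stdin); print(e["clientip"], e["status"])'
# prints: 192.168.1.144 200
```

If parsing fails here, logstash's `codec => json` in the next step would also fail and tag the event with _jsonparsefailure.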
4) Install logstash and configure it to collect the nginx log
# Copy the logstash package to the nginx server
[root@linux-elk1 ~]# scp logstash-6.8.1.rpm 192.168.1.30:/root/
# Install logstash
[root@node01 ~]# yum -y localinstall logstash-6.8.1.rpm
# Generate the logstash.service unit file
[root@node01 ~]# /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd
# Change the logstash run user to root, otherwise the logs may not be collected
[root@node01 ~]# vim /etc/systemd/system/logstash.service
User=root
Group=root
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# vim /etc/logstash/conf.d/nginx.conf
input {
    file {
        path => "/var/log/nginx/access.log"
        type => "nginx-accesslog"
        start_position => "beginning"
        stat_interval => "2"
        codec => json
    }
}
output {
    if [type] == "nginx-accesslog" {
        elasticsearch {
            hosts => ["192.168.1.31:9200"]
            index => "logstash-nginx-accesslog-30-%{+YYYY.MM.dd}"
        }
    }
}
5) Check the configuration file syntax and restart logstash
[root@node01 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-07-09 11:26:04.277 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[INFO ] 2019-07-09 11:26:09.055 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[root@node01 ~]# systemctl restart logstash
6) Add the index pattern in kibana
7) Verify the data in kibana; filters can be added to make the logs clearer to read
Collect logs through logstash's tcp/udp input plugins. This is typically used to backfill log entries missing from elasticsearch: the lost entries can be written directly to the elasticsearch server through a TCP port.
1) logstash configuration
[root@linux-elk1 ~]# vim /etc/logstash/conf.d/tcp.conf
input {
    tcp {
        port => 9889
        type => "tcplog"
        mode => "server"
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
2) Verify the port is listening
[root@linux-elk1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tcp.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-07-09 18:12:07.538 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-07-09 18:12:07.551 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.8.1"}
[INFO ] 2019-07-09 18:12:14.416 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2019-07-09 18:12:14.885 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x240c27a6 sleep>"}
[INFO ] 2019-07-09 18:12:14.911 [[main]<tcp] tcp - Starting tcp input listener {:address=>"0.0.0.0:9889", :ssl_enable=>"false"}
[INFO ] 2019-07-09 18:12:14.953 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-07-09 18:12:15.223 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
# Open a new terminal and check the port
[root@linux-elk1 ~]# netstat -nlutp |grep 9889
tcp6       0      0 :::9889                 :::*                    LISTEN      112455/java
3) From another server, test with the nc command and check whether logstash receives the data
# echo "nc test" | nc 192.168.1.31 9889    # run on another server
# In the terminal where logstash was started above:
{
       "message" => "nc test",
          "host" => "192.168.1.30",
          "type" => "tcplog",
      "@version" => "1",
    "@timestamp" => 2019-07-09T10:16:48.139Z,
          "port" => 37102
}
4) Send a file with nc and inspect the data logstash receives
# nc 192.168.1.31 9889 < /etc/passwd    # run on the same server used for the nc test above
# Again in the terminal where logstash was started:
{
       "message" => "mysql:x:27:27:MariaDB Server:/var/lib/mysql:/sbin/nologin",
          "host" => "192.168.1.30",
          "type" => "tcplog",
      "@version" => "1",
    "@timestamp" => 2019-07-09T10:18:29.186Z,
          "port" => 37104
}
{
       "message" => "logstash:x:989:984:logstash:/usr/share/logstash:/sbin/nologin",
          "host" => "192.168.1.30",
          "type" => "tcplog",
      "@version" => "1",
    "@timestamp" => 2019-07-09T10:18:29.187Z,
          "port" => 37104
}
5) Send a message through a pseudo-device:
On Unix-like operating systems, a device node does not necessarily correspond to a physical device; a device without such a correspondence is a pseudo-device. The operating system uses them to provide a variety of functions, and tcp is just one of the many pseudo-devices under /dev.
# echo "僞設備" >/dev/tcp/192.168.1.31/9889    # run on the same server used for the nc test above
# Again in the terminal where logstash was started:
{
       "message" => "僞設備",
          "host" => "192.168.1.30",
          "type" => "tcplog",
      "@version" => "1",
    "@timestamp" => 2019-07-09T10:21:32.487Z,
          "port" => 37106
}
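The round trip can be rehearsed entirely on one machine before pointing it at logstash; a sketch where a throwaway python3 listener stands in for the logstash tcp input (port 9899 and the temp file path are arbitrary choices, and /dev/tcp redirection is a bash-only feature):

```shell
# One-shot TCP listener standing in for logstash
python3 - <<'EOF' > /tmp/tcp_recv.txt &
import socket
s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("127.0.0.1", 9899))
s.listen(1)
conn, _ = s.accept()                      # accept a single connection
print(conn.recv(1024).decode().strip())   # capture what the client sent
conn.close()
EOF
sleep 1                                   # give the listener time to bind
bash -c 'echo "pseudo-device test" > /dev/tcp/127.0.0.1/9899'
wait
cat /tmp/tcp_recv.txt                     # prints: pseudo-device test
```

The `bash -c` wrapper matters: /dev/tcp is interpreted by bash itself, so the redirection silently fails to connect under shells like dash.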
6) Change the output to elasticsearch
[root@linux-elk1 ~]# vim /etc/logstash/conf.d/tcp.conf
input {
    tcp {
        port => 9889
        type => "tcplog"
        mode => "server"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.1.31:9200"]
        index => "logstash-tcp-log-%{+YYYY.MM.dd}"
    }
}
7) Feed in log entries via nc or the pseudo-device
# echo "僞設備 1" >/dev/tcp/192.168.1.31/9889
# echo "僞設備 2" >/dev/tcp/192.168.1.31/9889
8) Create the index pattern in the kibana interface
9) Verify the data