Collecting nginx logs with Logstash
Collecting logs with Beats (filebeat)
Collecting nginx logs with Logstash
1.
On 133 (the machine running Logstash), add the config file below.
Edit the config file: vi /etc/logstash/conf.d/nginx.conf and add the following content.
input {                             # input section
  file {                            # use a file's contents as Logstash's input
    path => "/tmp/elk_access.log"   # path of the file
    start_position => "beginning"   # where in the file to start collecting
    type => "nginx"                 # custom type name
  }
}
filter {  # parses/filters the log (e.g. its output format); the matching nginx access-log format is defined in step 3 below
  grok {
    match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}"}
  }
  geoip {
    source => "clientip"
  }
}
output {  # output section
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["192.168.193.133:9200"]
    index => "nginx-test-%{+YYYY.MM.dd}"
  }
}
# (Mind the curly braces.) If the configuration is wrong, the nginx-test index will not be created.
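The `%{+YYYY.MM.dd}` part of the index setting is a sprintf date pattern, so a fresh index is created per day. A quick sketch of the name Logstash would produce for today's events (the variable name is invented for illustration):

```shell
# Compute the daily index name that the %{+YYYY.MM.dd} pattern in the
# elasticsearch output block would produce for events dated today.
index_name="nginx-test-$(date +%Y.%m.%d)"
echo "$index_name"
```

This is the name to look for later with `curl '.../_cat/indices?v'`.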
2.
Next, configure the proxy. To generate log entries, we need this configuration; but before that, test whether the config written above is correct.
Perform the following on 132.
Check the config file for errors:
cd /usr/share/logstash/bin
./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
yum install -y nginx  # install nginx if it is not already present
vi /etc/nginx/conf.d/elk.conf and add the following content  # a virtual-host config file
server {
    listen 80;
    server_name elk.aming.com;
    location / {
        proxy_pass http://192.168.193.128:5601;  # proxy target (Kibana)
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    access_log /tmp/elk_access.log main2;  # the log file is generated automatically
}
3.
Perform the following on 132.
vim /etc/nginx/nginx.conf and add the following content  # define nginx's log format
log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent "$http_referer" '
                 '"$http_user_agent" "$upstream_addr" $request_time';
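A line written in the main2 format above looks roughly like the sample below (all field values invented); the grep check mirrors the field order that the grok pattern in the Logstash config expects to parse:

```shell
# Sample access-log entry in the main2 format (values invented for illustration):
# http_host remote_addr - remote_user [time_local] "request" status bytes "referer" "agent" "upstream" request_time
line='elk.aming.com 192.168.193.1 - - [21/Jun/2019:16:54:31 +0800] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-" 0.005'
# Simplified regex matching the field order of main2 / the grok pattern:
echo "$line" | grep -Eq '^[^ ]+ [0-9.]+ - [^ ]+ \[[^]]+\] "[^"]*" [0-9]+ [0-9-]+ "[^"]*" "[^"]*" "[^"]*" [0-9.]+$' \
  && echo "format ok"
```

If a real line from /tmp/elk_access.log fails a check like this, the grok filter will tag the event with a parse failure instead of extracting fields.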
nginx -t  # check the configuration syntax
/usr/local/nginx/sbin/nginx -t        # use these paths instead if nginx was compiled from source
/usr/local/nginx/sbin/nginx -s reload
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
systemctl start nginx
Bind the hostname in your local hosts file: 192.168.193.132 elk.aming.com
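A scratch-file demonstration of the hosts entry (the real edit targets /etc/hosts and needs admin rights); the line maps the test domain to the nginx proxy on 132:

```shell
# Demonstrate the hosts-file line using a temp file; in practice append the
# same line to /etc/hosts on the machine running the browser.
hosts_file=$(mktemp)
echo '192.168.193.132 elk.aming.com' >> "$hosts_file"
grep 'elk.aming.com' "$hosts_file"
```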
Visit the site in a browser and check whether log entries are generated.
systemctl restart logstash
4.
On 128: curl '192.168.193.128:9200/_cat/indices?v'
health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana               HUhL8JS6Sgqxr8mSq9UgoQ   1   1          2            0      7.2kb          7.2kb
yellow open   system-syslog-2019.06 fqLpMdxRTG2EAV5DC8eIMw   5   1        840            0    526.9kb        526.9kb
Check whether the log file exists:
ls /tmp/elk_access.log
/tmp/elk_access.log
wc -l !$    # !$ expands to the last argument of the previous command
wc -l /tmp/elk_access.log
0 /tmp/elk_access.log
cat !$
Check whether an index whose name starts with nginx-test has been generated.
Only if it exists can the index be configured in Kibana.
In the left menu, click "Management" -> "Index Patterns" -> "Create Index Pattern".
For the index pattern, enter nginx-test-*  # wildcards are supported, so this is all that is needed
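The trailing `*` is why one pattern covers every daily index while skipping unrelated ones; a quick shell glob check over sample index names (the names are taken from or modeled on the output shown earlier):

```shell
# nginx-test-* matches each daily nginx index but not other indices;
# demonstrated with a shell case glob over sample names.
for idx in nginx-test-2019.06.21 nginx-test-2019.06.22 system-syslog-2019.06; do
  case "$idx" in
    nginx-test-*) echo "matched: $idx" ;;
  esac
done
```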
Then click "Discover" in the left menu.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Collecting logs with Beats (filebeat)
Beats are lightweight log shippers, whereas Logstash starts slowly and consumes more resources, so we can try collecting logs with Beats instead.
We can also build a custom beat for whatever we want to collect.
https://www.elastic.co/cn/products/beats
filebeat metricbeat packetbeat winlogbeat auditbeat heartbeat  # members of the Beats family (we use filebeat, which targets log files)
Extensible: custom beats can be built.
Run on 133:
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.0.0-x86_64.rpm
rpm -ivh filebeat-6.0.0-x86_64.rpm  # install filebeat from the downloaded rpm package
First approach: print the collected entries to the screen.
First, edit the configuration file:
vim /etc/filebeat/filebeat.yml  # add or change the following; be careful with the file's YAML indentation (spaces)
filebeat.prospectors:
- input_type: log        # the input type is log (newer versions rename this key, dropping the input_ prefix; take note)
  paths:                 # paths of the logs to collect
    - /var/log/messages
output.console:          # the file may not contain output.console:; add these two lines, and comment out output.elasticsearch:
  enabled: true
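Since YAML indentation mistakes are the most common failure here, this sketch writes the minimal console-output config to a scratch file (the real target is /etc/filebeat/filebeat.yml) so the exact spacing can be seen:

```shell
# Write the minimal console-output filebeat config to a scratch file; note the
# two-space YAML indentation, which is significant. Real target: /etc/filebeat/filebeat.yml
cat > /tmp/filebeat-console.yml <<'EOF'
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/messages
output.console:
  enabled: true
EOF
grep -q 'enabled: true' /tmp/filebeat-console.yml && echo "config written"
```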
/usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml  # -c specifies the config file; runs filebeat in the foreground so the corresponding log entries appear on screen
Second approach: run filebeat as a service.
Edit the configuration file again:
vim /etc/filebeat/filebeat.yml  # add or change
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/messages      # the log to collect
output.elasticsearch:        # note the difference from above: this time it is output.elasticsearch:, not output.console:
  hosts: ["192.168.193.128:9200"]
systemctl start filebeat
ps aux |grep filebeat
root  41834  0.2  0.5 299580  5632 ?     Ssl  16:54   0:00 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat
root  41916  0.1  0.0 112720   680 pts/0 D+   16:57   0:00 grep --color=auto filebeat
ls /var/log/filebeat/filebeat
/var/log/filebeat/filebeat
less !$
2019-06-21T16:54:31+08:00 INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2019-06-21T16:54:31+08:00 INFO Beat UUID: 77c66cbe-be20-482b-8e31-4dc91e3390d2
2019-06-21T16:54:31+08:00 INFO Setup Beat: filebeat; Version: 6.0.0
2019-06-21T16:54:31+08:00 INFO Elasticsearch url: http://192.168.193.128:9200
2019-06-21T16:54:31+08:00 INFO Metrics logging every 30s
2019-06-21T16:54:31+08:00 INFO Beat name: axinlinux-03
2019-06-21T16:54:31+08:00 INFO filebeat start running.
2019-06-21T16:54:31+08:00 INFO No registry file found under: /var/lib/filebeat/registry. Creating a new registry file.
2019-06-21T16:54:31+08:00 INFO Loading registrar data from /var/lib/filebeat/registry
2019-06-21T16:54:31+08:00 INFO States Loaded from registrar: 0
2019-06-21T16:54:31+08:00 INFO Loading Prospectors: 1
2019-06-21T16:54:31+08:00 INFO Starting prospector of type: log; id: 13761481236662083215
2019-06-21T16:54:31+08:00 INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2019-06-21T16:54:31+08:00 INFO Starting Registrar
2019-06-21T16:54:31+08:00 INFO Harvester started for file: /var/log/elasticsearch/aminglinux.log
2019-06-21T16:54:31+08:00 INFO Config reloader started
2019-06-21T16:54:31+08:00 INFO Loading of config files completed.
2019-06-21T16:54:40+08:00 INFO Connected to Elasticsearch version 6.8.0
2019-06-21T16:54:43+08:00 INFO Loading template for Elasticsearch version: 6.8.0
curl '192.168.193.128:9200/_cat/indices?v'
health status index                     uuid                   pri rep
yellow open   system-syslog-2019.06     fqLpMdxRTG2EAV5DC8eIMw   5   1
yellow open   .kibana                   HUhL8JS6Sgqxr8mSq9UgoQ   1   1
yellow open   nginx-test-2019.06.21     MHZFKf9CTH-wY6YzgD6A7g   5   1
yellow open   filebeat-6.0.0-2019.06.21 y0O_XA9gQZ-kCvx_IguilQ   3   1
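To script a check instead of eyeballing the table, the index names can be pulled out of the `_cat/indices` output with awk (column 3); the here-doc below stands in for the live curl output:

```shell
# Extract just the index names (third column) from _cat/indices output;
# the sample lines mirror the output shown above, fed in via a here-doc.
awk '{print $3}' <<'EOF'
yellow open system-syslog-2019.06 fqLpMdxRTG2EAV5DC8eIMw 5 1
yellow open .kibana HUhL8JS6Sgqxr8mSq9UgoQ 1 1
yellow open nginx-test-2019.06.21 MHZFKf9CTH-wY6YzgD6A7g 5 1
yellow open filebeat-6.0.0-2019.06.21 y0O_XA9gQZ-kCvx_IguilQ 3 1
EOF
```

On the live cluster the same filter would be piped from curl, e.g. `curl -s '192.168.193.128:9200/_cat/indices' | awk '{print $3}'`.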
Then you can create the new index pattern in Kibana.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Extensions
x-pack, paid and free editions: http://www.jianshu.com/p/a49d93212eca
https://www.elastic.co/subscriptions
Evolution of the Elastic Stack: http://70data.net/1505.html
How LinkedIn built a real-time log analysis system on Kafka and Elasticsearch: http://t.cn/RYffDoE
Using redis: http://blog.lishiming.net/?p=463
Building a large-scale log analysis platform with ELK + Filebeat + Kafka + ZooKeeper: https://www.cnblogs.com/delgyd/p/elk.html
http://www.jianshu.com/p/d65aed756587