For logs, the most common requirements are collection, storage, search, and visualization, and the open-source community has a matching project for each: Logstash (collection), Elasticsearch (storage + search), and Kibana (visualization). The combination of the three is called the ELK Stack, i.e. the Elasticsearch, Logstash, and Kibana stack.
CentOS Linux release 7.6.1810 (Core)
Service | IP address | Hostname |
---|---|---|
elasticsearch | 10.201.1.145 | k8s-m1 |
elasticsearch | 10.201.1.146 | k8s-n1 |
logstash | 10.201.1.145 | k8s-m1 |
kibana | 10.201.1.146 | k8s-n1 |
```
yum install -y java
```
```
rpm -ivh elasticsearch-2.3.5.rpm
sudo systemctl daemon-reload
systemctl enable elasticsearch.service
rpm -ql elasticsearch
```
```
grep '^[a-z]' /etc/elasticsearch/elasticsearch.yml
cluster.name: myes                   # cluster name; must be identical on every node
node.name: k8s-m1                    # node name, usually the hostname; must be unique among nodes
path.data: /data/es-data             # data directory
path.logs: /var/log/elasticsearch    # log directory
bootstrap.mlockall: true             # lock memory so it is never swapped out
network.host: 10.201.1.145           # this host's IP
http.port: 9200                      # service port
```
```
grep '^[a-z]' /etc/elasticsearch/elasticsearch.yml
cluster.name: myes
node.name: k8s-n1
path.data: /data/es-data
path.logs: /var/log/elasticsearch
bootstrap.mlockall: true             # lock memory so it is never swapped out
network.host: 10.201.1.146
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.201.1.145", "10.201.1.146"]   # advertise via unicast so the other es nodes can discover this one
```
```
mkdir -p /data/es-data
chown -R elasticsearch:elasticsearch /data/es-data
```
```
/etc/init.d/elasticsearch start
tail -f /var/log/elasticsearch/myes.log
netstat -lntpu | grep 9200
```
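Beyond tailing the log, you can ask the cluster itself whether it is healthy. A minimal sketch using the `_cluster/health` API (the sample response below is hypothetical, but the field names are Elasticsearch's real ones):

```shell
# In practice, query the node started above:
#   response=$(curl -s http://10.201.1.145:9200/_cluster/health)
# Hypothetical sample of what that returns:
response='{"cluster_name":"myes","status":"green","number_of_nodes":2}'

# Extract the status field; "green" (or "yellow" on a single node) means the
# cluster is serving requests
status=$(echo "$response" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
echo "$status"
```

Once both nodes are up, `number_of_nodes` should report 2.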
Install the head plugin:
```
/usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
```
Open the plugin in a browser to check its status:
http://10.201.1.145:9200/_plugin/head/
If it looks like the figure below, the installation is working.
Install the kopf plugin:
```
/usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
```
Open the plugin in a browser to check its status:
http://10.201.1.145:9200/_plugin/kopf/
If it looks like the figure below, the installation is working.
Logstash depends on a Java environment, which was already installed in the Elasticsearch section above.
```
rpm -ivh logstash-2.3.4-1.noarch.rpm
rpm -ql logstash
```
Verify from the command line:
```
# stdin to stdout, plain
/opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'
# stdout with the rubydebug codec, which pretty-prints the full event
/opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug} }'
# write the input to Elasticsearch, creating a daily index
/opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["10.201.1.145"] index => "logstash-%{+YYYY.MM.dd}"} }'
```
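The `%{+YYYY.MM.dd}` in the index name is a Logstash sprintf date format resolved from each event's timestamp, so you get one index per day. A quick sketch of the equivalent name in shell (assuming the Joda-style `YYYY.MM.dd` corresponds to `%Y.%m.%d` in `date(1)`):

```shell
# Today's index name as Logstash would build it, e.g. logstash-2016.08.15
index="logstash-$(date +%Y.%m.%d)"
echo "$index"
```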
Verify by writing config files (Logstash reads /etc/logstash/conf.d/ by default; in Logstash 2.x you can check a file's syntax first with the `--configtest` flag):
```
cat file.conf
input {
  file {
    path => ["/var/log/messages", "/var/log/secure"]
    type => "system-log"
    start_position => "beginning"
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["10.201.1.145:9200"]
    index => "system-log-%{+YYYY.MM}"
  }
}
```
```
cat /etc/logstash/conf.d/file.conf
input {
  file {                               # the file input collects local files
    path => ["/var/log/messages", "/var/log/secure"]   # paths to collect
    type => "system-log"               # type tag, used by the conditionals in output below
    start_position => "beginning"      # collect from the start of the file
  }
  file {
    path => "/var/log/elasticsearch/myes.log"
    type => "es-log"
    start_position => "beginning"
    codec => multiline {               # the multiline codec merges lines into one event; good for Java logs
      pattern => "^\["                 # regex; adjust to your own Java log format
      negate => true
      what => "previous"               # lines NOT matching the pattern are merged into the previous event
    }
  }
}
filter {
}
output {
  if [type] == "system-log" {          # route by type to different indices
    elasticsearch {
      hosts => ["10.201.1.145:9200"]
      index => "system-log-%{+YYYY.MM}"
    }
  }
  if [type] == "es-log" {
    elasticsearch {
      hosts => ["10.201.1.145:9200"]
      index => "es-log-%{+YYYY.MM}"
    }
  }
}
```
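To see what the multiline settings do, consider a hypothetical excerpt from the ES log (entries invented for illustration): stack-trace lines do not start with `[`, so with `pattern => "^\["`, `negate => true`, `what => "previous"` they are folded into the preceding bracketed line, giving one event per log entry:

```shell
# Hypothetical ES log excerpt; classes and messages are illustrative only
cat <<'EOF' > /tmp/sample-es.log
[2016-08-15 10:00:01,000][INFO ][node   ] [k8s-m1] starting ...
[2016-08-15 10:00:02,000][ERROR][index  ] [k8s-m1] shard failure
java.lang.NullPointerException
    at org.elasticsearch.SomeClass.method(SomeClass.java:42)
EOF

# Each line matching ^\[ starts a new event; the other lines merge upward,
# so the number of events equals the number of '['-prefixed lines:
grep -c '^\[' /tmp/sample-es.log
```

Here the four physical lines become two Logstash events, with the stack trace attached to the ERROR entry.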
First change nginx's log format to JSON:
```
log_format access_log_json '{"user_ip":"$http_x_real_ip","lan_ip":"$remote_addr","log_time":"$time_iso8601","user_req":"$request","http_code":"$status","body_bytes_sent":"$body_bytes_sent","req_time":"$request_time","user_ua":"$http_user_agent"}';
```
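Since nginx substitutes each variable as-is, it is worth confirming that a resulting line really is valid JSON. A sketch with a hypothetical access line (values invented; `python3` assumed available):

```shell
# A line as nginx would write it with the log_format above (invented values)
line='{"user_ip":"-","lan_ip":"10.201.1.1","log_time":"2016-08-15T10:00:00+08:00","user_req":"GET / HTTP/1.1","http_code":"200","body_bytes_sent":"612","req_time":"0.005","user_ua":"curl/7.29.0"}'

# json.tool exits non-zero on invalid JSON
echo "$line" | python3 -m json.tool > /dev/null && echo "valid JSON"
```

Note that values such as `$http_user_agent` can contain characters (e.g. double quotes) that break the JSON; newer nginx versions (1.11.8+) support an `escape=json` parameter on `log_format` for this.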
Write the config file:
```
cat /etc/logstash/conf.d/nginx.conf
input {
  file {
    path => "/var/log/nginx/access.log_json"
    codec => "json"
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["10.201.1.145:9200"]
    index => "nginx-access-log-%{+YYYY.MM.dd}"
  }
}
```
On the host whose logs are to be collected, add a forwarding rule to /etc/rsyslog.conf (`@@` forwards over TCP; a single `@` would use UDP), then restart rsyslog:
```
[root@k8s-m1 ~]# tail -2 /etc/rsyslog.conf
*.* @@10.201.1.146:514
# ### end of the forwarding rule ###
```
Write the collection config file on the Logstash host:
```
[root@k8s-n1 ~]# cat rsyslog.conf
input {
  syslog {
    type => "system-syslog"
    port => 514
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["10.201.1.146:9200"]
    index => "system-syslog-%{+YYYY.MM}"
  }
}
```
Write the config file:
```
[root@k8s-n1 ~]# cat tcp.conf
input {
  tcp {
    port => 6666
    mode => "server"
    type => "tcp"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
```
From another machine, send messages to verify (several methods below):
```
yum -y install nc
echo "lxd" | nc 10.201.1.146 6666
nc 10.201.1.146 6666 < /etc/resolv.conf
# bash's built-in /dev/tcp pseudo-device, no extra tools needed
echo "123" > /dev/tcp/10.201.1.146/6666
```
Apache access logs are not JSON, so parse them with the grok filter's `%{COMBINEDAPACHELOG}` pattern:

```
[root@k8s-m1 ~]# cat apache.conf
input {
  file {
    path => "/var/log/httpd/access_log"
    type => "apache_log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => ["10.201.1.145:9200"]
    index => "apache-log-%{+YYYY.MM.dd}"
  }
}
```
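For reference, `%{COMBINEDAPACHELOG}` expects the standard combined format and splits it into fields such as `clientip`, `verb`, `request`, `response`, `bytes`, `referrer`, and `agent`. A hypothetical line it would parse, with a rough shell equivalent extracting one field:

```shell
# Hypothetical combined-format access line (invented values)
line='127.0.0.1 - - [15/Aug/2016:10:00:00 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0"'

# grok names this field "response"; in the whitespace-separated line it is
# the ninth column, the HTTP status code
status=$(echo "$line" | awk '{print $9}')
echo "$status"
```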
Run Logstash in the foreground, then open es head to view the collected logs:
http://10.201.1.145:9200/_plugin/head/
```
/opt/logstash/bin/logstash -f /etc/logstash/conf.d/file.conf
```
Kibana is an open-source analytics and visualization platform designed for Elasticsearch. You can use Kibana to search, view, and interact with the data stored in Elasticsearch indices, and easily perform advanced analysis and present the results as charts.
```
rpm -ivh kibana-4.5.4-1.x86_64.rpm
rpm -ql kibana
```
```
grep '^[a-z]' /opt/kibana/config/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://10.201.1.145:9200"
kibana.index: ".kibana"
```
```
/etc/init.d/kibana start
netstat -lntpu | grep 5601
```
Open in a browser: http://10.201.1.146:5601
Add the indices created in Elasticsearch as index patterns.
View the collected log data.