1. ELK Overview
ELK is short for the following three components:
Elasticsearch:
A distributed, highly scalable, near-real-time search and data-analytics engine, usually abbreviated as ES.
Logstash:
An open-source server-side data-processing pipeline that ingests data from multiple sources at once, transforms it, and then ships it to your favorite "stash", such as Elasticsearch.
Kibana:
An open-source analytics and visualization platform designed for Elasticsearch. You can use Kibana to search, view, and interact with the data stored in Elasticsearch indices, and easily build advanced analyses and present the results as charts.
Together, these three components are what is commonly called ELK.
2. Quick ELK Deployment and Configuration
1) Deployment environment:
CentOS 7; this article is based on the 7.x releases of the Elastic Stack.
172.16.0.213 elasticsearch
172.16.0.217 elasticsearch
172.16.0.219 elasticsearch kibana
Kibana only needs to be deployed on one of the nodes.
2) Configure the official yum repository
Configure the repo on all three nodes:
$ cat /etc/yum.repos.d/elast.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
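Since gpgcheck=1 is set, it also helps to import the Elastic signing key up front so package verification succeeds (the standard step from the official install docs):

$ rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch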
3) Installation
$ cat /etc/hosts
172.16.0.213 ickey-elk-213
172.16.0.217 ickey-elk-217
172.16.0.219 ickey-elk-219
$ yum install elasticsearch -y
4) Configuration
$ cat /etc/elasticsearch/elasticsearch.yml
cluster.name: elk_test                 ### cluster name
node.name: ickey-elk-217               ### node name; set it per node
node.master: true
node.data: true
path.data: /var/log/elasticsearch/data
path.logs: /var/log/elasticsearch/logs
network.host: 172.16.0.217             ### this node's IP
transport.tcp.port: 9300
transport.tcp.compress: true
http.port: 9200
http.max_content_length: 100mb
bootstrap.memory_lock: true
discovery.seed_hosts: ["172.16.0.213","172.16.0.217","172.16.0.219"]
cluster.initial_master_nodes: ["172.16.0.213","172.16.0.217","172.16.0.219"]
gateway.recover_after_nodes: 2
gateway.recover_after_time: 5m
gateway.expected_nodes: 3
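Note that bootstrap.memory_lock: true only works if systemd allows the service to lock memory; otherwise Elasticsearch refuses to start with a "memory locking requested ... but memory is not locked" error. A minimal override, following the approach in the official docs:

$ mkdir -p /etc/systemd/system/elasticsearch.service.d
$ cat /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
$ systemctl daemon-reload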
Adjust the JVM heap that Elasticsearch starts with, in /etc/elasticsearch/jvm.options:
-Xms4g
-Xmx4g
These set the initial and maximum heap size and should be kept equal; the usual guidance is to give the heap no more than about half of system memory (and to stay below ~32 GB), leaving the rest for the OS file cache.
Now start Elasticsearch:
$ systemctl start elasticsearch
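Once all three nodes are up, confirm that the cluster has actually formed by querying any node:

$ curl 'http://172.16.0.217:9200/_cluster/health?pretty'
# expect "number_of_nodes": 3 and "status": "green" (or "yellow" while replicas allocate)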
5) Install Kibana
Install it on the 219 node only:
$ yum install kibana -y
Configuration:
$ cat /etc/kibana/kibana.yml | egrep -v "(^$|^#)"
server.port: 5601
server.host: "172.16.0.219"
server.name: "ickey-elk-219"
elasticsearch.hosts: ["http://172.16.0.213:9200","http://172.16.0.217:9200","http://172.16.0.219:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "pass"
elasticsearch.requestTimeout: 40000
logging.dest: /var/log/kibana/kibana.log   # log destination; by default logs go to /var/log/messages
i18n.locale: "zh-CN"                       # Chinese UI
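Then enable and start the service; since logging.dest points at /var/log/kibana, that directory must exist and be writable by the kibana user (an assumption of this setup, not shown in the package defaults):

$ mkdir -p /var/log/kibana && chown kibana:kibana /var/log/kibana
$ systemctl enable kibana
$ systemctl start kibana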
For the full list of settings, see:
https://www.elastic.co/guide/cn/kibana/current/settings.html
3. Logstash Installation, Configuration, and Practice
With Elasticsearch handling storage/search and Kibana handling visualization, the data-collection side is covered by Logstash and the Beats family; here we use Logstash and Filebeat.
Logstash is the heavyweight collector and its configuration is correspondingly more involved, but it is also highly customizable. Besides the installation, the common configurations are collected below:
1) Installation
Install from the same yum repository as above:
yum install logstash -y
Logstash requires a JDK, so install Java JDK 1.8 or later first.
Here jdk-8u211-linux-x64.rpm is installed:
$ cat /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/latest
export JAVA_BIN=${JAVA_HOME}/bin
export PATH=${PATH}:${JAVA_HOME}/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH
export JRE_HOME=/usr/java/latest
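After sourcing the profile, confirm the JDK is picked up (the version string below is what this particular RPM should report):

$ source /etc/profile.d/java.sh
$ java -version
java version "1.8.0_211"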
After installation, run /usr/share/logstash/bin/system-install to generate the service scripts.
On CentOS 6 the service is managed with:
initctl status|start|stop|restart logstash
On CentOS 7:
systemctl restart logstash
2) Practical configurations
Collecting nginx logs (run on the nginx server):
$ cat /etc/logstash/conf.d/nginx-172.16.0.14.conf
input {
    file {
        path => ["/var/log/nginx/test.log"]
        codec => json
        sincedb_path => "/var/log/logstash/null"
        discover_interval => 15
        stat_interval => 1
        start_position => "beginning"
    }
}

filter {
    date {
        locale => "en"
        timezone => "Asia/Shanghai"
        match => [ "timestamp", "ISO8601", "yyyy-MM-dd'T'HH:mm:ssZZ" ]
    }
    mutate {
        convert => [ "upstreamtime", "float" ]
    }
    mutate {
        gsub => ["message", "\x", "\\x"]
    }
    if [user_agent] {
        useragent {
            prefix => "remote_"
            source => "user_agent"
        }
    }
    if [request] {
        ruby {
            init => "@kname = ['method1','uri1','verb']"
            code => "new_event = LogStash::Event.new(Hash[@kname.zip(event.get('request').split(' '))])
                     new_event.remove('@timestamp')
                     new_event.remove('method1')
                     event.append(new_event)"
            remove_field => [ "request" ]
        }
    }
    geoip {
        source => "clientRealIp"
        target => "geoip"
        database => "/tmp/GeoLite2-City.mmdb"
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
        convert => [ "[geoip][coordinates]", "float",
                     "upstream_response_time", "float",
                     "responsetime", "float",
                     "body_bytes_sent", "integer",
                     "bytes_sent", "integer" ]
    }
}

output {
    elasticsearch {
        hosts => ["172.16.0.219:9200"]
        index => "logstash-nginx-%{+YYYY.MM.dd}"
        workers => 1
        template_overwrite => true
    }
}
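Before (re)starting Logstash, it is worth validating the file with the built-in syntax check:

$ /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx-172.16.0.14.conf --config.test_and_exit
# should end with "Configuration OK"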
Note that the nginx log format must be configured as follows:
log_format logstash '{"@timestamp":"$time_iso8601",'
                    '"@version":"1",'
                    '"host":"$server_addr",'
                    '"size":$body_bytes_sent,'
                    '"domain":"$host",'
                    '"method":"$request_method",'
                    '"url":"$uri",'
                    '"request":"$request",'
                    '"status":"$status",'
                    '"referer":"$http_referer",'
                    '"user_agent":"$http_user_agent",'
                    '"body_bytes_sent":"$body_bytes_sent",'
                    '"bytes_sent":"$bytes_sent",'
                    '"clientRealIp":"$clientRealIp",'
                    '"forwarded_for":"$http_x_forwarded_for",'
                    '"responsetime":"$request_time",'
                    '"upstreamhost":"$upstream_addr",'
                    '"upstream_response_time":"$upstream_response_time"}';
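$clientRealIp is not a built-in nginx variable; it has to be derived, for example from X-Forwarded-For. A minimal sketch for the http block (this map rule is an assumption; adapt it to your proxy layout):

map $http_x_forwarded_for $clientRealIp {
    ""                              $remote_addr;   # no proxy in front: use the peer address
    ~^(?P<firstAddr>[0-9.]+),?.*$   $firstAddr;     # otherwise take the first hop in the XFF chain
}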
Configuration for receiving syslog:
$ cat /etc/logstash/conf.d/rsyslog-tcp.conf
input {
    syslog {
        type => "system-syslog"
        host => "172.16.0.217"
        port => 1514
    }
}

filter {
    if [type] == "system-syslog" {
        grok {
            match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
            add_field => [ "received_at", "%{@timestamp}" ]
            add_field => [ "received_from", "%{host}" ]
        }
        date {
            match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
    }
}

output {
    if [type] == "system-syslog" {
        elasticsearch {
            hosts => ["172.16.0.217:9200"]
            index => "logstash-%{type}-%{+YYYY.MM.dd}"
            #workers => 1
            template_overwrite => true
        }
    }
}
The clients need this forwarding rule:
$ tail -n 1 /etc/rsyslog.conf
*.* @172.16.0.217:1514
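After editing, restart rsyslog and send a test message with logger:

$ systemctl restart rsyslog
$ logger -t elk-test "hello from $(hostname)"
# the message should appear in the logstash-system-syslog-* index shortly afterwards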
Configuration for collecting hardware (device) syslog:
[yunwei@ickey-elk-217 ~]$ cat /etc/logstash/conf.d/hardware.conf
input {
    syslog {
        type => "hardware-syslog"
        host => "172.16.0.217"
        port => 514
    }
}

filter {
    if [type] == "hardware-syslog" {
        grok {
            match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
            add_field => [ "received_at", "%{@timestamp}" ]
            add_field => [ "received_from", "%{host}" ]
        }
        date {
            match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
    }
}

output {
    if [type] == "hardware-syslog" {
        elasticsearch {
            hosts => ["172.16.0.217:9200"]
            index => "logstash-%{type}-%{+YYYY.MM.dd}"
        }
    }
}
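Note that 514 is a privileged port while Logstash runs as an unprivileged user, so the bind can fail; granting the capability to the JVM or redirecting 514 to a high port are the usual workarounds. Either way, verify that events actually arrive by listing the indices:

$ curl 'http://172.16.0.217:9200/_cat/indices/logstash-*?v'
# both logstash-system-syslog-* and logstash-hardware-syslog-* should show up once events flow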
4. Filebeat Installation, Configuration, and Practice
1) Overview
Filebeat was originally derived from the logstash-forwarder source code. In other words, Filebeat is the new logstash-forwarder, and it is the first choice for the shipper role in the Elastic Stack.
The figure below (filebeat.png, taken from the official docs) shows how Elasticsearch, Logstash, Filebeat, Kafka, and Redis relate to each other:
2) Installation
Using the same yum repository as above:
$ yum install filebeat -y
3) Configuration: collecting runtime and php-fpm error logs
[root@ickey-app-api-52 yunwei]# cat /etc/filebeat/filebeat.yml
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/wwwroot/*.ickey.cn/runtime/logs/*.log
  fields:
    type: "runtime"
  json.message_key: log
  json.keys_under_root: true

- type: log
  enabled: true
  paths:
    - /var/log/php-fpm/www-error.log
  fields:
    type: "php-fpm"

#============================= Filebeat modules ===============================
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 2

#============================== Kibana =====================================
setup.kibana:
  host: "172.16.0.219:5601"

#============================= Elastic Cloud ==================================
output.elasticsearch:
  hosts: ["172.16.0.213:9200","172.16.0.217:9200","172.16.0.219:9200"]
  indices:
    - index: "php-fpm-log-%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "php-fpm"
    - index: "runtime-log-%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "runtime"
  pipelines:
    - pipeline: "php-error-pipeline"
      when.equals:
        fields.type: "php-fpm"

#================================ Processors =====================================
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
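With the config in place, Filebeat can check the syntax and the connection to Elasticsearch before being enabled:

$ filebeat test config
Config OK
$ filebeat test output
$ systemctl enable filebeat && systemctl start filebeat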
Notes:
The php-fpm error.log format looks like this:
[29-Oct-2019 11:33:01 PRC] PHP Fatal error: Call to a member function getBSECollection() on null in /var/html/wwwroot/framework/Excel5.php on line 917
We want to extract the timestamp, the "PHP Fatal error" message, and the offending file and line. When collecting with Logstash this would be a grok filter; with Filebeat it is handled by an ingest pipeline instead. Roughly, Filebeat ships the raw line to Elasticsearch, and the ingest pipeline reshapes it into the fields we want.
So the pipeline has to be created in Elasticsearch (done here from the elk-213 node):
[root@ickey-elk-213 ~]# cat phperror-pipeline.json
{
    "description": "php error log pipeline",
    "processors": [
        {
            "grok": {
                "field": "message",
                "patterns": ["%{DATA:datatime} PHP .*: %{DATA:errorinfo} in %{DATA:error-url} on line %{NUMBER:error-line}"]
            }
        }
    ]
}
Apply it:
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_ingest/pipeline/php-error-pipeline' -d@phperror-pipeline.json
Query it:
curl -H 'Content-Type: application/json' -XGET 'http://localhost:9200/_ingest/pipeline/php-error-pipeline'
Delete it:
curl -H 'Content-Type: application/json' -XDELETE 'http://localhost:9200/_ingest/pipeline/php-error-pipeline'
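The pipeline can also be dry-run against the sample line above with the _simulate API before any real traffic flows through it:

curl -H 'Content-Type: application/json' -XPOST 'http://localhost:9200/_ingest/pipeline/php-error-pipeline/_simulate' -d '
{
  "docs": [
    { "_source": { "message": "[29-Oct-2019 11:33:01 PRC] PHP Fatal error:  Call to a member function getBSECollection() on null in /var/html/wwwroot/framework/Excel5.php on line 917" } }
  ]
}'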
Collecting database (MySQL error) logs:
filebeat.inputs:
- type: log
  paths:
    - /var/log/mysql/mysql.err
  fields:
    type: "mysqlerr"
  exclude_lines: ['Note']         # drop informational "Note" lines
  multiline.pattern: '^[0-9]{4}.*'
  multiline.negate: true
  multiline.match: after

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true

setup.template.settings:
  index.number_of_shards: 2

setup.kibana:
  host: "172.16.0.219:5601"

output.elasticsearch:
  hosts: ["172.16.0.213:9200"]
  indices:
    - index: "mysql-err-%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "mysqlerr"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
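The multiline settings are what stitch continuation lines together: negate: true plus match: after means any line that does not start with a four-digit year is appended to the preceding line that does. For example, these two physical lines (an illustrative sample, not taken from the original servers) become a single event:

2019-10-29 11:33:01 140035 [ERROR] InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files.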
5. Installing and Configuring elasticsearch-head
elasticsearch-head is an open-source web UI for browsing and operating on the indices in Elasticsearch.
1) Installation
$ git clone https://github.com/mobz/elasticsearch-head.git
$ cd elasticsearch-head
$ npm install grunt --save --registry=https://registry.npm.taobao.org
└─┬ grunt@1.0.1
.....output omitted....
├── path-is-absolute@1.0.1
└── rimraf@2.2.8
npm WARN elasticsearch-head@0.0.0 license should be a valid SPDX license expression
$ npm install --registry=https://registry.npm.taobao.org
npm WARN deprecated http2@3.3.7: Use the built-in module in node 9.0.0 or newer, instead
[ ............] - fetchMetadata: verb afterAdd /root/.npm/debug/2.6.9/package/package.json written
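For head to talk to the cluster, Elasticsearch must also allow cross-origin requests; add the following two lines to elasticsearch.yml on the nodes and restart them (the standard requirement from the head README):

http.cors.enabled: true
http.cors.allow-origin: "*"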
This step takes a while to finish.
2) Configure it as a service that starts on boot
$ cat /usr/bin/elasticsearch-head
#!/bin/bash
# chkconfig: - 25 75
# description: starts and stops the elasticsearch-head
data="cd /usr/local/src/elasticsearch-head/; nohup npm run start > /dev/null 2>&1 &"
START() {
    eval $data && echo -e "elasticsearch-head start\033[32m ok\033[0m"
}
STOP() {
    ps -ef | grep grunt | grep -v "grep" | awk '{print $2}' | xargs kill -s 9 > /dev/null && echo -e "elasticsearch-head stop\033[32m ok\033[0m"
}
STATUS() {
    PID=$(ps aux | grep grunt | grep -v grep | awk '{print $2}')
}
case "$1" in
    start)
        START
        ;;
    stop)
        STOP
        ;;
    restart)
        STOP
        sleep 3
        START
        ;;
    *)
        echo "Usage: elasticsearch-head (start|stop|restart)"
        ;;
esac
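Make the wrapper executable, then manage head like any other service:

$ chmod +x /usr/bin/elasticsearch-head
$ elasticsearch-head start
elasticsearch-head start ok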
Access:
http://172.16.0.219:9100, as shown in the screenshot: