Chinese guide: https://www.gitbook.com/book/chenryn/elk-stack-guide-cn/details
The ELK Stack consists of Elasticsearch, Logstash, and Kibana.
Elasticsearch is a search engine used to search, analyze, and store logs. It is distributed, meaning it scales horizontally, discovers nodes automatically, and shards indices automatically. Documentation: https://www.elastic.co/guide/cn/elasticsearch/guide/current/index.html
Logstash collects logs, parses them into JSON, and hands them to Elasticsearch.
Kibana is a data visualization component that presents the processed results through a web interface.
Beats serves here as a lightweight log shipper; the Beats family actually has five members.
Early ELK architectures used Logstash both to collect and to parse logs, but Logstash is comparatively heavy on memory, CPU, and I/O. Compared with Logstash, the CPU and memory footprint of Beats is almost negligible.
X-Pack is a paid extension pack that adds security, alerting, monitoring, reporting, and graph capabilities to the Elastic Stack.
Method 1: install the JDK with yum

[root@linux-node1 ~]# yum install -y java
[root@linux-node1 ~]# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)

Method 2: install the JDK from the Oracle tarball

Download:
[root@linux-node1 ~]# wget http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz

Configure the Java environment:
[root@linux-node1 ~]# tar zxf jdk-8u151-linux-x64.tar.gz -C /usr/local/
[root@linux-node1 ~]# ln -s /usr/local/jdk1.8.0_151 /usr/local/jdk
[root@linux-node1 ~]# vim /etc/profile
export JAVA_HOME=/usr/local/jdk
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
[root@linux-node1 ~]# source /etc/profile
[root@linux-node1 ~]# java -version

★★★★ Note: the JDK must also be installed on the linux-node2 node.
Elasticsearch must also be installed on the linux-node2 node.
Installing Elasticsearch via yum can be very slow, so downloading the RPM first is recommended: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.0.rpm
Install Elasticsearch:

[root@linux-node1 ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.0.rpm
[root@linux-node1 ~]# yum install -y elasticsearch-6.0.0.rpm

Configure Elasticsearch; linux-node2 is configured as an identical node. Nodes find each other through the cluster name; discovery uses multicast by default, and if multicast lookup fails, switch to unicast.

[root@linux-node1 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: elk-cluster        # cluster name
node.name: elk-node1             # node name; must be unique within a cluster
path.data: /data/elkdata         # data path
path.logs: /data/logs            # log path
bootstrap.memory_lock: true      # lock the ES heap in memory so it is never swapped out
network.host: 192.168.56.11      # listen address
http.port: 9200                  # port for user-facing access; 9300 is used for inter-node communication
discovery.zen.ping.unicast.hosts: ["192.168.56.11","192.168.56.12"]   # unicast (one host is enough; multicast can be used in production)

★★★ Note: memory locking requires more than 2 GB of RAM, otherwise Elasticsearch fails to start. To enable memory locking on 6.x, also make the following change:

[root@linux-node1 ~]# systemctl edit elasticsearch
[Service]
LimitMEMLOCK=infinity
[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 ~]# mkdir /data/{elkdata,logs}     # create the data and log directories
[root@linux-node1 ~]# chown elasticsearch.elasticsearch /data -R
[root@linux-node1 ~]# systemctl start elasticsearch.service
[root@linux-node1 ~]# netstat -tulnp | grep java
tcp6  0  0 192.168.56.11:9200  :::*  LISTEN  26866/java
tcp6  0  0 192.168.56.11:9300  :::*  LISTEN  26866/java

Copy the configuration file to linux-node2:

[root@linux-node1 ~]# scp /etc/elasticsearch/elasticsearch.yml 192.168.56.12:/etc/elasticsearch/
[root@linux-node2 ~]# vim /etc/elasticsearch/elasticsearch.yml
Change:
node.name: elk-node2
network.host: 192.168.56.12
[root@linux-node2 ~]# mkdir /data/{elkdata,logs}
[root@linux-node2 ~]# chown elasticsearch.elasticsearch /data -R
[root@linux-node2 ~]# systemctl start elasticsearch.service
[root@linux-node2 ~]# netstat -tulnp | grep java
tcp6  0  0 192.168.56.12:9200  :::*  LISTEN  16346/java
tcp6  0  0 192.168.56.12:9300  :::*  LISTEN  16346/java
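A recurring pitfall with elasticsearch.yml is omitting the space after the colon (YAML requires "key: value", not "key:value", and Elasticsearch will fail to parse the latter). A minimal sketch for catching such lines before starting the service — the file path and contents here are just an illustration:

```shell
# flag settings written as "key:value" instead of "key: value"
check_yml() {
    # prints offending lines with their line numbers; returns 1 if any are found
    grep -nE '^[a-z_.]+:[^ ]' "$1" && return 1
    return 0
}

# demo against a throwaway file containing one bad line
cat > /tmp/es-demo.yml <<'EOF'
cluster.name: elk-cluster
node.name:elk-node1
EOF
check_yml /tmp/es-demo.yml || echo "fix the flagged lines before starting elasticsearch"
```

The same check can be pointed at /etc/elasticsearch/elasticsearch.yml on each node after editing it.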
1. Download and install the GPG key:
[root@linux-node1 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
2. Add the yum repository:
[root@linux-node1 ~]# vim /etc/yum.repos.d/es.repo
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
3. Install Elasticsearch:
[root@hadoop-node1 ~]# yum install -y elasticsearch
You can check the state of Elasticsearch with the following commands.
[root@linux-node1 ~]# curl http://192.168.56.11:9200/_cluster/health?pretty=true
{
  "cluster_name" : "elk-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
[root@linux-node2 ~]# curl http://192.168.56.12:9200/_cluster/health?pretty=true
(returns the same output as above, confirming both nodes see the same two-node cluster)

[root@linux-node1 ~]# curl -i -XGET 'http://192.168.56.11:9200/_count?'   # see what ES currently holds
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 71

{"count":0,"_shards":{"total":0,"successful":0,"skipped":0,"failed":0}}

Explanation: the response status is 200 and the count is 0, so nothing has been indexed yet.

curl http://192.168.56.11:9200/_cluster/health?pretty   # health check
curl http://192.168.56.11:9200/_cluster/state?pretty    # detailed cluster information

Note: checking cluster information from the command line all the time is impractical, so we use an Elasticsearch plugin instead: head. Plugins add functionality to Elasticsearch; most of the official ones are paid, but community developers also provide plugins, including ones for viewing and managing cluster state and configuration.
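For scripting, the "status" field (green/yellow/red) can be pulled out of the health JSON above without extra tools. A minimal sed-based sketch, assuming the pretty-printed format shown:

```shell
# extract the "status" value from _cluster/health?pretty output read on stdin
health_status() {
    sed -n 's/.*"status" : "\([a-z]*\)".*/\1/p'
}

# usage against the cluster above (host is this guide's node):
#   curl -s http://192.168.56.11:9200/_cluster/health?pretty | health_status
```

For the output shown above this would print "green"; the result can then drive a cron-based alert or a monitoring check.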
Purpose of the plugin: cluster management.
GitHub download: https://github.com/mobz/elasticsearch-head
Install the head plugin:

[root@linux-node1 ~]# wget https://nodejs.org/dist/v8.10.0/node-v8.10.0-linux-x64.tar.xz
[root@linux-node1 ~]# tar xf node-v8.10.0-linux-x64.tar.xz
[root@linux-node1 ~]# mv node-v8.10.0-linux-x64 /usr/local/node
[root@linux-node1 ~]# vim /etc/profile
export NODE_HOME=/usr/local/node
export PATH=$PATH:$NODE_HOME/bin
[root@linux-node1 ~]# source /etc/profile
[root@linux-node1 ~]# which node
/usr/local/node/bin/node
[root@linux-node1 ~]# node -v
v8.10.0
[root@linux-node1 ~]# which npm
/usr/local/node/bin/npm
[root@linux-node1 ~]# npm -v
5.6.0
[root@linux-node1 ~]# npm install -g cnpm --registry=https://registry.npm.taobao.org
[root@linux-node1 ~]# npm install -g grunt-cli --registry=https://registry.npm.taobao.org
[root@linux-node1 ~]# grunt -version
grunt-cli v1.2.0
[root@linux-node1 ~]# wget https://github.com/mobz/elasticsearch-head/archive/master.zip
[root@linux-node1 ~]# unzip master.zip
[root@linux-node1 ~]# cd elasticsearch-head-master/
[root@linux-node1 elasticsearch-head-master]# vim Gruntfile.js
 90   connect: {
 91     server: {
 92       options: {
 93         hostname: '192.168.56.11',
 94         port: 9100,
 95         base: '.',
 96         keepalive: true
 97       }
 98     }
 99   }
[root@linux-node1 elasticsearch-head-master]# vim _site/app.js
4354   this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.56.11:9200";
[root@linux-node1 elasticsearch-head-master]# cnpm install
[root@linux-node1 elasticsearch-head-master]# grunt --version
grunt-cli v1.2.0
grunt v1.0.1
[root@linux-node1 elasticsearch-head-master]# vim /etc/elasticsearch/elasticsearch.yml
# ---------------------------------- Head -------------------------------------
# add the following two lines:
http.cors.enabled: true
http.cors.allow-origin: "*"
[root@linux-node1 elasticsearch-head-master]# systemctl restart elasticsearch
[root@linux-node1 elasticsearch-head-master]# systemctl status elasticsearch
[root@linux-node1 elasticsearch-head-master]# grunt server &
(node:2833) ExperimentalWarning: The http2 module is an experimental API.
Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://192.168.56.11:9100

Note: on Elasticsearch 2.x and earlier, the head plugin could be installed with /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head; on Elasticsearch 5.x and later it must be installed through npm.

Open http://192.168.56.11:9100 in a browser to see the status of each node, as shown:
Logstash is an open-source data collection engine that scales horizontally. It has the most plugins of any ELK component: it can receive data from different sources and send it, uniformly, to one or more destinations, which may themselves differ.
The basic Logstash collection pipeline: input –> codec –> filter –> codec –> output

1. input: where logs are collected from.
2. filter: filtering applied before events are sent on.
3. output: send to Elasticsearch or to a Redis message queue.
4. codec: print to the console, which is convenient for testing while experimenting.
5. When data volume is small, logs can be collected into monthly indices.
Environment prerequisites: firewall and SELinux disabled, Java installed.
Logstash download: https://artifacts.elastic.co/downloads/logstash/logstash-6.0.0.rpm

[root@linux-node1 ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.0.0.rpm
[root@linux-node1 ~]# yum install -y logstash-6.0.0.rpm
[root@linux-node1 ~]# rpm -ql logstash
[root@linux-node1 ~]# chown -R logstash.logstash /usr/share/logstash/data/queue
# change ownership to the logstash user and group, otherwise errors appear in the log at startup

# install Logstash on node2 as well
[root@linux-node2 ~]# yum install -y logstash-6.0.0.rpm
[root@linux-node1 ~]# ll /etc/logstash/conf.d/     # Logstash's main configuration directory
total 0
input { specify the input } output { specify the output }
Use the rubydebug codec to print to the console for testing.
# standard input and output
[root@linux-node1 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout { codec => rubydebug} }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
The stdin plugin is now waiting for input:
hello                                          # typed input
{
    "@version" => "1",                         # @version marks the event format version; each event is a Ruby object
    "host" => "linux-node1",                   # host marks where the event occurred
    "@timestamp" => 2017-12-08T14:56:25.395Z,  # @timestamp marks when the event occurred
    "message" => "hello"                       # the actual message content
}
[root@linux-node1 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin{} } output { file { path => "/tmp/test-%{+YYYY.MM.dd}.log"} }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
The stdin plugin is now waiting for input:
hello world
welcome to beijing!
[root@linux-node1 ~]# tailf /tmp/test-2018.03.14.log
{"@version":"1","host":"linux-node1","@timestamp":"2018-03-14T07:57:27.096Z","message":"hello world"}
{"@version":"1","host":"linux-node1","@timestamp":"2018-03-14T07:58:29.074Z","message":"welcome to beijing!"}

Enable gzip-compressed output:
[root@linux-node1 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { file { path => "/tmp/test-%{+YYYY.MM.dd}.log.tar.gz" gzip => true } }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
what's your name?
[root@linux-node1 ~]# ll /tmp/test-2018.03.14.log.tar.gz
-rw-r--r-- 1 root root 117 Mar 14 16:00 /tmp/test-2018.03.14.log.tar.gz
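The %{+YYYY.MM.dd} in the output path is expanded from each event's @timestamp, so output rolls into one file per day. A shell sketch of the equivalent naming, handy for checking which file today's events land in (it assumes events carry the current date):

```shell
# compute today's output file name the way logstash's %{+YYYY.MM.dd} pattern does
today_log="/tmp/test-$(date +%Y.%m.%d).log"
echo "$today_log"
```

On the day shown above this would print /tmp/test-2018.03.14.log.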
[root@linux-node1 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { elasticsearch { hosts => ["192.168.56.11:9200"] index => "logstash-test-%{+YYYY.MM.dd}" } }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
The stdin plugin is now waiting for input:
what's your name ?
my name is kim.

Verify that the Elasticsearch server received the data:
[root@linux-node1 ~]# ll /data/elkdata/nodes/0/indices/
total 0
drwxr-xr-x 8 elasticsearch elasticsearch 65 Mar 14 16:05 cV8nUO0WSkmR990aBH0RiA
drwxr-xr-x 8 elasticsearch elasticsearch 65 Mar 14 15:18 Rca-tNpDSt20jWxEheyIrQ
In the head plugin you can now see the index logstash-test-2018.03.14, and browsing its data shows the input entered above.
★★★★★
Delete the test index from this interface ("Actions" –> "Delete"), then check the directory above again.
Tip: when deleting data, always delete it from this interface; never remove the directories above by hand. Every node in the cluster holds a copy of this data, and deleting it from one node can leave Elasticsearch unable to start.
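Besides the head interface, an index can also be deleted safely through the Elasticsearch REST API. A hedged sketch — the host and index name are examples from this guide, and the function only prints the request rather than executing it, so nothing is deleted by accident:

```shell
# build (and print) the DELETE request for an index; pipe the output to sh to run it
delete_index() {
    echo "curl -XDELETE http://$1:9200/$2"
}

delete_index 192.168.56.11 logstash-test-2018.03.14
```

Running the printed curl command removes the index cluster-wide, which is the supported route, unlike deleting files under path.data.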
Kibana is an open-source analytics and visualization platform designed for Elasticsearch. You can use Kibana to search, view, and interact with data stored in Elasticsearch indices, and easily perform advanced data analysis and visualization, presented as charts.
Kibana download: https://artifacts.elastic.co/downloads/kibana/kibana-6.0.0-x86_64.rpm

[root@linux-node1 ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.0.0-x86_64.rpm
[root@linux-node1 ~]# yum install -y kibana-6.0.0-x86_64.rpm
[root@linux-node1 ~]# vim /etc/kibana/kibana.yml
[root@linux-node1 ~]# grep "^[a-Z]" /etc/kibana/kibana.yml
server.port: 5601                  # listening port
server.host: "192.168.56.11"       # listening address; an internal IP is recommended
elasticsearch.url: "http://192.168.56.11:9200"   # URL Kibana uses to reach Elasticsearch; 192.168.56.12 would also work, since the two nodes form one cluster
[root@linux-node1 ~]# systemctl enable kibana
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /etc/systemd/system/kibana.service.
[root@linux-node1 ~]# systemctl start kibana

The listening port is 5601:
[root@linux-node1 ~]# ss -tnl
State  Recv-Q Send-Q Local Address:Port            Peer Address:Port
LISTEN 0      128    *:9100                        *:*
LISTEN 0      128    *:22                          *:*
LISTEN 0      100    127.0.0.1:25                  *:*
LISTEN 0      128    192.168.56.11:5601            *:*
LISTEN 0      128    ::ffff:192.168.56.11:9200     :::*
LISTEN 0      128    ::ffff:192.168.56.11:9300     :::*
LISTEN 0      128    :::22                         :::*
LISTEN 0      100    ::1:25                        :::*
LISTEN 0      80     :::3306                       :::*
Open 192.168.56.11:5601 in a browser, as shown:
You can check http://192.168.56.11:5601/status to see whether Kibana is healthy; if it is not, the page above will be unreachable.
To display the logs collected in the previous section in Kibana, add the index, as shown:
Click "Discover" to view the collected data, as shown:
Prerequisite: the logstash user needs read permission on the log files being collected and write permission on the files being written.
Edit the Logstash configuration file:

[root@linux-node1 ~]# vim /etc/logstash/conf.d/system.conf
input {
  file {
    path => "/var/log/messages"    # log path
    type => "systemlog"            # type; user-defined, useful for routing output when collecting several logs
    start_position => "beginning"  # where Logstash starts reading the file; the default is the end ("end"), i.e. the process tails the file like tail -F. To import pre-existing data, set this to "beginning" so Logstash reads from the start, like less +F.
    stat_interval => "2"           # how often Logstash checks the watched file for updates; the default is 1 second
  }
}
output {
  elasticsearch {
    hosts => ["192.168.56.11:9200"]                # target hosts
    index => "logstash-systemlog-%{+YYYY.MM.dd}"   # index name
  }
}

[root@linux-node1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/system.conf -t   # check the configuration for syntax errors
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
Configuration OK
[root@linux-node1 ~]# ll /var/log/messages
-rw-------. 1 root root 791209 Dec 27 11:43 /var/log/messages
# The log file has 600 permissions, while Logstash runs as the logstash user, so it cannot read the file. The permissions must be relaxed, otherwise a permission-denied error occurs; check /var/log/logstash/logstash-plain.log for errors.
[root@linux-node1 ~]# chmod 644 /var/log/messages
[root@linux-node1 ~]# systemctl restart logstash
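Since the chmod step above is easy to forget, here is a small sketch that checks whether "others" (which covers the logstash user, as it owns neither /var/log/messages nor the mysql-owned logs) have the read bit set. It relies on GNU stat, which is what the CentOS hosts in this guide provide:

```shell
# return 0 if users other than the owner and group can read the file
others_can_read() {
    perms=$(stat -c %a "$1") || return 1      # numeric mode, e.g. 644
    case "${perms#"${perms%?}"}" in           # last digit = permissions for "others"
        4|5|6|7) return 0 ;;
        *)       return 1 ;;
    esac
}

# demo on a throwaway file
f=/tmp/perm-demo; touch "$f"
chmod 600 "$f"; others_can_read "$f" || echo "unreadable by logstash: chmod 644 needed"
chmod 644 "$f"; others_can_read "$f" && echo "readable"
```

Pointing the function at /var/log/messages before restarting Logstash avoids a silent permission-denied failure.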
Check the management interface for the corresponding index (logstash-systemlog-2017.12.27), as shown:
Add it to Kibana for display by creating the index pattern:
View the logs:
Modify the Logstash configuration to additionally collect the MariaDB database log:

[root@linux-node1 ~]# vim /etc/logstash/conf.d/system.conf
input {
  file {
    path => "/var/log/messages"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "2"
  }
  file {
    path => "/var/log/mariadb/mariadb.log"
    type => "mariadblog"
    start_position => "beginning"
    stat_interval => "2"
  }
}
output {
  if [type] == "systemlog" {     # use if to test the type; one output block can write to several destinations, here elasticsearch and file
    elasticsearch {
      hosts => ["192.168.56.11:9200"]
      index => "logstash-systemlog-%{+YYYY.MM.dd}"
    }
    file {
      path => "/tmp/logstash-systemlog-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "mariadblog" {
    elasticsearch {
      hosts => ["192.168.56.11:9200"]
      index => "logstash-mariadblog-%{+YYYY.MM.dd}"
    }
    file {
      path => "/tmp/logstash-mariadblog-%{+YYYY.MM.dd}"
    }
  }
}

Check the configuration syntax:
[root@linux-node1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/system.conf -t
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
Configuration OK

Restart Logstash:
[root@linux-node1 ~]# systemctl restart logstash

Adjust the MariaDB log permissions:
[root@linux-node1 ~]# ll /var/log/mariadb/ -d
drwxr-x--- 2 mysql mysql 24 Dec 4 17:43 /var/log/mariadb/
[root@linux-node1 ~]# chmod 755 /var/log/mariadb/
[root@linux-node1 ~]# ll /var/log/mariadb/mariadb.log
-rw-r----- 1 mysql mysql 114993 Dec 27 14:23 /var/log/mariadb/mariadb.log
[root@linux-node1 ~]# chmod 644 /var/log/mariadb/mariadb.log
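The if-per-type pattern above extends to any number of sources: each new log needs one more file input and one more output branch. For example, a hypothetical nginx access log (path and index name are illustrative, not part of this setup) would be added like this:

```
input {
  file {
    path => "/var/log/nginx/access.log"   # hypothetical extra source
    type => "nginxlog"
    start_position => "beginning"
    stat_interval => "2"
  }
}
output {
  if [type] == "nginxlog" {
    elasticsearch {
      hosts => ["192.168.56.11:9200"]
      index => "logstash-nginxlog-%{+YYYY.MM.dd}"
    }
  }
}
```

The type field keeps each source's events in its own daily index, so they can be given separate index patterns in Kibana.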
View through the head plugin:
Verify that the log data was also collected under /tmp:
[root@linux-node1 ~]# ll /tmp/logstash-*
-rw-r--r-- 1 logstash logstash 288449 Dec 27 14:27 /tmp/logstash-mariadblog-2017.12.27
-rw-r--r-- 1 logstash logstash  53385 Dec 27 14:28 /tmp/logstash-systemlog-2017.12.27
Create the index pattern in Kibana: