While studying the middleware used by large websites, I got interested in Elasticsearch, so I wrote down the setup process.
Why use ELK
ELK is actually three tools: Elasticsearch + Logstash + Kibana. Together they collect logs, analyze them, and present the results in a visual UI. When business volume is small, printing logs on the server with SLF4J + Logger and running simple queries with grep is enough, but as the business grows so does the data volume, and ELK makes it possible to collect and analyze logs at that scale.
Here is a simple architecture diagram I sketched.
This guide mainly covers configuration on Mac and Linux; Windows is largely the same. The prerequisite, of course, is that you have JDK 1.8 or later installed.
```
[root@VM_234_23_centos ~]# java -version
java version "1.8.0_161"
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)
```
Note:
Higher ELK versions require a correspondingly recent JDK. This article configures ELK 6.x, which requires JDK 1.8 or newer.
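Besides running `java -version`, you can also confirm the requirement from inside any Java process. A minimal stdlib-only sketch (the "1."-prefix parsing handles the pre-JDK-9 version scheme):

```java
public class JdkCheck {
    public static void main(String[] args) {
        // "1.8" for JDK 8 and earlier; "9", "11", "17", ... for newer JDKs
        String spec = System.getProperty("java.specification.version");
        int major = spec.startsWith("1.")
                ? Integer.parseInt(spec.substring(2))
                : Integer.parseInt(spec);
        System.out.println("Java specification version: " + spec);
        System.out.println(major >= 8 ? "OK for ELK 6.x" : "Too old for ELK 6.x");
    }
}
```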
Elasticsearch
Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data and helps you discover the expected and uncover the unexpected.
Mac install and run
Install: `brew install elasticsearch`
Run: `elasticsearch`
Linux: download from the official Elasticsearch site (or download locally and upload via FTP or a similar tool). For the .tar.gz file, extract it with tar, then run the binary under the bin directory:
```
[root@VM_234_23_centos app]# curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz
[root@VM_234_23_centos app]# tar -zxvf elasticsearch-6.2.4.tar.gz
[root@VM_234_23_centos app]# cd elasticsearch-6.2.4
[root@VM_234_23_centos elasticsearch-6.2.4]# ./bin/elasticsearch
```
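Once it is up, `curl localhost:9200` returns a JSON blob describing the node. As a quick sanity check you can pull the version number out with plain string handling; a minimal Java sketch (the sample response below is abbreviated, not the full payload):

```java
public class EsVersion {
    // Abbreviated sample of the JSON returned by GET localhost:9200
    static final String SAMPLE = "{\"name\":\"f2s1SD8\",\"cluster_name\":\"elasticsearch\","
            + "\"version\":{\"number\":\"6.2.4\"},\"tagline\":\"You Know, for Search\"}";

    // Extract the value of "number" inside the "version" object
    static String versionNumber(String json) {
        int key = json.indexOf("\"number\":\"");
        int start = key + "\"number\":\"".length();
        int end = json.indexOf('"', start);
        return json.substring(start, end);
    }

    public static void main(String[] args) {
        System.out.println("Elasticsearch version: " + versionNumber(SAMPLE));
    }
}
```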
Note:
On a Linux machine, elasticsearch must run under a dedicated (non-root) user and group; see the pitfall notes at the end of this article for the issues I hit installing Elastic on Linux.
Logstash
Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite "stash". (Ours is Elasticsearch, of course.) - official description
1. Install
Mac install:

```
brew install logstash
```

Linux install:
```
[root@VM_234_23_centos app]# curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-6.3.2.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  137M  100  137M    0     0  5849k      0  0:00:24  0:00:24 --:--:-- 6597k
[root@VM_234_23_centos app]# tar -zxvf logstash-6.3.2.tar.gz
```
2. Edit the config file

```
vim /etc/logstash.conf
```

In the conf file, specify the input/output plugins to use and point the elasticsearch hosts at your instance:
```
input {
  stdin { }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
```
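Conceptually, this pipeline turns every stdin line into an event, ships it to Elasticsearch, and also prints it via the rubydebug codec. A toy Java sketch of that transformation (the field names mimic Logstash's defaults; this is an illustration, not Logstash's actual implementation):

```java
import java.time.Instant;
import java.util.LinkedHashMap;
import java.util.Map;

public class ToyPipeline {
    // Wrap one input line into an event, as Logstash's stdin input does
    static Map<String, String> toEvent(String line) {
        Map<String, String> event = new LinkedHashMap<>();
        event.put("message", line);
        event.put("@timestamp", Instant.now().toString());
        event.put("host", "localhost"); // real Logstash uses the machine hostname
        return event;
    }

    public static void main(String[] args) {
        // rubydebug-style printout of a sample line
        toEvent("hello elk").forEach(
                (k, v) -> System.out.println("  \"" + k + "\" => \"" + v + "\""));
    }
}
```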
3. Run

```
bin/logstash -f logstash.conf
```

```
{
  "host": "=-=",
  "version": "6.2.4",
  "http_address": "127.0.0.1:9600",
  "id": "5b47e81f-bdf8-48fc-9537-400107a13bd2",
  "name": "=-=",
  "build_date": "2018-04-12T22:29:17Z",
  "build_sha": "a425a422e03087ac34ad6949f7c95ec6d27faf14",
  "build_snapshot": false
}
```
In the elasticsearch log, you can also see logstash joining normally:
```
[2018-08-16T14:08:36,436][INFO ][o.e.c.m.MetaDataIndexTemplateService] [f2s1SD8] adding template [logstash] for index patterns [logstash-*]
```
Seeing these responses means everything was installed and started successfully.
Pitfall:
At the run step, you may hit a memory allocation error:
```
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c5330000, 986513408, 0) failed; error='Cannot allocate memory' (errno=12)
```
This error is clearly an out-of-memory problem. My server is a Tencent Cloud box with 1 GB of RAM (if money is no object, feel free to buy a bigger one =-=), and elasticsearch was already running, so logstash could not allocate enough memory; the fix is to shrink its JVM heap settings.
```
[root@VM_234_23_centos logstash-6.3.2]# cd config/
[root@VM_234_23_centos config]# ll
total 28
-rw-r--r-- 1 root root 1846 Jul 20 14:19 jvm.options
-rw-r--r-- 1 root root 4466 Jul 20 14:19 log4j2.properties
-rw-r--r-- 1 root root 8097 Jul 20 14:19 logstash.yml
-rw-r--r-- 1 root root 3244 Jul 20 14:19 pipelines.yml
-rw-r--r-- 1 root root 1696 Jul 20 14:19 startup.options
[root@VM_234_23_centos config]# vim jvm.options
```
Change `-Xms1g -Xmx1g` to:

```
-Xms256m
-Xmx256m
```
Then it starts normally.
Kibana
Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack, so you can get to the bottom of anything: for example, why you were paged at 2:00 a.m., or what impact rain might have on your quarterly numbers. (And the charts it renders look great.)
1. Install
Mac install:

```
brew install kibana
```

Linux install, from the official download page:
```
[root@VM_234_23_centos app]# curl -L -O https://artifacts.elastic.co/downloads/kibana/kibana-6.3.2-linux-x86_64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0  195M    0  271k    0     0  19235      0  2:57:54  0:00:14  2:57:40 26393
```
At this step the download can be extremely slow, so I downloaded the archive locally and transferred it to the server with the rz command:
```
[root@VM_234_23_centos app]# rz
rz waiting to receive.
Starting zmodem transfer.  Press Ctrl+C to cancel.
Transferring kibana-6.3.2-linux-x86_64.tar.gz...
  100%  200519 KB  751 KB/sec  00:04:27  0 Errors
[root@VM_234_23_centos app]# tar -zxvf kibana-6.3.2-linux-x86_64.tar.gz
```
2. Edit the config
Edit the config/kibana.yml file and set elasticsearch.url to point at your Elasticsearch instance.
If, like me, you use the default configuration, you can leave this file untouched.
3. Start

```
[root@VM_234_23_centos kibana]# ./bin/kibana
```
4. Visit http://localhost:5601/app/kib...
The UI exposes a lot of features; next, let's wire it up with SLF4J + Logback.
Integrating Spring + Logstash
1. Edit logstash.conf, then restart logstash
```
input {
  # stdin { }
  tcp {
    # host:port is the destination configured in the appender below;
    # logstash acts as a server here, listening on port 9250
    # for messages emitted by logback
    host => "127.0.0.1"
    port => 9250
    mode => "server"
    tags => ["tags"]
    codec => json_lines
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
```
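The tcp input with `codec => json_lines` expects one JSON object per line, newline-terminated, which is the framing logstash-logback-encoder sends over the socket. A hand-rolled sketch of such an event (the field names here are illustrative, not the encoder's exact output):

```java
public class JsonLinesEvent {
    // Build one newline-terminated JSON event, as the json_lines codec expects
    static String event(String appname, String level, String message) {
        return String.format(
                "{\"appname\":\"%s\",\"level\":\"%s\",\"message\":\"%s\"}%n",
                appname, level, message);
    }

    public static void main(String[] args) {
        String line = event("ye_test", "INFO", "user login ok");
        System.out.print(line);
        // A real sender would write this line to a Socket("127.0.0.1", 9250)
    }
}
```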
2. Add the dependency to your Java application

```
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.2</version>
</dependency>
```
3. Configure the log output in logback.xml

```
<!-- export logs to Logstash -->
<appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>localhost:9250</destination>
    <!-- an encoder is required; several are available -->
    <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder">
        <!-- "appname":"ye_test" is used when naming the created index,
             and the field is added to every generated document -->
        <customFields>{"appname":"ye_test"}</customFields>
    </encoder>
</appender>
<root level="INFO">
    <appender-ref ref="stash"/>
</root>
```
Since I did not specify an index in step 1, Logstash automatically created an index named logstash-<timestamp> for me when the service started.
4. Add the index pattern in Kibana
5. Browse the indexed documents under Discover in the left-hand menu
6. Add charts under Visualize
There are many more features I am still exploring; get the environment up first, and you will have the motivation to keep learning.
Pitfall notes
Startup error
```
uncaught exception in thread [main] org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
```
Cause: elasticsearch refuses to run with root privileges.
Fix: switch to a dedicated user
```
[root@VM_234_23_centos ~]# groupadd es
[root@VM_234_23_centos ~]# useradd es -g es -p es
[root@VM_234_23_centos ~]# chown es:es /home/app/elasticsearch/
# switch user; remember to use "su -" so the login environment variables are loaded
[root@VM_234_23_centos ~]# sudo su - es
```
```
Exception in thread "main" java.nio.file.AccessDeniedException:
```
Cause: ES was started by a non-root user whose file permissions were insufficient, so execution was denied.
Fix: `chown -R user:group <file or directory>`
For example, `chown -R abc:abc searchengine`, then ES starts normally.
elasticsearch prints Killed after startup
```
[2018-07-13T10:19:44,775][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [aggs-matrix-stats]
[2018-07-13T10:19:44,779][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [analysis-common]
[2018-07-13T10:19:44,780][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [ingest-common]
[2018-07-13T10:19:44,780][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [lang-expression]
[2018-07-13T10:19:44,780][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [lang-mustache]
[2018-07-13T10:19:44,780][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [lang-painless]
[2018-07-13T10:19:44,780][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [mapper-extras]
[2018-07-13T10:19:44,780][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [parent-join]
[2018-07-13T10:19:44,780][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [percolator]
[2018-07-13T10:19:44,780][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [rank-eval]
[2018-07-13T10:19:44,781][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [reindex]
[2018-07-13T10:19:44,781][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [repository-url]
[2018-07-13T10:19:44,781][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [transport-netty4]
[2018-07-13T10:19:44,781][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [tribe]
[2018-07-13T10:19:44,781][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [x-pack-core]
[2018-07-13T10:19:44,781][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [x-pack-deprecation]
[2018-07-13T10:19:44,781][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [x-pack-graph]
[2018-07-13T10:19:44,781][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [x-pack-logstash]
[2018-07-13T10:19:44,782][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [x-pack-ml]
[2018-07-13T10:19:44,782][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [x-pack-monitoring]
[2018-07-13T10:19:44,782][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [x-pack-rollup]
[2018-07-13T10:19:44,782][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [x-pack-security]
[2018-07-13T10:19:44,782][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [x-pack-sql]
[2018-07-13T10:19:44,782][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [x-pack-upgrade]
[2018-07-13T10:19:44,782][INFO ][o.e.p.PluginsService ] [f2s1SD8] loaded module [x-pack-watcher]
[2018-07-13T10:19:44,783][INFO ][o.e.p.PluginsService ] [f2s1SD8] no plugins loaded
Killed
```
Edit jvm.options under the config directory and set a smaller heap:

```
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms512m
-Xmx512m
```
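To verify that the smaller heap actually took effect, any Java process can report its own ceiling via the standard library:

```java
public class HeapCheck {
    public static void main(String[] args) {
        // With -Xmx512m this prints roughly 512; the exact value varies by GC
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap (MB): " + maxMb);
    }
}
```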
Insufficient virtual memory
```
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
```
```
[2018-07-13T14:02:06,749][DEBUG][o.e.a.ActionModule ] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2018-07-13T14:02:07,249][INFO ][o.e.d.DiscoveryModule ] [f2s1SD8] using discovery type [zen]
[2018-07-13T14:02:09,173][INFO ][o.e.n.Node ] [f2s1SD8] initialized
[2018-07-13T14:02:09,174][INFO ][o.e.n.Node ] [f2s1SD8] starting ...
[2018-07-13T14:02:09,539][INFO ][o.e.t.TransportService ] [f2s1SD8] publish_address {10.105.234.23:9300}, bound_addresses {0.0.0.0:9300}
[2018-07-13T14:02:09,575][INFO ][o.e.b.BootstrapChecks ] [f2s1SD8] bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2018-07-13T14:02:09,621][INFO ][o.e.n.Node ] [f2s1SD8] stopping ...
[2018-07-13T14:02:09,726][INFO ][o.e.n.Node ] [f2s1SD8] stopped
[2018-07-13T14:02:09,726][INFO ][o.e.n.Node ] [f2s1SD8] closing ...
[2018-07-13T14:02:09,744][INFO ][o.e.n.Node ] [f2s1SD8] closed
```
Increase the virtual memory limit (as root):

```
[root@VM_234_23_centos elasticsearch]# vim /etc/sysctl.conf
# append the following line, then save and exit
vm.max_map_count=655360
[root@VM_234_23_centos elasticsearch]# sysctl -p
# finally, restart elasticsearch
```
Author: JingQ
Source: https://www.sevenyuan.cn