ELK Stack
Introducing ELK
How important logs are goes without saying. But faced with such a large volume of data, scattered across different machines, how do you find a log entry quickly and accurately? The traditional way, logging in to machines one by one to look, is clumsy and inefficient. So some clever people proposed a centralized approach: pull the data from its different sources together into one place.
A complete centralized logging system needs a few key capabilities:
- Collection: gather log data from many kinds of sources
- Transport: move the log data to the central system reliably
- Storage: store the log data
- Analysis: support analysis through a UI
- Alerting: provide error reporting and a monitoring mechanism
Following this idea, many products and solutions appeared: simple ones such as rsyslog and syslog-ng; commercial ones such as Splunk; and open-source ones such as Facebook's Scribe, Apache's Chukwa, LinkedIn's Kafka, Fluentd, ELK, and so on.
Among these, Splunk is an excellent product, but it is commercial and expensive, which puts many people off.
Then ELK appeared and gave everyone another choice. Of these open-source options, this article focuses on ELK.
ELK is not a single piece of software but the initialism of three products: Elasticsearch, Logstash, and Kibana. All three are open source, are usually deployed together, and have successively come under the Elastic.co umbrella, hence the shorthand ELK Stack.
Architecture
Logstash reads the logs and sends them to Elasticsearch; Kibana queries the logs through the RESTful API that Elasticsearch exposes.
You can think of it as an MVC model: Logstash is the Controller layer, Elasticsearch is the Model layer, and Kibana is the View layer.
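As a quick illustration of that RESTful layer, the same kind of search Kibana issues can be sent by hand with curl; the host and index name below are the ones this article uses later:

# full-text search over an index, the kind of request Kibana makes behind the View
curl -s 'http://10.12.54.127:9200/chenxu-*/_search?q=message:error&size=5&pretty'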
Elasticsearch
Installation
# Elasticsearch refuses to run as root, so create a dedicated user
[root@WEB-PM0121 ~]# groupadd elk              # add a group
[root@WEB-PM0121 ~]# useradd -g elk elk        # add a user in that group
[root@WEB-PM0121 ~]# passwd elk                # set the user's password
[root@WEB-PM0121 ~]# su elk                    # switch to the new user
[elk@WEB-PM0121 ~]$ java -version              # check the Java version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
sudo yum install java-1.8.0-openjdk            # install Java if it is missing
# Download Elasticsearch
[elk@WEB-PM0121 ~]$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz
--2018-05-16 14:45:50--  https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz
Resolving artifacts.elastic.co... 54.235.171.120, 107.21.237.95, 107.21.253.15, ...
Connecting to artifacts.elastic.co|54.235.171.120|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 29056810 (28M) [binary/octet-stream]
Saving to: 「elasticsearch-6.2.4.tar.gz.2」
72% [=========================================================> ] 21,151,222  1.22M/s  eta 9s

tar xzvf elasticsearch-6.2.4.tar.gz   # extract the archive
# Directory layout
[elk@WEB-PM0121 ~]$ cd elasticsearch-6.2.4
[elk@WEB-PM0121 elasticsearch-6.2.4]$ pwd
/home/chenxu/elasticsearch-6.2.4
[elk@WEB-PM0121 elasticsearch-6.2.4]$ ls
bin  config  data  lib  LICENSE.txt  logs  modules  NOTICE.txt  plugins  README.textile

# Edit the configuration file
[elk@WEB-PM0121 elasticsearch-6.2.4]$ cd config
[elk@WEB-PM0121 config]$ vi elasticsearch.yml
cluster.name: cxelk      # a friendly cluster name
network.host: 0.0.0.0    # otherwise the node is reachable only from localhost
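A few other elasticsearch.yml settings are commonly adjusted at the same time. The values below are illustrative assumptions, not something this setup requires:

node.name: node-1                # a friendly name for this node
http.port: 9200                  # REST port (9200 is already the default)
path.data: /home/elk/es-data     # keep index data outside the install directory
path.logs: /home/elk/es-logs     # likewise for logs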
# Start Elasticsearch
[elk@WEB-PM0121 config]$ cd ../bin
[elk@WEB-PM0121 bin]$ ./elasticsearch
# Starts in the foreground by default; use ./elasticsearch & or ./elasticsearch -d to run it in the background

# Verify access: a JSON response means the node started successfully
[root@WEB-PM0121 bin]# curl 'http://10.12.54.127:9200'
{
  "name" : "SvJ09aS",
  "cluster_name" : "cxelk",
  "cluster_uuid" : "WbsI8yKWTsKUwhU8Os8vJQ",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "2018-04-12T20:37:28.497551Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
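Besides the banner above, the _cat APIs give a quick human-readable health check, for example:

[root@WEB-PM0121 bin]# curl 'http://10.12.54.127:9200/_cat/health?v'   # status should be green or yellow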
Common problems
- ERROR: bootstrap checks failed: max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]
Cause: the user's limit on open files is too low.
Fix: as root, edit the limits.conf configuration file and add lines like the following:
vi /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
- max number of threads [1024] for user [es] likely too low, increase to at least [2048]
Cause: the user's limit on threads is too low.
Fix: as root, edit the 90-nproc.conf file in the limits.d directory.
vi /etc/security/limits.d/90-nproc.conf
Change "* soft nproc 1024" to "* soft nproc 2048".
- max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
Cause: the limit on virtual memory areas is too low.
Fix: as root, edit sysctl.conf:
vi /etc/sysctl.conf
Add the line vm.max_map_count=655360, then run sysctl -p to apply it.
- Exception in thread "main" 2017-11-10 06:29:49,106 main ERROR No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'log4j2.debug' to show Log4j2 internal initialization logging. ElasticsearchParseException[malformed, expected settings to start with 'object', instead was [VALUE_STRING]]
Cause: a badly formatted entry in elasticsearch.yml.
Fix: keep no space before each colon and exactly one space after it, and do not use tabs, e.g.:
bootstrap.memory_lock: false
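After applying the fixes above, log out and back in as the elk user and confirm the new limits actually took effect before restarting Elasticsearch:

ulimit -Sn                # soft open-file limit; should now print 65536
ulimit -Su                # soft process/thread limit; should now print 2048
sysctl vm.max_map_count   # should now print 655360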
Stopping Elasticsearch
The tarball install has no service wrapper, so stopping the node means killing its process:
[root@WEB-PM0121 bin]# ps -ef | grep elasticsearch   # find the process ID
[root@WEB-PM0121 bin]# kill <pid>                    # SIGTERM shuts the node down cleanly
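If the node is started as a daemon, Elasticsearch can record its own PID, which makes stopping it scriptable; the pid file name below is arbitrary:

./elasticsearch -d -p pid   # daemonize and write the process ID to ./pid
kill `cat pid`              # later, stop the daemon using the recorded PID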
Elasticsearch-head
Elasticsearch-head is a small web front end for browsing the cluster. The usual install, assuming git and Node.js/npm are already available:
[elk@WEB-PM0121]$ git clone https://github.com/mobz/elasticsearch-head.git
[elk@WEB-PM0121]$ cd elasticsearch-head
[elk@WEB-PM0121 elasticsearch-head]$ npm install
[elk@WEB-PM0121 elasticsearch-head]$ npm run start
# head talks to Elasticsearch from the browser, so CORS must be enabled in elasticsearch.yml:
[root@WEB-PM0121 elasticsearch-head]# vi ../elasticsearch-6.2.4/config/elasticsearch.yml
http.cors.enabled: true
http.cors.allow-origin: "*"
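Once head is running, it serves its UI on port 9100 by default, so a quick check (host assumed to be the same machine as before):

curl -s 'http://10.12.54.127:9100' | head   # or open the URL in a browser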
Kibana
[root@WEB-PM0121 ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.4-linux-x86_64.tar.gz   # download Kibana
[root@WEB-PM0121 ~]# tar xzvf kibana-6.2.4-linux-x86_64.tar.gz   # extract
# Directory layout
[elk@WEB-PM0121 kibana-6.2.4-linux-x86_64]$ cd ..
[elk@WEB-PM0121 chenxu]$ cd kibana-6.2.4-linux-x86_64
[elk@WEB-PM0121 kibana-6.2.4-linux-x86_64]$ ll
total 1196
drwxr-xr-x   2 1000 1000    4096 Apr 13 04:57 bin
drwxrwxr-x   2 1000 1000    4096 May 14 15:18 config
drwxrwxr-x   2 1000 1000    4096 May 14 15:07 data
-rw-rw-r--   1 1000 1000     562 Apr 13 04:57 LICENSE.txt
drwxrwxr-x   6 1000 1000    4096 Apr 13 04:57 node
drwxrwxr-x 909 1000 1000   36864 Apr 13 04:57 node_modules
-rw-rw-r--   1 1000 1000 1134238 Apr 13 04:57 NOTICE.txt
drwxrwxr-x   3 1000 1000    4096 Apr 13 04:57 optimize
-rw-rw-r--   1 1000 1000     721 Apr 13 04:57 package.json
drwxrwxr-x   2 1000 1000    4096 Apr 13 04:57 plugins
-rw-rw-r--   1 1000 1000    4772 Apr 13 04:57 README.txt
drwxr-xr-x  15 1000 1000    4096 Apr 13 04:57 src
drwxrwxr-x   5 1000 1000    4096 Apr 13 04:57 ui_framework
drwxr-xr-x   2 1000 1000    4096 Apr 13 04:57 webpackShims
# Edit the configuration file
[elk@WEB-PM0121 kibana-6.2.4-linux-x86_64]$ cd config
[elk@WEB-PM0121 config]$ vi kibana.yml
# Change the following settings
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"   # the Elasticsearch endpoint
kibana.index: ".kibana"
# Start Kibana
[elk@WEB-PM0121 config]$ cd ../bin
[elk@WEB-PM0121 bin]$ ./kibana

# Verify Kibana: the root URL answers with a small redirect script
[elk@WEB-PM0121 bin]$ curl 'http://localhost:5601'
<script>var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';
var hash = window.location.hash;
if (hash.length) {
  window.location = hashRoute + hash;
} else {
  window.location = defaultRoute;
}</script>
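Kibana also exposes a small status API, which is more convenient than the redirect script for scripted health checks:

[elk@WEB-PM0121 bin]$ curl -s 'http://localhost:5601/api/status'   # JSON with the overall state and per-plugin statuses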
Logstash
Because the production system is .NET-based, Logstash is deployed on Windows; download the corresponding archive from the Logstash downloads page.
Configuration file format
Logstash needs a configuration that manages the inputs, filters, and outputs. It has the following shape:
# inputs
input {
    ...
}
# filters
filter {
    ...
}
# outputs
output {
    ...
}
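To make the filter stage concrete, here is a minimal sketch; the grok pattern and field names are illustrative assumptions, not taken from the production system:

input { stdin { } }
filter {
  grok {
    # split a line like "2018-05-17T15:17:41 ERROR something broke" into named fields
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
output { stdout { codec => rubydebug } }   # rubydebug prints the full event structure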
Testing input and output
To test basic input and output, create logstash_test.conf in Logstash's config folder and enter this test configuration:
input { stdin { } } output { stdout {} }
E:\Dev\ELK\logstash-6.2.3\bin>logstash -f ../config/logstash_test.conf   # start with the given config file
Sending Logstash's logs to E:/Dev/ELK/logstash-6.2.3/logs which is now configured via log4j2.properties
[2018-05-17T14:04:26,229][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"E:/Dev/ELK/logstash-6.2.3/modules/fb_apache/configuration"}
[2018-05-17T14:04:26,249][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"E:/Dev/ELK/logstash-6.2.3/modules/netflow/configuration"}
[2018-05-17T14:04:26,451][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-05-17T14:04:27,193][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.2.3"}
[2018-05-17T14:04:28,016][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-05-17T14:04:29,038][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-05-17T14:04:29,164][INFO ][logstash.pipeline        ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x47319180 run>"}
The stdin plugin is now waiting for input:
[2018-05-17T14:04:29,378][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}
123                                             # type some test input
2018-05-17T06:05:00.467Z PC201801151216 123     # the echoed output
456                                             # type more test input
2018-05-17T06:05:04.877Z PC201801151216 456     # the echoed output
Sending to Elasticsearch
We need to read from a file and send the events to Elasticsearch. Create logstash.conf in Logstash's config folder and enter:
input {
    file {
        path => "E:/WebSystemLog/*"        # the test log files, as a glob pattern
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["http://10.12.54.127:9200"]
        index => "chenxu-%{+YYYY.MM.dd}"
    }
    stdout {}                              # also print to the console
}
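One practical note, an assumption worth checking rather than something the setup above states: the file input tracks its read position in a sincedb file, so a restarted Logstash will not re-read old lines even with start_position => "beginning". While testing on Windows, that bookkeeping can be discarded:

file {
    path => "E:/WebSystemLog/*"
    start_position => "beginning"
    sincedb_path => "NUL"   # the Windows null device; use /dev/null on Linux
}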
Create a run.bat in the Logstash root directory so starting Logstash is a one-word command:
./bin/logstash.bat -f ./config/logstash.conf
E:\Dev\ELK\logstash-6.2.3\bin>cd ..

E:\Dev\ELK\logstash-6.2.3>run

E:\Dev\ELK\logstash-6.2.3>./bin/logstash.bat -f ./config/logstash.conf
Sending Logstash's logs to E:/Dev/ELK/logstash-6.2.3/logs which is now configured via log4j2.properties
[2018-05-17T15:17:36,317][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"E:/Dev/ELK/logstash-6.2.3/modules/fb_apache/configuration"}
[2018-05-17T15:17:36,334][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"E:/Dev/ELK/logstash-6.2.3/modules/netflow/configuration"}
[2018-05-17T15:17:36,533][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-05-17T15:17:37,127][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.2.3"}
[2018-05-17T15:17:37,682][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-05-17T15:17:39,774][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-05-17T15:17:40,170][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.12.54.127:9200/]}}
[2018-05-17T15:17:40,179][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.12.54.127:9200/, :path=>"/"}
[2018-05-17T15:17:40,366][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.12.54.127:9200/"}
[2018-05-17T15:17:40,425][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-05-17T15:17:40,430][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-05-17T15:17:40,445][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-05-17T15:17:40,462][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-05-17T15:17:40,502][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://10.12.54.127:9200"]}
[2018-05-17T15:17:41,094][INFO ][logstash.pipeline        ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x31bffa29 run>"}
[2018-05-17T15:17:41,199][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}
2018-05-17T07:19:13.779Z PC201801151216 SDFSDFSD
2018-05-17T07:19:13.781Z PC201801151216 SDFSDF
2018-05-17T07:19:13.781Z PC201801151216 SDFSD
2018-05-17T07:19:13.781Z PC201801151216 SDFSDF
2018-05-17T07:19:13.781Z PC201801151216 SDFSDF
2018-05-17T07:19:13.745Z PC201801151216 TEST123
2018-05-17T07:19:13.781Z PC201801151216 SDFSDF
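Before switching to Kibana, a quick curl against the same Elasticsearch host confirms the file contents were indexed:

curl 'http://10.12.54.127:9200/_cat/indices/chenxu-*?v'          # the daily index should be listed
curl 'http://10.12.54.127:9200/chenxu-*/_search?size=1&pretty'   # and should contain documents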
Viewing the data in Kibana
In Kibana, go to Management > Index Patterns > Create Index Pattern, enter the index pattern (chenxu-*), click Next step, select @timestamp, then Create index pattern. Open Discover, and the test data is now visible in Kibana.