ELK Stack Notes


Introduction to ELK

How important logs are goes without saying. But faced with a large volume of log data scattered across different machines, how do you find entries quickly and accurately? The traditional approach of logging in to each machine and looking around is clumsy and inefficient. So some clever people proposed building a centralized system that aggregates data from different sources in one place.

A complete centralized logging system cannot do without a few key capabilities:

  • Collection: gather log data from many kinds of sources
  • Transport: ship log data to the central system reliably
  • Storage: store the log data
  • Analysis: support analysis through a UI
  • Alerting: report errors and provide monitoring

Following this line of thought, many products and solutions have appeared: simple ones such as Rsyslog and Syslog-ng; commercial ones such as Splunk; and open-source ones such as Facebook's Scribe, Apache Chukwa, LinkedIn's Kafka, Fluentd, ELK, and so on.
Among these, Splunk is an excellent product, but it is commercial and expensive, which puts many people off.
Then ELK appeared and gave everyone another choice. Of the open-source options, this article focuses on ELK.

ELK is not a single piece of software; it is the acronym of three products: Elasticsearch, Logstash, and Kibana. All three are open source, are usually deployed together, and have successively come under the Elastic.co umbrella, hence the short name ELK Stack.

  • Elasticsearch: a RESTful distributed search and analytics engine that is highly scalable, highly reliable, and easy to manage. Built on Apache Lucene, it can store, search, and analyze large volumes of data in near real time. It is often used as the underlying search engine for applications that need complex search features; many sites, such as GitHub and StackOverflow, use Elasticsearch for full-text search.
  • Logstash: a data collection engine. It dynamically gathers data from a variety of sources, filters, parses, enriches, and normalizes it, and then writes it to a destination you specify.
  • Kibana: a data analysis and visualization platform, usually paired with Elasticsearch to search and analyze its data and present the results as charts.
  • Filebeat: a newer member of the ELK Stack, a lightweight open-source log file shipper built from the Logstash-Forwarder code base as its replacement. Install Filebeat on each server whose logs you want to collect, point it at log directories or files, and it reads the data and quickly forwards it to Logstash for parsing, or straight to Elasticsearch for centralized storage and analysis (a minimal configuration sketch follows this list).
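
To make the Filebeat item concrete, here is a minimal filebeat.yml sketch; the paths and hosts are placeholders rather than values from this deployment, and the prospectors key matches the 6.x releases used in these notes:

# minimal filebeat.yml sketch -- paths and hosts are placeholders
filebeat.prospectors:              # renamed to filebeat.inputs in later releases
  - type: log
    enabled: true
    paths:
      - /var/log/myapp/*.log       # log files to ship (placeholder path)

output.logstash:
  hosts: ["10.12.54.127:5044"]     # a Logstash Beats input (placeholder port)
# or skip Logstash and write straight to Elasticsearch:
# output.elasticsearch:
#   hosts: ["http://10.12.54.127:9200"]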

Architecture

Logstash reads the logs and sends them to Elasticsearch; Kibana queries the logs through Elasticsearch's RESTful API.

[Figure: ELK architecture diagram (Logstash → Elasticsearch → Kibana)]

You can think of it as an MVC model: Logstash is the Controller layer, Elasticsearch the Model layer, and Kibana the View layer.
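
As a taste of that Model-layer API, the same RESTful interface Kibana relies on can be queried directly with curl. A sketch (the chenxu-* index is created later in these notes, and message is Logstash's default field for the raw log line):

# search the index for events whose message matches TEST123
curl -XGET 'http://10.12.54.127:9200/chenxu-*/_search?pretty' -H 'Content-Type: application/json' -d '
{
  "query": { "match": { "message": "TEST123" } },
  "size": 5
}'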

Elasticsearch

Installation

# Elasticsearch will not run as root, so create a dedicated user
[root@WEB-PM0121 ~] # groupadd elk # add a group
[root@WEB-PM0121 ~] # useradd -g elk elk # add a user in that group
[root@WEB-PM0121 ~] # passwd elk # set the user's password
[root@WEB-PM0121 bin] # su elk # switch to the new user
 
[elk@WEB-PM0121 ~] # java -version # check the Java version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
 
sudo yum install java-1.8.0-openjdk # install if Java is missing
 
# download Elasticsearch
[elk@WEB-PM0121 ~] # wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz
--2018-05-16 14:45:50-- https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz
Resolving artifacts.elastic.co... 54.235.171.120, 107.21.237.95, 107.21.253.15, ...
Connecting to artifacts.elastic.co|54.235.171.120|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 29056810 (28M) [binary/octet-stream]
Saving to: ‘elasticsearch-6.2.4.tar.gz.2’
72% [=========================================================> ] 21,151,222 1.22M/s eta 9s
 
tar xzvf elasticsearch-6.2.4.tar.gz # extract
 
# directory layout
[elk@WEB-PM0121 ~] # cd elasticsearch-6.2.4
[elk@WEB-PM0121 elasticsearch-6.2.4] # pwd
/home/chenxu/elasticsearch-6.2.4
[elk@WEB-PM0121 elasticsearch-6.2.4] # ls
bin config data lib LICENSE.txt logs modules NOTICE.txt plugins README.textile
[elk@WEB-PM0121 elasticsearch-6.2.4] #
 
# edit the configuration file
[elk@WEB-PM0121 elasticsearch-6.2.4] # cd config
[elk@WEB-PM0121 config] # vi elasticsearch.yml
cluster.name: cxelk # a friendly cluster name
network.host: 0.0.0.0 # otherwise Elasticsearch is only reachable from localhost
 
# start
[elk@WEB-PM0121 config] # cd ../bin
[elk@WEB-PM0121 bin] # ./elasticsearch
# runs in the foreground by default; use ./elasticsearch & or ./elasticsearch -d to run it in the background
 
# verify access: a JSON response like the following means startup succeeded
[root@WEB-PM0121 bin] # curl 'http://10.12.54.127:9200'
{
  "name" : "SvJ09aS",
  "cluster_name" : "cxelk",
  "cluster_uuid" : "WbsI8yKWTsKUwhU8Os8vJQ",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "2018-04-12T20:37:28.497551Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
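
Two more standard endpoints are useful for checking on the node (output omitted here; the status field in _cluster/health comes back green, yellow, or red):

# overall cluster health
curl 'http://10.12.54.127:9200/_cluster/health?pretty'
# per-node process stats, e.g. JVM details and open file descriptors
curl 'http://10.12.54.127:9200/_nodes/stats/process?pretty'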

Common Issues

  • ERROR: bootstrap checks failed: max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]
    Cause: the maximum number of files the user may open is too low.
    Solution: switch to root and edit the limits.conf configuration file, adding lines like the following (you can verify the new limits with the commands shown after this list):
    vi /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
  • max number of threads [1024] for user [es] likely too low, increase to at least [2048]
    Cause: the maximum number of threads the user may create is too low.
    Solution: switch to root and edit the 90-nproc.conf file in the limits.d directory.
    vi /etc/security/limits.d/90-nproc.conf
    Change "* soft nproc 1024" to "* soft nproc 2048".

  • max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
    Cause: the maximum number of virtual memory areas is too low.
    Solution: switch to root and edit sysctl.conf:
    vi /etc/sysctl.conf
    Add the line vm.max_map_count=655360
    and apply it with: sysctl -p

  • Exception in thread "main" 2017-11-10 06:29:49,106 main ERROR No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'log4j2.debug' to show Log4j2 internal initialization logging. ElasticsearchParseException[malformed, expected settings to start with 'object', instead was [VALUE_STRING]]
    Cause: a malformed entry in elasticsearch.yml.
    Solution: make sure each setting has no space before the colon and exactly one space after it, and never use tabs, e.g.:
    bootstrap.memory_lock: false
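
After applying the fixes above, log in again as the elk user and verify that the new limits actually took effect (standard Linux commands):

ulimit -Hn              # hard limit on open files; should now report 131072
ulimit -Su              # soft limit on processes/threads; should report at least 2048
sysctl vm.max_map_count # should report vm.max_map_count = 655360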

Stopping Elasticsearch

[root@WEB-PM0121 bin] # ps -ef | grep elastic
[root@WEB-PM0121 bin] # kill -9 2782 # 2782 is the process ID
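
kill -9 works, but a plain kill (SIGTERM) gives Elasticsearch a chance to shut down cleanly. Starting with -d -p also records the PID, so you can skip the ps | grep step; a sketch (the es.pid filename is arbitrary):

[elk@WEB-PM0121 bin] # ./elasticsearch -d -p es.pid # daemonize and write the PID to es.pid
[elk@WEB-PM0121 bin] # kill $(cat es.pid) # SIGTERM: clean shutdown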

Elasticsearch-head

[elk@WEB-PM0121] # wget https://github.com/mobz/elasticsearch-head/archive/master.zip # download the head plugin
[elk@WEB-PM0121] # unzip master.zip # unzip
[elk@WEB-PM0121] # cd elasticsearch-head-master # enter the head directory
[elk@WEB-PM0121 elasticsearch-head] # npm install # install dependencies
[elk@WEB-PM0121 elasticsearch-head] # npm run start # run
[root@WEB-PM0121 elasticsearch-head] # curl 'http://127.0.0.1:9100' # test access; an HTML response means it is up

[Figure: the elasticsearch-head web UI]
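
One caveat: the head UI (port 9100) calls Elasticsearch (port 9200) from the browser, so Elasticsearch 5.x/6.x normally rejects its cross-origin requests until CORS is enabled in elasticsearch.yml and Elasticsearch is restarted:

http.cors.enabled: true
http.cors.allow-origin: "*"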

Kibana

[root@WEB-PM0121 ~] # wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.4-linux-x86_64.tar.gz # download Kibana
[root@WEB-PM0121 ~] # tar xzvf kibana-6.2.4-linux-x86_64.tar.gz # extract
 
# directory layout
[elk@WEB-PM0121 kibana-6.2.4-linux-x86_64]$ cd ..
[elk@WEB-PM0121 chenxu]$ cd kibana-6.2.4-linux-x86_64
[elk@WEB-PM0121 kibana-6.2.4-linux-x86_64]$ ll
total 1196
drwxr-xr-x 2 1000 1000 4096 Apr 13 04:57 bin
drwxrwxr-x 2 1000 1000 4096 May 14 15:18 config
drwxrwxr-x 2 1000 1000 4096 May 14 15:07 data
-rw-rw-r-- 1 1000 1000 562 Apr 13 04:57 LICENSE.txt
drwxrwxr-x 6 1000 1000 4096 Apr 13 04:57 node
drwxrwxr-x 909 1000 1000 36864 Apr 13 04:57 node_modules
-rw-rw-r-- 1 1000 1000 1134238 Apr 13 04:57 NOTICE.txt
drwxrwxr-x 3 1000 1000 4096 Apr 13 04:57 optimize
-rw-rw-r-- 1 1000 1000 721 Apr 13 04:57 package.json
drwxrwxr-x 2 1000 1000 4096 Apr 13 04:57 plugins
-rw-rw-r-- 1 1000 1000 4772 Apr 13 04:57 README.txt
drwxr-xr-x 15 1000 1000 4096 Apr 13 04:57 src
drwxrwxr-x 5 1000 1000 4096 Apr 13 04:57 ui_framework
drwxr-xr-x 2 1000 1000 4096 Apr 13 04:57 webpackShims
 
# edit the configuration file
[elk@WEB-PM0121 kibana-6.2.4-linux-x86_64]$ cd config
[elk@WEB-PM0121 config]$ vi kibana.yml
 
# update the following settings
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200" # the Elasticsearch endpoint
kibana.index: ".kibana"
 
# start Kibana
[elk@WEB-PM0121 config]$ cd ../bin
[elk@WEB-PM0121 bin]$ ./kibana
 
# verify Kibana
 
[elk@WEB-PM0121 bin]$ curl '127.0.0.1:5601'
<script>var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';
 
var hash = window.location.hash;
if (hash.length) {
window.location = hashRoute + hash;
} else {
window.location = defaultRoute;

Logstash

Since the production system is built on .NET, Logstash is deployed on Windows. Download the matching archive from the Logstash download page.

Configuration File Format

Logstash needs a configuration file that defines its input, filter, and output stages. The format looks like this:

# input
input {
...
}
 
# filter
filter {
...
}
 
# output
output {
...
}
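
The filter block is where most of Logstash's parsing value lives, although the examples below leave it out. As an illustrative sketch (not part of this deployment), a grok filter can break a raw line into structured fields and a date filter can set @timestamp from the time parsed out of the event:

filter {
  grok {
    # parse lines like "2018-05-17 15:17:40 ERROR something broke" -- the pattern is illustrative
    match => { "message" => "%{TIMESTAMP_ISO8601:logtime} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
  date {
    match => [ "logtime", "yyyy-MM-dd HH:mm:ss" ]
    target => "@timestamp" # use the event's own time instead of ingestion time
  }
}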

Testing Input and Output

To try out input and output, create logstash_test.conf under Logstash's config folder with this test configuration:

input { stdin { } } output { stdout {} }
E:\Dev\ELK\logstash-6.2.3\bin>logstash -f ../config/logstash_test.conf # start with this config file
Sending Logstash's logs to E:/Dev/ELK/logstash-6.2.3/logs which is now configured via log4j2.properties
[2018-05-17T14:04:26,229][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"E:/Dev/ELK/logstash-6.2.3/modules/fb_apache/configuration"}
[2018-05-17T14:04:26,249][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"E:/Dev/ELK/logstash-6.2.3/modules/netflow/configuration"}
[2018-05-17T14:04:26,451][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-05-17T14:04:27,193][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.3"}
[2018-05-17T14:04:28,016][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-05-17T14:04:29,038][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-05-17T14:04:29,164][INFO ][logstash.pipeline ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x47319180 run>"}
The stdin plugin is now waiting for input:
[2018-05-17T14:04:29,378][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}
123 # type some test input
2018-05-17T06:05:00.467Z PC201801151216 123 # the echoed result
456 # type some test input
2018-05-17T06:05:04.877Z PC201801151216 456 # the echoed result

Sending to Elasticsearch

Next we need to read from log files and send the events to Elasticsearch.
Create logstash.conf under Logstash's config folder with the following:

input {
  file { # read from files
    path => "E:/WebSystemLog/*" # test log files
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => [ "http://10.12.54.127:9200" ]
    index => "chenxu-%{+YYYY.MM.dd}"
  }
  stdout {} # also print to the console
}
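
One caveat for .NET logs: exception stack traces span several lines, and the file input emits one event per line. The multiline codec can fold continuation lines into the preceding event; a sketch assuming every log entry starts with an ISO-style timestamp:

input {
  file {
    path => "E:/WebSystemLog/*"
    start_position => "beginning"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}" # lines NOT starting with a timestamp...
      negate => true
      what => "previous"                 # ...are appended to the previous event
    }
  }
}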

Logstash根目錄新建一個run.bat方便咱們啓動Logstash鍵入代碼

./bin/logstash.bat -f ./config/logstash.conf
E:\Dev\ELK\logstash-6.2.3\bin>cd ..
 
E:\Dev\ELK\logstash-6.2.3>run # start
 
E:\Dev\ELK\logstash-6.2.3>./bin/logstash.bat -f ./config/logstash.conf
Sending Logstash's logs to E:/Dev/ELK/logstash-6.2.3/logs which is now configured via log4j2.properties
[2018-05-17T15:17:36,317][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"E:/Dev/ELK/logstash-6.2.3/modules/fb_apache/configuration"}
[2018-05-17T15:17:36,334][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"E:/Dev/ELK/logstash-6.2.3/modules/netflow/configuration"}
[2018-05-17T15:17:36,533][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-05-17T15:17:37,127][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.3"}
[2018-05-17T15:17:37,682][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-05-17T15:17:39,774][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-05-17T15:17:40,170][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.12.54.127:9200/]}}
[2018-05-17T15:17:40,179][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.12.54.127:9200/, :path=>"/"}
[2018-05-17T15:17:40,366][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.12.54.127:9200/"}
[2018-05-17T15:17:40,425][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-05-17T15:17:40,430][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-05-17T15:17:40,445][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-05-17T15:17:40,462][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-05-17T15:17:40,502][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://10.12.54.127:9200"]}
[2018-05-17T15:17:41,094][INFO ][logstash.pipeline ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x31bffa29 run>"}
[2018-05-17T15:17:41,199][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}
 
# The config points at E:/WebSystemLog/*, so edit a file in that directory and type a few arbitrary test log lines.
# Logstash's stdout output then appears in the console:
2018-05-17T07:19:13.779Z PC201801151216 SDFSDFSD
2018-05-17T07:19:13.781Z PC201801151216 SDFSDF
2018-05-17T07:19:13.781Z PC201801151216 SDFSD
2018-05-17T07:19:13.781Z PC201801151216 SDFSDF
2018-05-17T07:19:13.781Z PC201801151216 SDFSDF
2018-05-17T07:19:13.745Z PC201801151216 TEST123
2018-05-17T07:19:13.781Z PC201801151216 SDFSDF
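
Before switching to Kibana, you can confirm that the index really landed in Elasticsearch via the standard _cat API (the index name follows the chenxu-%{+YYYY.MM.dd} pattern from the config):

curl 'http://10.12.54.127:9200/_cat/indices?v' # a chenxu-2018.05.17-style index should be listed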

Viewing the Data in Kibana

Management > Index Patterns > Create Index Pattern > Next step

[Figure: Kibana's Create Index Pattern page]

Select @timestamp > Create index pattern > Discover.
The test data we generated is now visible in Kibana.

[Figure: Kibana Discover view showing the test data]
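
In the Discover search bar, Lucene query syntax narrows the results; for example, to find the TEST123 line typed earlier within the last day (message is Logstash's default field for the raw log line):

message:TEST123 AND @timestamp:[now-1d TO now]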
