Learning the ELK Log Platform (Part 3)


Elastic Stack is the new name for the former ELK Stack, adopted when the Beats suite was added in version 5.0.

 

Pain points addressed:

Developers cannot log in to production servers to view detailed logs;

Every system keeps its own logs, so the log data is scattered and hard to search;

Log volume is large, queries are slow, and the data is not real-time enough;

A single call spans multiple systems, making it hard to quickly locate the relevant data across their logs;

 

Typical application:

[figure: typical ELK application architecture]

 

 

http://lucene.apache.org/

Most e-commerce admin backends implement their search features (searching orders, searching users) with Lucene.

Background:

Lucene is a subproject of the Apache Software Foundation's Jakarta project: an open-source full-text search engine toolkit. It is not a complete full-text search engine, but rather an architecture for one, providing a complete query engine and indexing engine, plus partial text-analysis engines (for two Western languages, English and German);

Lucene's goal is to give software developers a simple, easy-to-use toolkit for conveniently adding full-text search to a target system, or for building a complete full-text search engine on top of it;

Lucene is an open-source library for full-text indexing and search, supported and provided by the Apache Software Foundation. It offers a simple yet powerful API for full-text indexing and searching. In the Java world Lucene is a mature, free open-source tool, and it has been the most popular free Java information-retrieval library in recent years. Note that an information-retrieval library, while related to search engines, should not be confused with a search engine itself;

 

 

https://www.elastic.co/

elasticsearch (distributed, RESTful search and analytics);

kibana (visualize your data, navigate the stack);

beats (collect, parse, and ship in a lightweight fashion);

logstash (ingest, transform, enrich, and output);

es-hadoop (quickly query and get insight into your big data);

x-pack (security, alerting, monitoring, reporting, and graph in one pack);

elastic cloud (spin up hosted elasticsearch, kibana, and x-pack);

elastic cloud enterprise (manage a fleet of clusters on any infrastructure);

 

Plugins for Elastic Stack 1.x — 2.x

shield (security: protect your data across the Elastic Stack);

watcher (alerting: get notifications about changes in your data);

marvel (monitoring: keep a pulse on the health of your Elastic Stack);

reporting (generate, schedule, and send reports of kibana visualizations);

graph (explore meaningful relationships in your data);

 

elasticsearch-1.7.0.tar.gz

logstash-1.5.3.tar.gz

kibana-4.1.1-linux-x64.tar.gz

 

Log collection systems:

scribe (Facebook, C/C++);

chukwa (Apache/Yahoo, Java);

kafka (LinkedIn, Scala);

flume (Cloudera, Java);

 

 

 

1.

ElasticSearch

https://www.elastic.co/guide/en/elasticsearch/guide/current/index.html

Search in Depth (for developers)

Administration, Monitoring, and Deployment (for operations)

[root@test5 ~]# cat /sys/block/sda/queue/scheduler   # (the default I/O scheduler is cfq; change it to deadline or noop)

noop anticipatory deadline [cfq]

 

Elasticsearch is a search engine built on top of the full-text search library Apache Lucene(TM). Lucene is arguably the most advanced, most efficient, full-featured open-source search engine framework today; but Lucene is only a framework. To take full advantage of it you need to use Java and integrate Lucene into your program, and worse, you need to study a great deal before you understand how it works; Lucene really is complex;

Elasticsearch uses Lucene as its internal engine, but when you use it for full-text search you only need its unified, ready-made API; you do not need to understand the complex inner workings of Lucene. Of course Elasticsearch is more than just Lucene: besides full-text search it also provides distributed real-time document storage, with every field indexed and searchable, and a distributed search engine for real-time analytics that can scale out to hundreds of servers and handle petabytes of structured and unstructured data. All of this is packaged into a single server that you can talk to easily from a client or from any programming language you like, through the ES RESTful API;

Getting started with Elasticsearch is very easy. It ships with many sensible defaults, which lets beginners avoid having to face complex theory right away; it works as soon as it is installed, and you can become productive with very little learning. As you learn more, you can use Elasticsearch's more advanced features; the whole engine is highly configurable, and you can tailor Elasticsearch to your own needs.

 

Related terms: node, cluster, document, index, shard, replica shard (replica);

Node & cluster (a node is a running instance of elasticsearch; a cluster is a group of nodes sharing the same cluster.name that work together, share data, and provide failover and scaling; a single node can also form a cluster);

Document (objects in a program are rarely plain lists of keys and values; more often they have a complex structure containing dates, geolocations, other objects, arrays, and so on. Sooner or later you will store these objects in a database; if you try to squeeze such rich data into the rows and columns of a relational database, you have to flatten it to fit each field's format, and reassemble it on every retrieval. Elasticsearch is document-oriented: it stores whole objects or documents, and it also indexes them so they can be searched. You can index, search, sort, and filter documents in Elasticsearch without turning them into rows and columns; this is a completely different way of thinking about data, and it is one reason Elasticsearch can perform complex full-text search. Elasticsearch uses JSON (JavaScript Object Notation) as the serialization format for documents. JSON is supported by most languages and has become a standard format in the NoSQL world; it is simple, concise, and easy to read. Converting an object to JSON and indexing it in Elasticsearch is much simpler than doing the same against a table schema; almost every language has modules that convert arbitrary data structures into JSON (look at its serialization and marshalling JSON modules), and The official Elasticsearch clients can also build JSON for you automatically);

Index (in Elasticsearch, the act of storing data is called indexing; but before we index data we need to decide where to store it. In Elasticsearch a document belongs to a type, and types live inside an index. You can draw a rough parallel with a traditional relational database:

Relational DB:  database --> table --> row --> column (Columns)

Elasticsearch:  index --> type --> document --> field (Fields)

An Elasticsearch cluster can contain multiple indices (databases), which in turn contain many types (tables); these types hold many documents (rows), and every document has many fields (columns));

 

Note:

In Elasticsearch the word "index" has been given several meanings:

Index as a noun (an index is like a database in a traditional relational database: the place where related documents are stored);

Index as a verb (to index a document is to store it into an index (noun) so that it can be retrieved; the process is much like SQL's INSERT, except that if the document already exists, the new document overwrites the old one);

Inverted index (adding an index, such as a B-tree index, to a column in a relational database speeds up data retrieval; Elasticsearch and Lucene achieve the same with a structure called an inverted index. By default, every field in a document is indexed (has an inverted index) and can therefore be searched; a field without an inverted index cannot be searched);
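The inverted-index idea can be illustrated with a toy shell sketch (the file name and document contents below are made up for demonstration): each word maps to the list of documents containing it, which is why a field with an inverted index can be looked up by term.

```shell
# Three tiny "documents", one per line: a doc id followed by its words
cat > /tmp/docs.txt <<'EOF'
doc1 the quick brown fox
doc2 the lazy dog
doc3 quick dog
EOF

# Build the inverted index: word -> list of doc ids containing it
# (naively appends ids; a real index would dedupe and store positions)
awk '{ for (i = 2; i <= NF; i++) idx[$i] = idx[$i] " " $1 }
     END { for (w in idx) print w ":" idx[w] }' /tmp/docs.txt | sort
```

Looking up `quick` now returns `doc1 doc3` directly instead of scanning every document: the same trade-off Lucene makes, more work at index time for far less work at query time.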

 

 

How you communicate with elasticsearch depends on whether you use Java:

Java API (node client, transport client): if you use Java, elasticsearch has two built-in clients you can use in your code. The node client joins a cluster as a data-less node; in other words it holds no data itself, but it knows which data lives on which node in the cluster, and it can forward requests to the right node. The lighter transport client can be used to send requests to a remote cluster; it does not join the cluster itself, but forwards requests to the cluster's nodes. Both clients use the elasticsearch transport protocol and communicate with the Java client over port 9300; the nodes in a cluster also communicate with each other over port 9300, and if that port is blocked the nodes cannot form a cluster. The Java client's version must match the version of the elasticsearch nodes, otherwise they may fail to recognize each other;

RESTful API over HTTP, carrying JSON: other languages can talk to elasticsearch's RESTful API over port 9200; you can communicate with elasticsearch using curl;

elasticsearch officially provides clients for many programming languages, as well as integration plugins for many community projects (JavaScript, .NET, PHP, Perl, Python, Ruby);

 

#curl -X <VERB> '<PROTOCOL>://<HOST>:<PORT>/<PATH>?<QUERY_STRING>' -d '<BODY>'

VERB (HTTP method: GET, PUT, POST, HEAD, DELETE);

PROTOCOL (http or https; https only applies when there is an https proxy in front of elasticsearch);

HOST (hostname of any node in the elasticsearch cluster; localhost when local);

PORT (port of the elasticsearch HTTP service, 9200 by default);

QUERY_STRING (optional request parameters, e.g. ?pretty, which pretty-prints the JSON response for readability);

BODY (the request body, JSON-encoded);

Example:

[root@test4 ~]# curl -X GET 'http://192.168.23.132:9200/_count?pretty' -d '

{

 "query": {

   "match_all": {}

  }

}

'

{

 "count" : 1,

 "_shards" : {

   "total" : 10,

   "successful" : 10,

   "failed" : 0

  }

}

 

Note (-I cannot be combined with -d, otherwise curl warns "Warning: You can only select one HTTP request!"; -I only fetches the HTTP header, while -d submits data with POST):

-I/--head  (HTTP/FTP/FILE)  Fetch the HTTP-header only! HTTP-servers feature the command HEAD which this uses to get nothing but the header of a document. When used on a FTP or FILE file, curl displays the file size and last modification time only.

-X <command>/--request <command>  (HTTP)  Specifies a custom request method to use when communicating with the HTTP server. The specified request will be used instead of the method otherwise used (which defaults to GET). Read the HTTP 1.1 specification for details and explanations. Common additional HTTP requests include PUT and DELETE, but related technologies like WebDAV offer PROPFIND, COPY, MOVE and more. (FTP) Specifies a custom FTP command to use instead of LIST when doing file lists with FTP. If this option is used several times, the last one will be used.

-d <data>/--data <data>  (HTTP)  Sends the specified data in a POST request to the HTTP server, in the same way that a browser does when a user has filled in an HTML form and presses the submit button. This will cause curl to pass the data to the server using the content-type application/x-www-form-urlencoded. Compare to -F/--form.

 

 


 

 

elasticsearch requires a Java environment before installation:

[root@test4 ~]# which java

/usr/bin/java

[root@test4 ~]# java -version

java version "1.5.0"

gij (GNU libgcj) version 4.4.7 20120313 (Red Hat 4.4.7-17)

Copyright (C) 2007 Free Software Foundation, Inc.

This is free software; see the source for copying conditions.  There is NO

warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

[root@test4 ~]# rpm -qa | grep java

java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64

libvirt-java-0.4.9-1.el6.noarch

libvirt-java-devel-0.4.9-1.el6.noarch

java_cup-0.10k-5.el6.x86_64

[root@test4 ~]# rpm -e --nodeps java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64

 

[root@test4 ~]# tar xf jdk-8u111-linux-x64.gz -C /usr/local/

[root@test4 ~]# vim /etc/profile.d/java.sh

export JAVA_HOME=/usr/local/jdk1.8.0_111

export PATH=$PATH:$JAVA_HOME/bin

exportCLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

[root@test4 ~]# . !$

. /etc/profile.d/java.sh

[root@test4 ~]# java -version

java version "1.8.0_111"

Java(TM) SE Runtime Environment (build 1.8.0_111-b14)

Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)

 

[root@test4 ~]# tar xf elasticsearch-1.7.0.tar.gz -C /usr/local/

[root@test4 ~]# ln -sv /usr/local/elasticsearch-1.7.0/ /usr/local/elasticsearch

`/usr/local/elasticsearch' -> `/usr/local/elasticsearch-1.7.0/'

[root@test4 ~]# ll /usr/local/elasticsearch/   # (key files: config/{elasticsearch.yml,logging.yml}, bin/elasticsearch)

total 40

drwxr-xr-x. 2 root root  4096 Dec 15 23:18 bin

drwxr-xr-x. 2 root root  4096 Dec 15 23:18 config

drwxr-xr-x. 3 root root  4096 Dec 15 23:18 lib

-rw-rw-r--. 1 root root 11358 Mar 23  2015 LICENSE.txt

-rw-rw-r--. 1 root root   150 Jun 9  2015 NOTICE.txt

-rw-rw-r--. 1 root root  8700 Jun 9  2015 README.textile

[root@test4 ~]# vim /usr/local/elasticsearch/config/elasticsearch.yml

################################### Cluster###################################

cluster.name: elasticsearch   # (nodes on the same LAN identify themselves as one cluster by this name; it must be unique and must not clash with other clusters' names)

#################################### Node#####################################

node.name: "test4"  

node.master: true   # (whether this node can be elected master)

node.data: true   # (whether this node can store data)

#################################### Index####################################

index.number_of_shards: 5   # (number of shards)

index.number_of_replicas: 1   # (number of replica shards)

#################################### Paths####################################

path.conf: /usr/local/elasticsearch/conf/

path.data: /usr/local/elasticsearch/data/

path.work: /usr/local/elasticsearch/work/   # (temporary files)

path.logs: /usr/local/elasticsearch/logs/

path.plugins: /usr/local/elasticsearch/plugins/

################################### Memory####################################

bootstrap.mlockall: true   # (lock memory; avoid swapping)

############################## Network AndHTTP ###############################

#network.bind_host: 192.168.0.1

#network.publish_host: 192.168.0.1

#network.host: 192.168.0.1   # (combines network.bind_host and network.publish_host; the address this node uses to talk to the other nodes)

#transport.tcp.port: 9300

#http.port: 9200

############################# RecoveryThrottling #############################

#indices.recovery.max_bytes_per_sec: 20mb   # (recovery bandwidth limit)

##################################Discovery ##################################

#discovery.zen.ping.multicast.enabled: false   # (multicast)

#discovery.zen.ping.unicast.hosts: ["host1", "host2:port"]   # (unicast)

[root@test4 ~]# grep '^[a-z]' /usr/local/elasticsearch/config/elasticsearch.yml

cluster.name: elasticsearch

node.name: "test4"

node.master: true

node.data: true

index.number_of_shards: 5

index.number_of_replicas: 1

path.conf: /usr/local/elasticsearch/conf/

path.data: /usr/local/elasticsearch/data/

path.work: /usr/local/elasticsearch/work/

path.logs: /usr/local/elasticsearch/logs/

path.plugins: /usr/local/elasticsearch/plugins/

bootstrap.mlockall: true

[root@test4 ~]# mkdir /usr/local/elasticsearch/{conf,data,work,logs,plugins}

 

[root@test4 ~]# /usr/local/elasticsearch-1.7.0/bin/elasticsearch -d   # (JVM options can be appended, e.g. -Xms512m -Xmx512m)

log4j:WARN No appenders could be found for logger (node).

log4j:WARN Please initialize the log4j system properly.

log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

[root@test4 ~]# netstat -tnulp | egrep '9200|9300'

tcp       0      0 :::9200                     :::*                        LISTEN      7177/java          

tcp       0      0 :::9300                     :::*                        LISTEN      7177/java 

[root@test4 ~]# jps -lvm  

7177 org.elasticsearch.bootstrap.Elasticsearch -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Delasticsearch -Des.path.home=/usr/local/elasticsearch-1.7.0

7261 sun.tools.jps.Jps -lvm -Denv.class.path=.:/usr/local/jdk1.8.0_111/lib/dt.jar:/usr/local/jdk1.8.0_111/lib/tools.jar -Dapplication.home=/usr/local/jdk1.8.0_111 -Xms8m

 

http://192.168.23.132:9200/


 

 

Installing elasticsearch-servicewrapper:

elasticsearch-servicewrapper wraps the elasticsearch launch commands as a service; once installed, it makes starting, stopping, and otherwise managing elasticsearch convenient;

[root@test4 ~]# kill -9 7177

[root@test4 ~]# jps

7281 Jps

[root@test4 ~]# git clone https://github.com/elastic/elasticsearch-servicewrapper.git

Initialized empty Git repository in /root/elasticsearch-servicewrapper/.git/

remote: Counting objects: 184, done.

remote: Total 184 (delta 0), reused 0(delta 0), pack-reused 184

Receiving objects: 100% (184/184), 4.32 MiB| 278 KiB/s, done.

Resolving deltas: 100% (70/70), done.

[root@test4 ~]# mv elasticsearch-servicewrapper/service/ /usr/local/elasticsearch/bin/

[root@test4 ~]# /usr/local/elasticsearch/bin/service/elasticsearch --help

Unexpected command: --help

Usage: /usr/local/elasticsearch/bin/service/elasticsearch [ console | start | stop | restart | condrestart | status | install | remove | dump ]

Commands:

  console      Launch in the current console.

  start        Start in the background as a daemon process.

  stop         Stop if running as a daemon or in another console.

  restart      Stop if running and then start.

  condrestart  Restart only if already running.

  status       Query the current status.

  install      Install to start automatically when system boots.

  remove       Uninstall.

  dump         Request a Java thread dump if running.

[root@test4 ~]# /usr/local/elasticsearch/bin/service/elasticsearch install   # (install to start automatically when system boots)

Detected RHEL or Fedora:

Installing the Elasticsearch daemon..

[root@test4 ~]# chkconfig --list elasticsearch

elasticsearch          0:off 1:off 2:on 3:on 4:on 5:on 6:off

[root@test4 ~]# vim /usr/local/elasticsearch/bin/service/elasticsearch.conf   # (this file sets the Java runtime options; the default heap size is 1024, adjust to your server's configuration; in testing, at least 512 was required, otherwise the application would not start)

set.default.ES_HEAP_SIZE=512

[root@test4 ~]# /etc/init.d/elasticsearch start

Starting Elasticsearch...

Waiting for Elasticsearch.......

running: PID:7658

[root@test4 ~]# service elasticsearch status

Elasticsearch is running: PID:7658, Wrapper:STARTED, Java:STARTED

[root@test4 ~]# tail /usr/local/elasticsearch/logs/service.log

STATUS | wrapper  | 2016/12/16 00:55:23 | --> Wrapper Started as Daemon

STATUS | wrapper  | 2016/12/16 00:55:23 | Java Service Wrapper Community Edition 64-bit 3.5.14

[root@test4 ~]# curl http://192.168.23.132:9200   # (status 200 means healthy)

{

 "status" : 200,

 "name" : "test4",

 "cluster_name" : "elasticsearch",

 "version" : {

   "number" : "1.7.0",

   "build_hash" :"929b9739cae115e73c346cb5f9a6f24ba735a743",

   "build_timestamp" : "2015-07-16T14:31:07Z",

   "build_snapshot" : false,

   "lucene_version" : "4.10.4"

  },

 "tagline" : "You Know, for Search"

}

 

 

Installing the marvel plugin:

marvel is a paid monitoring and management tool; its bundled Sense console offers a friendly way to interact with elasticsearch;

Marvel is a management and monitoring tool for Elasticsearch which is free for development use. It comes with an interactive console called Sense which makes it very easy to talk to Elasticsearch directly from your browser. Many of the code examples in this book include a ``View in Sense'' link. When clicked, it will open up a working example of the code in the Sense console. You do not have to install Marvel, but it will make this book much more interactive by allowing you to experiment with the code samples on your local Elasticsearch cluster.

 

Note:

You probably don't want Marvel to monitor your local cluster, so you can disable data collection with this command:

#echo 'marvel.agent.enabled: false' >> ./config/elasticsearch.yml

 

[root@test4 ~]# /usr/local/elasticsearch/bin/plugin -i elasticsearch/marvel/latest

-> Installing elasticsearch/marvel/latest...

Trying http://download.elasticsearch.org/elasticsearch/marvel/marvel-latest.zip...

Downloading ................................................................................................................................................................................................................................................................................................DONE

Installed elasticsearch/marvel/latest into /usr/local/elasticsearch/plugins/marvel

 

http://192.168.23.132:9200/_plugin/marvel   # (click to continue the trial)


Dashboard --> Sense --> Get to work --> click to send request


POST /index-demo/test

{

   "user" : "jowin",

   "message" : "hello,world"

}

[screenshot: response of the POST request, including the generated "_id"]

GET /index-demo/test/AVkaOa61M5l9MXX2iExN   # (paste the "_id" from the previous response)


GET /index-demo/test/AVkaOa61M5l9MXX2iExN/_source   # (/_source returns only the document body)


GET /index-demo/test/_search?q=hello   # (full-text search)

 


 

elasticsearch cluster:

test4: 192.168.23.132

test5: 192.168.23.133

On the other node (test5), install elasticsearch and elasticsearch-servicewrapper the same way;

 

On test4:

[root@test4 ~]# /usr/local/elasticsearch/bin/plugin -i mobz/elasticsearch-head   # (install the elasticsearch-head plugin for managing the cluster)

-> Installing mobz/elasticsearch-head...

Trying https://github.com/mobz/elasticsearch-head/archive/master.zip...

Downloading..........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................DONE

Installed mobz/elasticsearch-head into /usr/local/elasticsearch/plugins/head

 

http://192.168.23.132:9200/_plugin/head/

[screenshot: elasticsearch-head cluster overview]

Note:

.marvel-kibana is the default index, with five shards per index;

Cluster health (green: all nodes are normal; yellow: all primary shards are allocated but some replica shards are missing; red: some primary shards are missing);

 

[root@test4 ~]# curl -X GET 'http://192.168.23.132:9200/_cluster/health?pretty'

{

 "cluster_name" : "elasticsearch",

 "status" : "green",

 "timed_out" : false,

 "number_of_nodes" : 2,

 "number_of_data_nodes" : 2,

 "active_primary_shards" : 11,

 "active_shards" : 22,

 "relocating_shards" : 0,

 "initializing_shards" : 0,

 "unassigned_shards" : 0,

 "delayed_unassigned_shards" : 0,

 "number_of_pending_tasks" : 0,

 "number_of_in_flight_fetch" : 0

}
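To act on that health output from a script, it is enough to pull out the `status` field. The sketch below parses a captured sample response with sed; in practice you would pipe `curl -s http://192.168.23.132:9200/_cluster/health` (the host from this lab setup) into the same filter.

```shell
# Sample response body (a trimmed copy of the output above)
response='{"cluster_name":"elasticsearch","status":"green","timed_out":false}'

# Extract the value of "status" without any JSON tooling
status=$(printf '%s' "$response" | sed -n 's/.*"status"[ :]*"\([a-z]*\)".*/\1/p')

echo "cluster status: $status"
```

A monitoring check would then alert whenever `$status` is anything other than green (or, less strictly, whenever it is red).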

 

 

 

2.

logstash

[figure: logstash pipeline architecture]

Note:

https://www.elastic.co/guide/en/logstash/5.x/input-plugins.html   # (input plugins, e.g. beats, file)

https://www.elastic.co/guide/en/logstash/5.x/output-plugins.html   # (output plugins)

https://www.elastic.co/guide/en/logstash/5.x/filter-plugins.html   # (filter plugins)

https://www.elastic.co/guide/en/logstash/5.x/codec-plugins.html   # (codec plugins)

 


 

 

https://www.elastic.co/guide/en/logstash/5.x/configuration-file-structure.html  

structure of a config file

----------------------file-start------------------------------

# This is a comment. You should use comments to describe

# parts of your configuration.

input {

  ...

}

 

filter {

  ...

}

 

output {

  ...

}

------------------------file-end--------------------------------

 

value types

array, list, boolean, bytes, codec, hash, number, password, path, string, comment;

array (users => [ {id => 1, name => bob}, {id => 2, name => jane} ]);

list (

path => [ "/var/log/messages", "/var/log/*.log" ]

uris => [ "http://elastic.co", "http://example.net" ]

);

boolean (ssl_enable => true);

bytes (

my_bytes => "1113"   # 1113 bytes

my_bytes => "10MiB"  # 10485760 bytes

my_bytes => "100kib" # 102400 bytes

my_bytes => "180 mb" # 180000000 bytes

);

codec (codec => "json");

hash (

match => {

 "field1" => "value1"

 "field2" => "value2"

  ...

}

);

number (port => 33);

password (my_password => "password");

uri (my_uri => "http://foo:bar@example.net");

path (my_path => "/tmp/logstash");

string (

name => "Hello world"

name => 'It\'s a beautiful day'

);

comment (

# this is a comment

input { # comments can appear at the end of a line, too

  #...

}

);
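The bytes type's mixed SI/binary units are easy to get wrong. This small shell function (a sketch, not part of logstash itself) reproduces the conversion rule shown in the bytes examples above: units containing "i" (kib, mib, gib) are base-1024, the rest are base-1000.

```shell
to_bytes() {
  # normalize: drop spaces, lowercase everything ("180 mb" -> "180mb")
  v=$(printf '%s' "$1" | tr -d ' ' | tr 'A-Z' 'a-z')
  num=${v%%[a-z]*}      # leading digits
  unit=${v#"$num"}      # trailing unit, possibly empty
  case "$unit" in
    ''|b) echo "$num" ;;
    kb)   echo $((num * 1000)) ;;
    mb)   echo $((num * 1000 * 1000)) ;;
    gb)   echo $((num * 1000 * 1000 * 1000)) ;;
    kib)  echo $((num * 1024)) ;;
    mib)  echo $((num * 1024 * 1024)) ;;
    gib)  echo $((num * 1024 * 1024 * 1024)) ;;
    *)    echo "unknown unit: $unit" >&2; return 1 ;;
  esac
}

to_bytes "1113"     # 1113
to_bytes "10MiB"    # 10485760
to_bytes "100kib"   # 102400
to_bytes "180 mb"   # 180000000
```

The four calls reproduce the four my_bytes examples, which is a handy sanity check when sizing settings like indices.recovery.max_bytes_per_sec.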

 

 

logstash also requires a Java environment before installation;

https://www.elastic.co/guide/en/logstash/current/installing-logstash.html#installing-logstash   # (yum installation is recommended for production)

https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.rpm

https://download.elastic.co/logstash/logstash/packages/centos/logstash-1.5.3-1.noarch.rpm

 

 

Installing from the binary tarball (logstash-1.5.3.tar.gz):

[root@test4 ~]# tar xf logstash-1.5.3.tar.gz -C /usr/local/

[root@test4 ~]# cd /usr/local                        

[root@test4 local]# ln -sv logstash-1.5.3/ logstash

`logstash' -> `logstash-1.5.3/'

[root@test4 ~]# /usr/local/logstash/bin/logstash -e 'input { stdin{} } output { stdout {} }'   # (standard input in, standard output out)

hello

Logstash startup completed

2016-12-27T00:55:58.155Z test4 hello

world

2016-12-27T00:56:07.105Z test4 world

^CSIGINT received. Shutting down the pipeline. {:level=>:warn}

^CSIGINT received. Terminating immediately.. {:level=>:fatal}

[root@test4 ~]# /usr/local/logstash/bin/logstash -e 'input { stdin{} } output { stdout { codec => rubydebug } }'   # (rubydebug codec)

hello

Logstash startup completed

{

      "message" => "hello",

     "@version" => "1",

   "@timestamp" => "2016-12-27T01:29:38.988Z",

         "host" => "test4"

}

^CSIGINT received. Shutting down the pipeline. {:level=>:warn}

^CSIGINT received. Terminating immediately.. {:level=>:fatal}

[root@test4 ~]# /usr/local/logstash/bin/logstash -e 'input { stdin {} } output { elasticsearch { host => "192.168.23.132" protocol => "http" } }'   # (output to elasticsearch)

hello world

'[DEPRECATED] use `require 'concurrent'` instead of `require 'concurrent_ruby'`

Logstash startup completed

 

http://192.168.23.132:9200/_plugin/head/

[screenshot: elasticsearch-head index view]

Note:

The thick black border marks a primary shard; the others are replica shards;

Basic query --> search (select the logstash index) --> click search   # (once the logs are in ES they can be searched)

 

 

[root@test5 ~]# /usr/local/elasticsearch/bin/plugin -i mobz/elasticsearch-head   # (install the cluster-management plugin on 192.168.23.133 too)


 

 

Installing from the RPM package (logstash-1.5.3-1.noarch.rpm):

[root@test4 ~]# rm -rf /usr/local/logstash*

[root@test4 ~]# rpm -ivh logstash-1.5.3-1.noarch.rpm

Preparing...                ###########################################[100%]

  1:logstash              ########################################### [100%]

[root@test4 ~]# vim /etc/init.d/logstash

LS_USER=logstash

LS_GROUP=logstash

LS_HOME=/var/lib/logstash

LS_HEAP_SIZE="128m"

LS_LOG_DIR=/var/log/logstash

LS_LOG_FILE="${LS_LOG_DIR}/$name.log"

LS_CONF_DIR=/etc/logstash/conf.d

LS_OPEN_FILES=16384

LS_NICE=19

LS_OPTS=""

[root@test4 ~]# vim /etc/logstash/conf.d/logstash.conf

input {

       file {

                path => "/tmp/messages"

       }

}

 

output {

       file {

                path =>"/tmp/%{+YYYY-MM-dd}-messages.gz"

                gzip => true

       }

}

[root@test4 ~]# /etc/init.d/logstash start

logstash started.

[root@test4 ~]# /etc/init.d/logstash status

logstash is running

[root@test4 ~]# cat /var/log/maillog >> /tmp/messages

[root@test4 ~]# cat /var/log/maillog >> /tmp/messages

[root@test4 ~]# cat /var/log/maillog >> /tmp/messages

[root@test4 ~]# ll /tmp

total 196

-rw-r--r--. 1 logstash logstash   5887 Dec 28 19:28 2016-12-29-messages.gz

drwxr-xr-x. 2 logstash logstash   4096 Dec 28 19:27 hsperfdata_logstash

drwxr-xr-x. 2 root     root      4096 Dec 28 19:18 hsperfdata_root

drwxr-xr-x. 2 root     root      4096 Dec 26 17:54 jna-3506402

-rw-r--r--. 1 root     root    147340 Dec 28 19:28 messages

 

[root@test4 ~]# vim /etc/logstash/conf.d/logstash.conf

input {

       file {

                path =>"/tmp/messages"

       }

}

 

output {

       file {

                path =>"/tmp/%{+YYYY-MM-dd}-messages.gz"

                gzip => true

       }

        elasticsearch {

                host => "192.168.23.132"

                protocol => "http"

                index => "system-messages-%{+YYYY.MM.dd}"

        }

}

[root@test4 ~]# /etc/init.d/logstash restart

Killing logstash (pid 5921) with SIGTERM

Waiting logstash (pid 5921) to die...

Waiting logstash (pid 5921) to die...

logstash stopped.

logstash started.

[root@test4 ~]# cat /var/log/maillog >> /tmp/messages

[root@test4 ~]# cat /var/log/maillog >> /tmp/messages

[root@test4 ~]# cat /var/log/maillog >> /tmp/messages

 

http://192.168.23.132:9200/_plugin/head/


 

 

Example:

node1 (test4, 192.168.23.132): elasticsearch, logstash

node2 (test5, 192.168.23.133): redis, elasticsearch, logstash

Prerequisite: node1 and node2 form an elasticsearch cluster;

datasource --> logstash --> redis --> logstash --> elasticsearch

(1) input(file) --> logstash --> output(redis)

(2) input(redis) --> logstash --> output(elasticsearch)

Note: in production, give each business line its own redis database (databases 0-15, 16 in total, 0 by default); for example db0 for system logs, db1 for access logs, db2 for error logs, db3 for mysql logs, and so on.

 

 

1

test5node2)上:

[root@test5 ~]# yum -y install redis

[root@test5 ~]# vim /etc/redis.conf

bind 192.168.23.133

[root@test5 ~]# /etc/init.d/redis start

Starting redis-server:                                     [  OK  ]

[root@test5 ~]# redis-cli -h 192.168.23.133 -p 6379

redis 192.168.23.133:6379> info

……

redis 192.168.23.133:6379> keys *

(empty list or set)

redis 192.168.23.133:6379> select 1

OK

redis 192.168.23.133:6379[1]> keys *

(empty list or set)

 

On test4 (node1):

[root@test4 ~]# vim /etc/logstash/conf.d/logstash.conf   # (see https://www.elastic.co/guide/en/logstash/current/plugins-inputs-redis.html)

input {

       file {

                path =>"/tmp/messages"

       }

}

 

output {

       redis {

                data_type => "list"

                key => "system-messages"

                host => "192.168.23.133"

                port => "6379"

                db => "1"

       }

}

[root@test4 ~]# /etc/init.d/logstash restart

Killing logstash (pid 5986) with SIGTERM

Waiting logstash (pid 5986) to die...

Waiting logstash (pid 5986) to die...

logstash stopped.

logstash started.

[root@test4 ~]# cat /var/log/maillog >> /tmp/messages

[root@test4 ~]# cat /var/log/maillog >> /tmp/messages

 

Check on test5 (node2):

redis 192.168.23.133:6379[1]> keys *

1) "system-messages"

redis 192.168.23.133:6379[1]> llen system-messages

(integer) 328

redis 192.168.23.133:6379[1]> lindex system-messages -1

"{\"message\":\"Dec 2819:06:17 test4 dhclient[4900]: bound to 192.168.23.132 -- renewal in 899seconds.\",\"@version\":\"1\",\"@timestamp\":\"2016-12-29T08:49:10.183Z\",\"host\":\"test4\",\"path\":\"/tmp/messages\"}"

 

2

node2上安裝logstash

[root@test5 ~]# java -version

java version "1.8.0_111"

Java(TM) SE Runtime Environment (build1.8.0_111-b14)

Java HotSpot(TM) 64-Bit Server VM (build25.111-b14, mixed mode)

[root@test5 ~]# rpm -ivh logstash-1.5.3-1.noarch.rpm

Preparing...               ########################################### [100%]

  1:logstash              ########################################### [100%]

[root@test5 ~]# /etc/init.d/elasticsearch status

Elasticsearch is running: PID:4165,Wrapper:STARTED, Java:STARTED

[root@test5 ~]# vim /etc/init.d/logstash

LS_HEAP_SIZE="128m"

[root@test5 ~]# vim /etc/logstash/conf.d/logstash.conf

input {

       redis {

                data_type => "list"

                key => "system-messages"

                host => "192.168.23.133"

                port => "6379"

                db => "1"

       }

}

 

output {

       elasticsearch {

                host => "192.168.23.133"

                protocol => "http"

                index => "system-redis-messages-%{+YYYY.MM.dd}"

       }

}

[root@test5 ~]# /etc/init.d/logstash start

logstash started.

 

Feed data in on node1:

[root@test4 ~]# cat /var/log/maillog >> /tmp/messages

[root@test4 ~]# cat /var/log/maillog >> /tmp/messages

Check the data in redis on node2; elasticsearch has already taken it, so the list length is 0:

redis 192.168.23.133:6379[1]> llen system-messages

(integer) 0

redis 192.168.23.133:6379[1]> llen system-messages

(integer) 0

redis 192.168.23.133:6379[1]> llen system-messages

(integer) 0

 

http://192.168.23.132:9200/_plugin/head/


 

 

 

Note:

codec plugins (json):

input --> decode --> filter --> encode --> output

decode, filter, encode   # (codec)

system logs: syslog;

access logs (codec => json): if a line is not complete JSON, logstash will not collect it, which means losing logs; the JSON parse failures show up in the logstash log; online JSON validator: http://jsonlint.com/;

error logs (codec => multiline);

Note:

codec => multiline

pattern => ".*\t.*"

application logs (codec => json);

other logs;
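Because the json codec drops lines that are not valid JSON, it is worth checking a log sample before pointing logstash at it. The sketch below (the sample file path and contents are made up for demonstration, and it assumes python3 is available) counts the lines that would be rejected:

```shell
# A sample log with one valid and one truncated JSON line
cat > /tmp/access_sample.log <<'EOF'
{"status": "200", "url": "/index.html"}
{"status": "200", "url":
EOF

# Count lines the json codec would reject
bad=0
while IFS= read -r line; do
  printf '%s' "$line" | python3 -c 'import sys,json; json.loads(sys.stdin.read())' 2>/dev/null \
    || bad=$((bad + 1))
done < /tmp/access_sample.log
echo "invalid JSON lines: $bad"
```

Running the same loop against a real access log before enabling codec => json tells you up front how many events would be silently lost.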

 

 

Example:

Have nginx write its access log as JSON:

[root@test4 ~]# yum -y install nginx

[root@test4 ~]# cp /etc/nginx/nginx.conf.default /etc/nginx/nginx.conf

cp: overwrite `/etc/nginx/nginx.conf'? y

[root@test4 ~]# vim /etc/nginx/nginx.conf

……

http {

……

    log_format logstash_json '{"@timestamp" : "$time_iso8601",'

                        '"host" : "$server_addr",'

                        '"client" : "$remote_addr",'

                        '"size" : $body_bytes_sent,'

                        '"responsetime" : $request_time,'

                        '"domain" : "$host",'

                        '"url" : "$uri",'

                        '"referer" : "$http_referer",'

                        '"agent" : "$http_user_agent",'

                        '"status" : "$status"}';

   server {

       listen       80;

       server_name  localhost;

……

        access_log  logs/access_json.log  logstash_json;

……

       }

……

}

[root@test4 ~]# mkdir /usr/share/nginx/logs

[root@test4 ~]# /etc/init.d/nginx configtest

nginx: the configuration file/etc/nginx/nginx.conf syntax is ok

nginx: configuration file/etc/nginx/nginx.conf test is successful

[root@test4 ~]# service nginx start

Starting nginx:                                           [  OK  ]

 

On node1:

[root@test4 ~]# vim /etc/logstash/conf.d/logstash.conf

input {

       file {

                path => "/usr/share/nginx/logs/access_json.log"

                codec => "json"

       }

}

 

output {

       redis {

                data_type => "list"

                key => "nginx-access-log"

                host => "192.168.23.133"

                port => "6379"

                db => "2"

       }

}

[root@test4 ~]# /etc/init.d/logstash restart

Killing logstash (pid 9034) with SIGTERM

Waiting logstash (pid 9034) to die...

Waiting logstash (pid 9034) to die...

Waiting logstash (pid 9034) to die...

logstash stopped.

logstash started.

 

On node2:

[root@test5 ~]# vim /etc/logstash/conf.d/logstash.conf

input {

       redis {

                data_type =>"list"

                key => "nginx-access-log"

                host => "192.168.23.133"

                port => "6379"

                db => "2"

       }

}

 

output {

       elasticsearch {

                host => "192.168.23.133"

                protocol => "http"

                index => "nginx-access-log-%{+YYYY.MM.dd}"

       }

}

[root@test5 ~]# /etc/init.d/logstash restart

Killing logstash (pid 3718) with SIGTERM

Waiting logstash (pid 3718) to die...

Waiting logstash (pid 3718) to die...

logstash stopped.

logstash started.
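The `%{+YYYY.MM.dd}` sprintf reference in the output above expands to the event's date, so a new index is created per day. Today's index name can be reproduced in shell (using UTC, which is what logstash timestamps use):

```shell
# Today's index name, as index => "nginx-access-log-%{+YYYY.MM.dd}" would build it
index="nginx-access-log-$(date -u +%Y.%m.%d)"
echo "$index"
```

Per-day indices matter both for the kibana index pattern and for curation: an old day can be dropped with a single DELETE request against its index.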

 

On node1:

[root@test4 ~]# ab -n 10000 -c 100 http://192.168.23.132:80/index.html

……

[root@test4 ~]# tail -5 /usr/share/nginx/logs/access_json.log

{"@timestamp" :"2017-01-03T00:32:01-08:00","host" :"192.168.23.132","client" :"192.168.23.132","size" : 0,"responsetime" :0.000,"domain" : "192.168.23.132","url" :"/index.html","referer" : "-","agent" :"ApacheBench/2.3","status" : "200"}

{"@timestamp" :"2017-01-03T00:32:01-08:00","host" :"192.168.23.132","client" :"192.168.23.132","size" : 0,"responsetime" :0.000,"domain" : "192.168.23.132","url" :"/index.html","referer" : "-","agent" :"ApacheBench/2.3","status" : "200"}

{"@timestamp" :"2017-01-03T00:32:01-08:00","host" :"192.168.23.132","client" :"192.168.23.132","size" : 0,"responsetime" :0.000,"domain" : "192.168.23.132","url" :"/index.html","referer" : "-","agent" :"ApacheBench/2.3","status" : "200"}

{"@timestamp" : "2017-01-03T00:32:01-08:00","host": "192.168.23.132","client" :"192.168.23.132","size" : 0,"responsetime" :0.000,"domain" : "192.168.23.132","url" :"/index.html","referer" : "-","agent" :"ApacheBench/2.3","status" : "200"}

{"@timestamp" :"2017-01-03T00:32:01-08:00","host" :"192.168.23.132","client" :"192.168.23.132","size" : 0,"responsetime" :0.000,"domain" : "192.168.23.132","url" :"/index.html","referer" : "-","agent" :"ApacheBench/2.3","status" : "200"}

 

node2 (Redis already installed via yum)

redis 192.168.23.133:6379[1]> select 2

OK

redis 192.168.23.133:6379[2]> keys *

1) "nginx-access-log"

……

redis 192.168.23.133:6379[2]> keys *   # (empty once Elasticsearch has consumed the list)

(empty list or set)
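Redis is acting here as a plain list broker: the shipper pushes each event onto the nginx-access-log key, and the indexing Logstash pops until the list is drained, which is why `keys *` eventually returns nothing. A rough in-memory sketch of that list semantics (no real Redis involved; the function names just mirror the RPUSH/LPOP commands):

```python
from collections import deque

broker = {}  # stand-in for one Redis db: key -> list

def rpush(key, value):
    # Shipper side: output { redis { data_type => "list" ... } }
    broker.setdefault(key, deque()).append(value)

def lpop(key):
    # Indexer side: input { redis { data_type => "list" ... } }
    q = broker.get(key)
    if not q:
        broker.pop(key, None)  # Redis deletes a list key once it is empty
        return None
    return q.popleft()

rpush("nginx-access-log", '{"status": "200"}')
rpush("nginx-access-log", '{"status": "200"}')

drained = []
while True:
    item = lpop("nginx-access-log")
    if item is None:
        break
    drained.append(item)

print(len(drained), list(broker.keys()))  # 2 [] -- i.e. "(empty list or set)"
```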

 

http://192.168.23.132:9200/_plugin/head/

wKioL1hswMHDVt8oAAC_BkTCnN8399.jpg

Under the "Data Browser" (數據瀏覽) tab:

wKiom1hswNHSbUa9AACuTogumJw539.jpg

 

 

 

3、

Kibana searches the data in Elasticsearch and presents it visually.

Kibana V1 (php); Kibana V2 (ruby); Kibana V3 (js); Kibana V4 (jruby --> nodejs);

 

[root@test4 ~]# tar xf kibana-4.1.1-linux-x64.tar.gz -C /usr/local/

[root@test4 ~]# cd /usr/local

[root@test4 local]# ln -sv kibana-4.1.1-linux-x64/ kibana

`kibana' -> `kibana-4.1.1-linux-x64/'

[root@test4 local]# vim /usr/local/kibana/config/kibana.yml

elasticsearch_url: "http://192.168.23.132:9200"

[root@test4 local]# kibana/bin/kibana   # (listens on port 5601; run it in the foreground first, and background it only once there are no errors)

{"name":"Kibana","hostname":"test4","pid":9687,"level":30,"msg":"No existing kibana index found","time":"2017-01-03T09:00:42.763Z","v":0}

{"name":"Kibana","hostname":"test4","pid":9687,"level":30,"msg":"Listening on 0.0.0.0:5601","time":"2017-01-03T09:00:42.850Z","v":0}

 

[root@test4 local]# cd

[root@test4 ~]# nohup /usr/local/kibana/bin/kibana &

[1] 9710

[root@test4 ~]# nohup: ignoring input and appending output to `nohup.out'

 

[root@test4 ~]# netstat -tnulp | grep :5601

tcp       0      0 0.0.0.0:5601                0.0.0.0:*                   LISTEN     9710/node 

 

http://192.168.23.132:5601

wKioL1hswOrzgq9RAABrjwShTuA678.jpg

 

Check "Index contains time-based events" and "Use event times to create index names".

Set "Index pattern interval" to Daily.

In "Index name or pattern", enter [nginx-access-log-]YYYY.MM.DD; Kibana automatically matches the corresponding index in Elasticsearch, as shown in the green box in the screenshot.

Click "Refresh fields" and select @timestamp as the time field.

Click "Create".
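The Logstash index option nginx-access-log-%{+YYYY.MM.dd} and the Kibana pattern [nginx-access-log-]YYYY.MM.DD must describe the same daily names for the match to work. A quick Python sketch of the name both sides agree on (strftime's %Y.%m.%d is the stand-in for the Joda/moment date patterns used above):

```python
from datetime import date

def daily_index(day):
    # Equivalent of Logstash's index => "nginx-access-log-%{+YYYY.MM.dd}"
    return "nginx-access-log-" + day.strftime("%Y.%m.%d")

print(daily_index(date(2017, 1, 3)))  # nginx-access-log-2017.01.03
```

Note that Logstash expands the index name from the event's @timestamp in UTC, so events near midnight local time may land in the adjacent day's index.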

wKiom1hswQDD2XW5AABknUYxmPs411.jpg

wKioL1hswQ6T7D9EAABoxt6Ogjk607.jpg

 

To add more indices: Settings --> Indices --> Add New. To make an index the default, click the green star.

wKiom1hswR3wPUSjAABHy3kdzNM907.jpg

 

Settings --> Advanced lets you tune Kibana's parameters.

discover:sampleSize controls how many log lines Discover shows; the default is 500, displayed in reverse order with the newest first. To speed up rendering, lower it to 50.

wKiom1hswaKit04rAACKKFk9GiI810.jpg

 

Discover

wKioL1hswa-CV0wzAABFqgUZmKU749.jpg

 

In the top-right corner, click "Last 15 minutes"; the choices are Quick, Relative, and Absolute. Click "This month".

wKioL1hswdfx6U0PAAA8VuHdZ_U940.jpg

 

On the left, under Available Fields, click domain, agent, size, and url in turn, clicking "add" on each.

wKiom1hsweWh-FzmAACAMqK4KCQ703.jpg

wKioL1hswfLQutAdAAB35irZe3E492.jpg

 

Enter status:200 in the search box; at its lower right it reports 12020 hits.

wKiom1hswf6g_5ZXAAAf_MazXKs974.jpg

 

Enter status:404 in the search box; the lower right reports 0 hits.

wKioL1hswg7gEL0TAAAoy4_tTjY258.jpg

 

[root@test4 ~]# ab -n 1000 -c 10 http://192.168.23.132/nimeimei.html

 

redis 192.168.23.133:6379[2]> keys *

1) "nginx-access-log"

redis 192.168.23.133:6379[2]> keys *

1) "nginx-access-log"

redis 192.168.23.133:6379[2]> keys *

(empty list or set)

 

wKiom1hswiKRKoHoAAB6Qno7DyM214.jpg

 

Under Available Fields, add status.

Enter status:404 in the search box.

wKiom1hswjSR5wmrAACFQEJQ1Ow987.jpg

 

status:404 OR status:200; AND, NOT, and TO can be used as well.

wKiom1hswkWBbwNMAAAhP_8y-Lg663.jpg

 

status:[400 TO 499]

wKiom1hswlPxKZbDAAA1wpqOCoE625.jpg

 

Note:

status:200

status:404

status:200 OR status:404

status:[400 TO 499]
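These queries use Lucene syntax: OR unions two term queries, and [400 TO 499] is an inclusive range. Over a handful of statuses, the same logic looks like this (a plain Python illustration of the semantics, not the Lucene implementation):

```python
# Status values as strings, as they appear in the JSON access log
statuses = ["200", "200", "404", "301", "404", "500"]

# status:200 OR status:404 -- union of two term queries
hits_200_or_404 = [s for s in statuses if s in ("200", "404")]

# status:[400 TO 499] -- the TO range is inclusive at both ends
hits_4xx = [s for s in statuses if 400 <= int(s) <= 499]

print(len(hits_200_or_404), len(hits_4xx))  # 4 2
```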

The second and third buttons to the right of the search box are "Save Search" and "Load Saved Search".

 

Visualize offers: area chart, data table, line chart, markdown widget, metric, pie chart, tile map, and vertical bar chart.

wKiom1hswmOSGHb_AACM3RLvgRo650.jpg

 

 

if EXPRESSION {

         ……

} else if EXPRESSION {

         ……

} else {

         ……

}

Example:

input {

    file {

        type => "apache"

        path => "/var/log/httpd/logs/access_log"

    }

    file {

        type => "php-error-log"

        path => "/var/log/php/php_errors.log"

    }

}

output {

    if [type] == "apache" {

        redis {

            host => "192.168.23.133"

            port => "6379"

            db => "2"

            data_type => "list"

            key => "api-access-log-`HOSTNAME`"

        }

    }

    if [type] == "php-error-log" {

        redis {

            host => "192.168.23.133"

            port => "6379"

            db => "3"

            data_type => "list"

            key => "php-error-log"

        }

    }

    if [type] == "api-run-log" {

        redis {

            host => "192.168.23.133"

            port => "6379"

            db => "4"

            data_type => "list"

            key => "api-run-log"

        }

    }

}
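The output conditionals above fan events out to different Redis dbs keyed on the type field. The routing decision itself is a simple lookup; a hypothetical Python sketch (the type-to-db mapping mirrors the config above, with the per-host key suffix left out for brevity):

```python
# Mirrors the output conditionals: the event's type decides the Redis db and key
ROUTES = {
    "apache":        {"db": 2, "key": "api-access-log"},
    "php-error-log": {"db": 3, "key": "php-error-log"},
    "api-run-log":   {"db": 4, "key": "api-run-log"},
}

def route(event):
    # Events whose type matches no conditional go nowhere,
    # just as in the Logstash output block above.
    return ROUTES.get(event.get("type"))

print(route({"type": "php-error-log", "message": "boom"}))
```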

Note: the JSON log itself must not contain a type field, otherwise Logstash will fail to collect the event.
