Logstash + Elasticsearch + Kibana + Redis + Syslog-ng
ElasticSearch is an open-source, distributed, RESTful search engine built on Lucene. Designed for cloud environments, it delivers real-time search and is stable, reliable, fast, and easy to install and use. It supports indexing data as JSON over HTTP.
Logstash is a platform for shipping, processing, managing, and searching application logs and events. You can use it to centralize log collection, and it provides a web interface for queries and statistics. Logstash itself can be swapped out for alternatives such as the popular fluentd.
Kibana is a web interface for log analysis on top of Logstash and ElasticSearch. With it you can search, visualize, and analyze your logs efficiently.
Redis is a high-performance in-memory key-value database. It is not strictly required here; it acts as a buffer to prevent data loss.
References:

http://www.logstash.net/
http://chenlinux.com/2012/10/21/elasticearch-simple-usage/
http://www.elasticsearch.cn
http://download.oracle.com/otn-pub/java/jdk/7u67-b01/jdk-7u67-linux-x64.tar.gz?AuthParam=1408083909_3bf5b46169faab84d36cf74407132bba
http://curran.blog.51cto.com/2788306/1263416
http://storysky.blog.51cto.com/628458/1158707/
http://zhumeng8337797.blog.163.com/blog/static/10076891420142712316899/
http://enable.blog.51cto.com/747951/1049411
http://chenlinux.com/2014/06/11/nginx-access-log-to-elasticsearch/
http://www.w3c.com.cn/%E5%BC%80%E6%BA%90%E5%88%86%E5%B8%83%E5%BC%8F%E6%90%9C%E7%B4%A2%E5%B9%B3%E5%8F%B0elkelasticsearchlogstashkibana%E5%85%A5%E9%97%A8%E5%AD%A6%E4%B9%A0%E8%B5%84%E6%BA%90%E7%B4%A2%E5%BC%95
http://woodygsd.blogspot.com/2014/06/an-adventure-with-elk-or-how-to-replace.html
http://www.ricardomartins.com.br/enviando-dados-externos-para-a-stack-elk/
http://tinytub.github.io/logstash-install.html
http://jamesmcfadden.co.uk/securing-elasticsearch-with-nginx/
https://github.com/elasticsearch/logstash/blob/master/patterns/grok-patterns
http://zhaoyanblog.com/archives/319.html
http://www.vpsee.com/2014/05/install-and-play-with-elasticsearch/
IP addresses used below:
118.x.x.x/16 is the client IP range
192.168.0.39 and 61.x.x.x are the ELK host's internal and external IPs
Install the JDK (download from https://www.reucon.com/cdn/java/jdk-7u67-linux-x64.tar.gz):

#tar zxvf jdk-7u67-linux-x64.tar.gz
#mv jdk1.7.0_67 /usr/local/
#cd /usr/local/
#ln -s jdk1.7.0_67 jdk
#chown -R root:root jdk/
Configure the environment variables:

vim /etc/profile
export JAVA_HOME=/usr/local/jdk
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH
export REDIS_HOME=/usr/local/redis
export ES_HOME=/usr/local/elasticsearch
export ES_CLASSPATH=$ES_HOME/config
Apply the variables:

source /etc/profile
Verify the version:

#java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
#If an older Java was installed previously, remove it first:
#rpm -qa | grep java
java-1.6.0-openjdk-1.6.0.0-1.24.1.10.4.el5
java-1.6.0-openjdk-devel-1.6.0.0-1.24.1.10.4.el5
#rpm -e java-1.6.0-openjdk-1.6.0.0-1.24.1.10.4.el5 java-1.6.0-openjdk-devel-1.6.0.0-1.24.1.10.4.el5
Install Redis:

#wget http://download.redis.io/releases/redis-2.6.17.tar.gz
#tar zxvf redis-2.6.17.tar.gz
#mv redis-2.6.17 /usr/local/
#cd /usr/local
#ln -s redis-2.6.17 redis
#cd /usr/local/redis
#make
#make install
#cd utils
#./install_server.sh
Please select the redis port for this instance: [6379]
Selecting default: 6379
Please select the redis config file name [/etc/redis/6379.conf]
Selected default - /etc/redis/6379.conf
Please select the redis log file name [/var/log/redis_6379.log]
Selected default - /var/log/redis_6379.log
Please select the data directory for this instance [/var/lib/redis/6379]
Selected default - /var/lib/redis/6379
Please select the redis executable path [/usr/local/bin/redis-server]
Edit the config file:

vim /etc/redis/6379.conf
daemonize yes
port 6379
timeout 300
tcp-keepalive 60
Start it:

/etc/init.d/redis_6379 start
If you get "exists, process is already running or crashed", edit /etc/init.d/redis_6379 and remove the leading \n.
Enable start on boot: chkconfig --add redis_6379
Install Elasticsearch:

http://www.elasticsearch.org/
http://www.elasticsearch.cn
For a cluster, as long as the nodes are on the same network segment and configured with the same cluster.name, each Elasticsearch instance discovers the others on startup and they form a cluster.
#wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.2.tar.gz
#tar zxvf elasticsearch-1.3.2.tar.gz
#mv elasticsearch-1.3.2 /usr/local/
#cd /usr/local/
#ln -s elasticsearch-1.3.2 elasticsearch
#elasticsearch/bin/elasticsearch -f
[2014-08-20 13:19:05,710][INFO ][node ] [Jackpot] version[1.3.2], pid[19320], build[dee175d/2014-08-13T14:29:30Z]
[2014-08-20 13:19:05,727][INFO ][node ] [Jackpot] initializing ...
[2014-08-20 13:19:05,735][INFO ][plugins ] [Jackpot] loaded [], sites []
[2014-08-20 13:19:10,722][INFO ][node ] [Jackpot] initialized
[2014-08-20 13:19:10,723][INFO ][node ] [Jackpot] starting ...
[2014-08-20 13:19:10,934][INFO ][transport ] [Jackpot] bound_address {inet[/0.0.0.0:9301]}, publish_address {inet[/61.x.x.x:9301]}
[2014-08-20 13:19:10,958][INFO ][discovery ] [Jackpot] elasticsearch/5hUOX-2ES82s_0zvI9BUdg
[2014-08-20 13:19:14,011][INFO ][cluster.service ] [Jackpot] new_master [Jackpot][5hUOX-2ES82s_0zvI9BUdg][Impala][inet[/61.x.x.x:9301]], reason: zen-disco-join (elected_as_master)
[2014-08-20 13:19:14,060][INFO ][http ] [Jackpot] bound_address {inet[/0.0.0.0:9201]}, publish_address {inet[/61.x.x.x:9201]}
[2014-08-20 13:19:14,061][INFO ][node ] [Jackpot] started
[2014-08-20 13:19:14,106][INFO ][gateway ] [Jackpot] recovered [0] indices into cluster_state
[2014-08-20 13:20:58,273][INFO ][node ] [Jackpot] stopping ...
[2014-08-20 13:20:58,323][INFO ][node ] [Jackpot] stopped
[2014-08-20 13:20:58,323][INFO ][node ] [Jackpot] closing ...
[2014-08-20 13:20:58,332][INFO ][node ] [Jackpot] closed
Press Ctrl+C to exit.
Run it in the background:

elasticsearch/bin/elasticsearch -d
Query the default port, 9200:

curl -X GET http://localhost:9200
{
  "status" : 200,
  "name" : "Steve Rogers",
  "version" : {
    "number" : "1.3.2",
    "build_hash" : "dee175dbe2f254f3f26992f5d7591939aaefd12f",
    "build_timestamp" : "2014-08-13T14:29:30Z",
    "build_snapshot" : false,
    "lucene_version" : "4.9"
  },
  "tagline" : "You Know, for Search"
}
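The root endpoint returns plain JSON, so it is easy to check from a script. A minimal sketch that parses the response captured above (inlined here as a string rather than fetched from the server):

```python
import json

# The body returned by GET / on the Elasticsearch 1.3.2 node above,
# trimmed to the fields we inspect.
body = '''{
  "status" : 200,
  "name" : "Steve Rogers",
  "version" : { "number" : "1.3.2", "build_snapshot" : false, "lucene_version" : "4.9" },
  "tagline" : "You Know, for Search"
}'''

info = json.loads(body)
print(info["status"])               # 200
print(info["version"]["number"])    # 1.3.2
```

The same check against a live node would just replace the string with the output of an HTTP GET to http://localhost:9200.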
Install Logstash:

http://logstash.net/
#wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz
#tar zxvf logstash-1.4.2.tar.gz
#mv logstash-1.4.2 /usr/local
#cd /usr/local
#ln -s logstash-1.4.2 logstash
#mkdir logstash/conf
#chown -R root:root logstash
#Because of the JVM's default heap size, garbage collection, and related issues, Logstash no longer ships as a runnable jar as of 1.4.0.
#Old way: java -jar logstash-1.3.3-flatjar.jar agent -f logstash.conf
#New way: bin/logstash agent -f logstash.conf
#Logstash works straight out of the archive; for the command-line flags see http://logstash.net/docs/1.2.1/flags
Install Kibana:

Recent Logstash releases bundle Kibana, but you can also deploy it separately. Kibana 3 is pure JavaScript + HTML on the client side, so it can be served from any HTTP server.
http://www.elasticsearch.org/overview/elkdownloads/
#wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.0.tar.gz
#tar zxvf kibana-3.1.0.tar.gz
#mv kibana-3.1.0 /opt/htdocs/www/kibana
#vim /opt/htdocs/www/kibana/config.js
Point Kibana at the Elasticsearch backend:

elasticsearch: "http://"+window.location.hostname+":9200",
Add iptables rules (6379 is the Redis port, 9200 the Elasticsearch port; 118.x.x.x/16 is the test client range):

iptables -A INPUT -p tcp -m tcp -s 118.x.x.x/16 --dport 9200 -j ACCEPT
Test with a run that echoes stdin to stdout:

bin/logstash -e 'input { stdin { } } output { stdout {} }'
Type "hello" to test:

2014-08-20T05:17:02.876+0000 Impala hello
Test a run that ships stdin to the Elasticsearch backend:

bin/logstash -e 'input { stdin { } } output { elasticsearch { host => localhost } }'
Open Kibana:

http://adminimpala.campusapply.com/kibana/index.html#/dashboard/file/default.json

If indices exist you will see:
Yes- Great! We have a prebuilt dashboard: (Logstash Dashboard). See the note to the right about making it your global default
Otherwise:
No results There were no results because no indices were found that match your selected time span
Point Kibana at the right indices:

In the top-right corner of Kibana click "configure dashboard", then go to Index Settings.
[logstash-]YYYY.MM.DD
This pattern must match the index names Logstash writes.
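Kibana expands a pattern such as [web-]YYYY.MM.DD into one concrete index per day, which is what Logstash's index => "web-%{+YYYY.MM.dd}" produces on the other side. A quick sketch of the naming, using Python's strftime as a stand-in for the Joda-style date pattern:

```python
from datetime import date

def daily_index(prefix, d):
    # Logstash's index => "web-%{+YYYY.MM.dd}" resolves to prefix + "-" + YYYY.MM.dd
    return "%s-%s" % (prefix, d.strftime("%Y.%m.%d"))

print(daily_index("web", date(2014, 8, 26)))  # web-2014.08.26
print(daily_index("logstash", date(2014, 8, 25)))  # logstash-2014.08.25
```

If the two patterns disagree (for example, a yearly "syslog-%{+YYYY}" index queried with a daily Kibana pattern), Kibana finds no indices and shows the "No results" message above.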
How Elasticsearch's data-modeling concepts map to MySQL's:

MySQL          Elasticsearch
database       index
table          type
table schema   mapping
row            document
field          field
syslog-ng.conf
#(other contents omitted)
# Remote logging syslog
source s_remote {
  udp(ip(192.168.0.39) port(514));
};
#nginx log
source s_remotetcp {
  tcp(ip(192.168.0.39) port(514) log_fetch_limit(100) log_iw_size(50000) max-connections(50) );
};
filter f_filter12 { program('c1gstudio\.com'); };
#logstash syslog
destination d_logstash_syslog { udp("localhost" port(10999) localport(10998) ); };
#logstash web
destination d_logstash_web { tcp("localhost" port(10997) localport(10996) ); };
log { source(s_remote); destination(d_logstash_syslog); };
log { source(s_remotetcp); filter(f_filter12); destination(d_logstash_web); };
logstash_syslog.conf
input {
  udp {
    port => 10999
    type => syslog
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    host => localhost
    index => "syslog-%{+YYYY}"
  }
}
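For offline testing, the grok line above can be approximated in plain Python. This is a hypothetical simplified regex for illustration, not the real grok pattern library:

```python
import re

# Rough analogue of %{SYSLOGTIMESTAMP} %{SYSLOGHOST} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA}
SYSLOG = re.compile(
    r'(?P<timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) '
    r'(?P<hostname>\S+) '
    r'(?P<program>[\w./-]+)(?:\[(?P<pid>\d+)\])?: '
    r'(?P<message>.*)'
)

line = "Aug 20 13:19:05 Impala sshd[1234]: Accepted publickey for root"
m = SYSLOG.match(line)
assert m is not None
print(m.group("program"), m.group("pid"))  # sshd 1234
```

Grok's named captures become the event fields (syslog_program, syslog_pid, ...) that the filter's add_field and date stages then work on.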
logstash_redis.conf
input {
  tcp {
    port => 10997
    type => web
  }
}
filter {
  grok {
    match => [ "message", "%{SYSLOGTIMESTAMP:syslog_timestamp} (?:%{SYSLOGFACILITY:syslog_facility} )?%{SYSLOGHOST:syslog_source} %{PROG:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{IPORHOST:clientip} - (?:%{USER:remote_user}|-) \[%{HTTPDATE:timestamp}\] \"%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:status} (?:%{NUMBER:body_bytes_sent}|-) \"(?:%{URI:http_referer}|-)\" %{QS:agent} (?:%{IPV4:http_x_forwarded_for}|-)" ]
    remove_field => [ '@version', 'host', 'syslog_timestamp', 'syslog_facility', 'syslog_pid' ]
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  useragent {
    source => "agent"
    prefix => "useragent_"
    remove_field => [ "useragent_device", "useragent_major", "useragent_minor", "useragent_patch", "useragent_os", "useragent_os_major", "useragent_os_minor" ]
  }
  geoip {
    source => "clientip"
    fields => [ "country_name", "region_name", "city_name", "real_region_name", "latitude", "longitude" ]
    remove_field => [ "[geoip][longitude]", "[geoip][latitude]", "location", "region_name" ]
  }
}
output {
  #stdout { codec => rubydebug }
  redis {
    batch => true
    batch_events => 500
    batch_timeout => 5
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash:web"
    workers => 2
  }
}
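The heart of that config is the nginx access-line grok plus the HTTPDATE conversion. A hypothetical, much-simplified Python equivalent (only a few of the fields, nothing like the full grok pattern set) shows what the filter extracts:

```python
import re
from datetime import datetime

# Simplified stand-in for the clientip/method/request/status part of the grok above.
ACCESS = re.compile(
    r'(?P<clientip>\S+) - (?P<remote_user>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<request>\S+) HTTP/(?P<httpversion>[\d.]+)" '
    r'(?P<status>\d{3}) (?P<body_bytes_sent>\d+|-)'
)

line = '118.1.2.3 - - [26/Aug/2014:10:38:03 +0800] "GET /index.html HTTP/1.1" 200 612'
m = ACCESS.match(line)
assert m is not None

# The date filter's "dd/MMM/yyyy:HH:mm:ss Z" corresponds to this strptime format:
ts = datetime.strptime(m.group("timestamp"), "%d/%b/%Y:%H:%M:%S %z")
print(m.group("clientip"), m.group("status"), ts.isoformat())
```

The date filter does the same job as the strptime call: it turns the captured timestamp field into the event's @timestamp, so events are indexed under the day they happened rather than the day they were shipped.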
logstash_web.conf
input {
  redis {
    host => "127.0.0.1"
    port => "6379"
    key => "logstash:web"
    data_type => "list"
    codec => "json"
    type => "web"
  }
}
output {
  elasticsearch {
    flush_size => 5000
    host => localhost
    idle_flush_time => 10
    index => "web-%{+YYYY.MM.dd}"
  }
  #stdout { codec => rubydebug }
}
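logstash_redis.conf pushes batches of events onto the Redis list logstash:web and logstash_web.conf pops them off the other end, so Redis acts as a FIFO buffer between the two agents. A toy sketch of those list semantics, with an in-memory deque standing in for Redis (no Redis client involved):

```python
from collections import deque

queue = deque()  # stands in for the Redis list "logstash:web"

def rpush(item):
    # Producer side: logstash_redis.conf appends events to the tail.
    queue.append(item)

def blpop():
    # Consumer side: logstash_web.conf takes events from the head.
    return queue.popleft() if queue else None

for event in ('{"clientip":"118.1.2.3"}', '{"status":"200"}'):
    rpush(event)
print(blpop())  # {"clientip":"118.1.2.3"} -- first in, first out
```

Because the list lives in Redis, events queue up there if the indexing agent or Elasticsearch is down, which is exactly the data-loss protection mentioned in the introduction.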
Start Elasticsearch and Logstash:

/usr/local/elasticsearch/bin/elasticsearch -d
/usr/local/logstash/bin/logstash agent -f /usr/local/logstash/conf/logstash_syslog.conf &
/usr/local/logstash/bin/logstash agent -f /usr/local/logstash/conf/logstash_redis.conf &
/usr/local/logstash/bin/logstash agent -f /usr/local/logstash/conf/logstash_web.conf &
Shut down:

ps aux | egrep 'search|logstash'
kill pid
Install the elasticsearch-servicewrapper controller:

On a server you can use the elasticsearch-servicewrapper ES plugin. It takes a parameter choosing whether ES runs in the foreground or background, and supports starting, stopping, and restarting the ES service (the stock ES script can only be stopped with Ctrl+C).
To use it, download the service folder from https://github.com/elasticsearch/elasticsearch-servicewrapper and put it under ES's bin directory. The commands are:
bin/service/elasticsearch <command>
console  run ES in the foreground
start    run ES in the background
stop     stop ES
install  register ES as a service started at boot
remove   unregister the boot-time service
vim /usr/local/elasticsearch/service/elasticsearch.conf
set.default.ES_HOME=/usr/local/elasticsearch
Command examples
Check node status:

http://61.x.x.x:9200/_status?pretty=true
Check cluster health:

http://61.x.x.x:9200/_cat/health?v
epoch timestamp cluster status node.total node.data shards pri relo init unassign
1409021531 10:52:11 elasticsearch yellow 2 1 20 20 0 0 20
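Because ?v prints a header row, the output zips straight into a dict, which is handy for monitoring scripts. A small sketch over the captured output above:

```python
# Header and data row exactly as returned by /_cat/health?v above.
header = "epoch timestamp cluster status node.total node.data shards pri relo init unassign"
row = "1409021531 10:52:11 elasticsearch yellow 2 1 20 20 0 0 20"

# Pair each column name with its value.
health = dict(zip(header.split(), row.split()))
print(health["status"])      # yellow
print(health["unassign"])    # 20
```

Here status is yellow because all 20 replica shards are unassigned: with data on only one node, the replicas have nowhere to go.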
List the cluster's indices:

http://61.x.x.x:9200/_cat/indices?v
health index pri rep docs.count docs.deleted store.size pri.store.size
yellow web-2014.08.25 5 1 5990946 0 3.6gb 3.6gb
yellow kibana-int 5 1 2 0 20.7kb 20.7kb
yellow syslog-2014 5 1 709 0 585.6kb 585.6kb
yellow web-2014.08.26 5 1 1060326 0 712mb 712mb
Delete indices:

curl -XDELETE 'http://localhost:9200/kibana-int/'
curl -XDELETE 'http://localhost:9200/logstash-2014.08.*'
Optimize an index:

curl -XPOST 'http://localhost:9200/old-index-name/_optimize'
View the logs:

tail /usr/local/elasticsearch/logs/elasticsearch.log
2.4mb]->[2.4mb]/[273mb]}{[survivor] [3.6mb]->[34.1mb]/[34.1mb]}{[old] [79.7mb]->[80mb]/[682.6mb]}
[2014-08-26 10:37:14,953][WARN ][monitor.jvm ] [Red Shift] [gc][young][71044][54078] duration [43s], collections [1]/[46.1s], total [43s]/[26.5m], memory [384.7mb]->[123mb]/[989.8mb], all_pools {[young] [270.5mb]->[1.3mb]/[273mb]}{[survivor] [34.1mb]->[22.3mb]/[34.1mb]}{[old] [80mb]->[99.4mb]/[682.6mb]}
[2014-08-26 10:38:03,619][WARN ][monitor.jvm ] [Red Shift] [gc][young][71082][54080] duration [6.6s], collections [1]/[9.1s], total [6.6s]/[26.6m], memory [345.4mb]->[142.1mb]/[989.8mb], all_pools {[young] [224.2mb]->[2.8mb]/[273mb]}{[survivor] [21.8mb]->[34.1mb]/[34.1mb]}{[old] [99.4mb]->[105.1mb]/[682.6mb]}
[2014-08-26 10:38:10,109][INFO ][cluster.service ] [Red Shift] removed {[logstash-Impala-26670-2010][av8JOuEoR_iK7ZO0UaltqQ][Impala][inet[/61.x.x.x:9302]]{client=true, data=false},}, reason: zen-disco-node_failed([logstash-Impala-26670-2010][av8JOuEoR_iK7ZO0UaltqQ][Impala][inet[/61.x.x.x:9302]]{client=true, data=false}), reason transport disconnected (with verified connect)
[2014-08-26 10:39:37,899][WARN ][monitor.jvm ] [Red Shift] [gc][young][71171][54081] duration [3.4s], collections [1]/[4s], total [3.4s]/[26.6m], memory [411.7mb]->[139.5mb]/[989.8mb], all_pools {[young] [272.4mb]->[1.5mb]/[273mb]}{[survivor] [34.1mb]->[29.1mb]/[34.1mb]}{[old] [105.1mb]->[109mb]/[682.6mb]}
Install bigdesk:

For the full plugin list see http://www.elasticsearch.org/guide/reference/modules/plugins/
There are plenty of plugins; in my view the ones below are the most worth watching, and the rest depend on your needs (for example, look at the river plugins if you need to import data).
bigdesk shows the cluster's JVM stats, disk I/O, and index create/delete activity, which makes it useful for finding system bottlenecks and monitoring cluster state. Install it with the command below, or see the project page: https://github.com/lukas-vlcek/bigdesk
bin/plugin -install lukas-vlcek/bigdesk
Downloading .........DONE