Server configuration: CentOS 6.6 x86_64, CPU: 1 core, MEM: 2G (low specs, since this is a lab setup)
Note: three servers are used for the Elasticsearch cluster here; adjust the layout to your own situation.
Note: the components are installed with yum here; if you need a newer version, you can build from source instead.
The following is done on 10.0.18.144; 10.0.18.145 is configured the same way as 144.
1. Install nginx
Configure the yum repository and install nginx
#vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1
Install:
#yum install nginx -y
Check the version:
#rpm -qa nginx
nginx-1.10.1-1.el6.ngx.x86_64
Modify the nginx configuration file as follows:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log notice;    #the default level is warn
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" $http_x_forwarded_for $request_length $msec $connection_requests $request_time';
    ##added $request_length $msec $connection_requests $request_time
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 80;
        server_name localhost;
        access_log /var/log/nginx/access.log main;
        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
Modify the nginx default page:
#vi /usr/share/nginx/html/index.html
<body>
<h1>Welcome to nginx!</h1>
change to
<body>
<h1>Welcome to nginx! 144</h1>
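Before starting the service, it is worth confirming that the edited configuration parses cleanly; a minimal sketch, assuming the default configuration path used by the package:
# Validate the syntax of /etc/nginx/nginx.conf before (re)starting
#nginx -t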
Start nginx and test access:
#service nginx start
#chkconfig --add nginx
#chkconfig nginx on
Check that it is listening:
#netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State      PID/Program name
tcp   0      0      0.0.0.0:22         0.0.0.0:*          LISTEN     1023/sshd
tcp   0      0      127.0.0.1:25       0.0.0.0:*          LISTEN     1101/master
tcp   0      0      0.0.0.0:80         0.0.0.0:*          LISTEN     1353/nginx
tcp   0      0      :::22              :::*               LISTEN     1023/sshd
tcp   0      0      ::1:25             :::*               LISTEN     1101/master
Test access from a browser, as shown below:
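If no browser is at hand, the same check can be done from the command line; a minimal sketch (assumes curl is installed and 10.0.18.144 is reachable):
# Fetch the modified default page and look for the custom heading
#curl -s http://10.0.18.144/ | grep '<h1>'
# expected: <h1>Welcome to nginx! 144</h1>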
2. Install and configure the Java environment
Installing directly from the rpm package is the most convenient way:
#rpm -ivh jdk-8u92-linux-x64.rpm
Preparing...                ########################################### [100%]
   1:jdk1.8.0_92            ########################################### [100%]
Unpacking JAR files...
        tools.jar...
        plugin.jar...
        javaws.jar...
        deploy.jar...
        rt.jar...
        jsse.jar...
        charsets.jar...
        localedata.jar...
#java -version
java version "1.8.0_92"
Java(TM) SE Runtime Environment (build 1.8.0_92-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.92-b14, mixed mode)
3. Install and configure logstash
Configure the logstash yum repository as follows:
#vim /etc/yum.repos.d/logstash.repo
[logstash-2.3]
name=Logstash repository for 2.3.x packages
baseurl=https://packages.elastic.co/logstash/2.3/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Install logstash:
#yum install logstash -y
Check the version:
#rpm -qa logstash
logstash-2.3.4-1.noarch
Create the logstash configuration file:
#cd /etc/logstash/conf.d
#vim logstash.conf
input {
    file {
        path => ["/var/log/nginx/access.log"]
        type => "nginx_log"
        start_position => "beginning"
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
Check the syntax for errors:
#/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --configtest
Configuration OK
#the syntax is OK
Start it and watch the collected nginx log entries:
#only part of the output is shown
#/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf
Settings: Default pipeline workers: 1
Pipeline main started
{
       "message" => "10.0.90.8 - - [26/Aug/2016:15:30:18 +0800] \"GET / HTTP/1.1\" 304 0 \"-\" \"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.3; .NET4.0C; .NET4.0E)\" \"-\" 415 1472196618.085 1 0.000",
      "@version" => "1",
    "@timestamp" => "2016-08-26T07:30:32.699Z",
          "path" => "/var/log/nginx/access.log",
          "host" => "0.0.0.0",
          "type" => "nginx_log"
}
{
       "message" => "10.0.90.8 - - [26/Aug/2016:15:30:18 +0800] \"GET / HTTP/1.1\" 304 0 \"-\" \"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.3; .NET4.0C; .NET4.0E)\" \"-\" 415 1472196618.374 2 0.000",
      "@version" => "1",
    "@timestamp" => "2016-08-26T07:30:32.848Z",
          "path" => "/var/log/nginx/access.log",
          "host" => "0.0.0.0",
          "type" => "nginx_log"
}
………………
PS: other logstash versions reportedly default to 4 pipeline workers, but in the 2.3.4 build installed here the default is 1.
That is because the default follows the server's CPU core count; these servers each have a single core, so the default is 1.
You can list the available options with the /opt/logstash/bin/logstash -h command.
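If you want something other than the CPU-derived default, the worker count can also be set on the command line; a minimal sketch (the worker flag is listed by logstash -h on 2.3.x, but confirm the exact option on your own version):
# Run the same pipeline with 2 pipeline workers instead of the default
#/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf -w 2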
Modify the logstash configuration file so that the log data is shipped to redis:
#cat /etc/logstash/conf.d/logstash.conf
input {
    file {
        path => ["/var/log/nginx/access.log"]
        type => "nginx_log"
        start_position => "beginning"
    }
}
output {
    redis {
        host => "10.0.18.146"
        key => 'logstash-redis'
        data_type => 'list'
    }
}
Check the syntax and start the service:
#/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --configtest
Configuration OK
#service logstash start
logstash started.
Check the running process:
#ps -ef | grep logstash
logstash  2029     1 72 15:37 pts/0    00:00:18 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/var/lib/logstash -Xmx1g -Xss2048k -Djffi.boot.library.path=/opt/logstash/vendor/jruby/lib/jni -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/var/lib/logstash -XX:HeapDumpPath=/opt/logstash/heapdump.hprof -Xbootclasspath/a:/opt/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/opt/logstash/vendor/jruby -Djruby.lib=/opt/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /opt/logstash/lib/bootstrap/environment.rb logstash/runner.rb agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log
root      2076  1145  0 15:37 pts/0    00:00:00 grep logstash
Download and install redis
#yum install wget gcc gcc-c++ -y    #skip whatever is already installed
#wget http://download.redis.io/releases/redis-3.0.7.tar.gz
#tar xf redis-3.0.7.tar.gz
#cd redis-3.0.7
#make
After make finishes without errors, create the directories:
#mkdir -p /usr/local/redis/{conf,bin}
#cp ./*.conf /usr/local/redis/conf/
#cp runtest* /usr/local/redis/
#cd utils/
#cp mkrelease.sh /usr/local/redis/bin/
#cd ../src
#cp redis-benchmark redis-check-aof redis-check-dump redis-cli redis-sentinel redis-server redis-trib.rb /usr/local/redis/bin/
Create the redis data and log directories:
#mkdir -pv /data/redis/db
#mkdir -pv /data/log/redis
Modify the redis configuration file:
#cd /usr/local/redis/conf
#vi redis.conf
dir ./        change to
dir /data/redis/db/
Save and exit.
Start redis:
#nohup /usr/local/redis/bin/redis-server /usr/local/redis/conf/redis.conf &
Check the redis process:
#ps -ef | grep redis
root      4425  1149  0 16:21 pts/0    00:00:00 /usr/local/redis/bin/redis-server *:6379
root      4435  1149  0 16:22 pts/0    00:00:00 grep redis
#netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State      PID/Program name
tcp   0      0      0.0.0.0:22         0.0.0.0:*          LISTEN     1402/sshd
tcp   0      0      127.0.0.1:25       0.0.0.0:*          LISTEN     1103/master
tcp   0      0      0.0.0.0:6379       0.0.0.0:*          LISTEN     4425/redis-server *
tcp   0      0      :::22              :::*               LISTEN     1402/sshd
tcp   0      0      ::1:25             :::*               LISTEN     1103/master
tcp   0      0      :::6379            :::*               LISTEN     4425/redis-server *
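With redis up and the nginx-side logstash agents already shipping to it, a quick way to confirm that events are actually landing in the queue is to inspect the list key directly; a minimal sketch using the redis-cli binary built above (note that once the logstash server starts consuming, the list is drained and the length may read 0):
# Length of the list key that the shippers push to
#/usr/local/redis/bin/redis-cli -h 10.0.18.146 llen logstash-redis
# Peek at the first queued event without removing it
#/usr/local/redis/bin/redis-cli -h 10.0.18.146 lrange logstash-redis 0 0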
1. Install the JDK
#rpm -ivh jdk-8u92-linux-x64.rpm
Preparing...                ########################################### [100%]
   1:jdk1.8.0_92            ########################################### [100%]
Unpacking JAR files...
        tools.jar...
        plugin.jar...
        javaws.jar...
        deploy.jar...
        rt.jar...
        jsse.jar...
        charsets.jar...
        localedata.jar...
2. Install logstash
Configure the yum repository:
#vim /etc/yum.repos.d/logstash.repo
[logstash-2.3]
name=Logstash repository for 2.3.x packages
baseurl=https://packages.elastic.co/logstash/2.3/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Install logstash:
#yum install logstash -y
Configure the logstash server
The configuration file is as follows:
#cd /etc/logstash/conf.d
#vim logstash_server.conf
input {
    redis {
        port => "6379"
        host => "10.0.18.146"
        data_type => "list"
        key => "logstash-redis"
        type => "redis-input"
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
Check the syntax:
#/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash_server.conf --configtest
Configuration OK
Once the syntax is OK, run it and check how the nginx logs are being collected, as follows:
#/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash_server.conf
Settings: Default pipeline workers: 1
Pipeline main started
{
       "message" => "10.0.90.8 - - [26/Aug/2016:15:42:01 +0800] \"GET /favicon.ico HTTP/1.1\" 404 571 \"-\" \"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/537.36\" \"-\" 263 1472197321.350 1 0.000",
      "@version" => "1",
    "@timestamp" => "2016-08-26T08:45:25.214Z",
          "path" => "/var/log/nginx/access.log",
          "host" => "0.0.0.0",
          "type" => "nginx_log"
}
{
       "message" => "10.0.90.8 - - [26/Aug/2016:16:40:53 +0800] \"GET / HTTP/1.1\" 200 616 \"-\" \"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36\" \"-\" 374 1472200853.324 1 0.000",
      "@version" => "1",
    "@timestamp" => "2016-08-26T08:45:25.331Z",
          "path" => "/var/log/nginx/access.log",
          "host" => "0.0.0.0",
          "type" => "nginx_log"
}
{
       "message" => "10.0.90.8 - - [26/Aug/2016:16:40:53 +0800] \"GET /favicon.ico HTTP/1.1\" 404 571 \"http://10.0.18.144/\" \"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36\" \"-\" 314 1472200853.486 2 0.000",
      "@version" => "1",
    "@timestamp" => "2016-08-26T08:45:25.332Z",
          "path" => "/var/log/nginx/access.log",
          "host" => "0.0.0.0",
          "type" => "nginx_log"
}
{
       "message" => "10.0.90.8 - - [26/Aug/2016:16:42:05 +0800] \"GET / HTTP/1.1\" 304 0 \"-\" \"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36\" \"-\" 481 1472200925.259 1 0.000",
      "@version" => "1",
    "@timestamp" => "2016-08-26T08:45:25.332Z",
          "path" => "/var/log/nginx/access.log",
          "host" => "0.0.0.0",
          "type" => "nginx_log"
}
{
       "message" => "10.0.90.9 - - [26/Aug/2016:16:47:35 +0800] \"GET / HTTP/1.1\" 200 616 \"-\" \"Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko\" \"-\" 298 1472201255.813 1 0.000",
      "@version" => "1",
    "@timestamp" => "2016-08-26T08:47:36.623Z",
          "path" => "/var/log/nginx/access.log",
          "host" => "0.0.0.0",
          "type" => "nginx_log"
}
{
       "message" => "10.0.90.9 - - [26/Aug/2016:16:47:42 +0800] \"GET /favicon.ico HTTP/1.1\" 404 169 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64; Trident/7.0; rv:11.0) like Gecko\" \"-\" 220 1472201262.653 1 0.000",
      "@version" => "1",
    "@timestamp" => "2016-08-26T08:47:43.649Z",
          "path" => "/var/log/nginx/access.log",
          "host" => "0.0.0.0",
          "type" => "nginx_log"
}
{
       "message" => "10.0.90.8 - - [26/Aug/2016:16:48:09 +0800] \"GET / HTTP/1.1\" 200 616 \"-\" \"Mozilla/5.0 (Windows; U; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727; BIDUBrowser 8.4)\" \"-\" 237 1472201289.662 1 0.000",
      "@version" => "1",
    "@timestamp" => "2016-08-26T08:48:09.684Z",
          "path" => "/var/log/nginx/access.log",
          "host" => "0.0.0.0",
          "type" => "nginx_log"
}
…………………………
Note: output does not appear immediately after running this command; wait a moment, or refresh the nginx pages of 144 and 145 in a browser (or access 144/145 from another machine on the same subnet), and entries like the above will show up.
3. Modify the logstash configuration file so that the collected data is written to the ES cluster
#vim /etc/logstash/conf.d/logstash_server.conf
input {
    redis {
        port => "6379"
        host => "10.0.18.146"
        data_type => "list"
        key => "logstash-redis"
        type => "redis-input"
    }
}
output {
    elasticsearch {
        hosts => "10.0.18.149"                  #one of the ES servers
        index => "nginx-log-%{+YYYY.MM.dd}"     #the index name we define; it is used again later
    }
}
Start logstash:
#service logstash start
logstash started.
Check the logstash server process:
#ps -ef | grep logstash
logstash  1740     1 24 17:24 pts/0    00:00:25 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/var/lib/logstash -Xmx1g -Xss2048k -Djffi.boot.library.path=/opt/logstash/vendor/jruby/lib/jni -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/var/lib/logstash -XX:HeapDumpPath=/opt/logstash/heapdump.hprof -Xbootclasspath/a:/opt/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/opt/logstash/vendor/jruby -Djruby.lib=/opt/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /opt/logstash/lib/bootstrap/environment.rb logstash/runner.rb agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log
root      1783  1147  0 17:25 pts/0    00:00:00 grep logstash
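Pointing the elasticsearch output above at a single ES node makes that node a single point of failure for indexing. The output also accepts a list of hosts, so a variant like the following (a sketch, not part of the original setup) spreads requests across the cluster:
output {
    elasticsearch {
        hosts => ["10.0.18.148", "10.0.18.149", "10.0.18.150"]
        index => "nginx-log-%{+YYYY.MM.dd}"
    }
}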
Install the JDK and Elasticsearch on the three ES servers 10.0.18.148, 10.0.18.149, and 10.0.18.150. The JDK installation is the same as above, so it is not repeated here.
1. Add an elasticsearch user, because Elasticsearch must be started under a non-root account.
#adduser elasticsearch
#passwd elasticsearch    #set a password for the user
#su - elasticsearch
Download the Elasticsearch package:
$wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.3.4/elasticsearch-2.3.4.tar.gz
$tar xf elasticsearch-2.3.4.tar.gz
$cd elasticsearch-2.3.4
Append the following to the end of the elasticsearch configuration file:
#vim config/elasticsearch.yml
cluster.name: serverlog        #cluster name, can be customized
node.name: node-1              #node name, can also be customized
path.data: /home/elasticsearch/elasticsearch-2.3.4/data        #data path
path.logs: /home/elasticsearch/elasticsearch-2.3.4/logs        #log path
network.host: 10.0.18.148      #node IP
http.port: 9200                #node HTTP port
discovery.zen.ping.unicast.hosts: ["10.0.18.149","10.0.18.150"]    #list of the other cluster node IPs
discovery.zen.minimum_master_nodes: 3    #number of cluster nodes
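The other two nodes get the same file with only the node-specific values changed; a sketch for node-2 (assuming the same install path, with the unicast list pointing at the remaining nodes):
cluster.name: serverlog
node.name: node-2
path.data: /home/elasticsearch/elasticsearch-2.3.4/data
path.logs: /home/elasticsearch/elasticsearch-2.3.4/logs
network.host: 10.0.18.149
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.0.18.148","10.0.18.150"]
discovery.zen.minimum_master_nodes: 3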
Start the service:
$cd elasticsearch-2.3.4
$./bin/elasticsearch -d
Check the process:
$ps -ef | grep elasticsearch
root      1550  1147  0 17:44 pts/0    00:00:00 su - elasticsearch
500       1592     1  4 17:56 pts/0    00:00:13 /usr/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/home/elasticsearch/elasticsearch-2.3.4 -cp /home/elasticsearch/elasticsearch-2.3.4/lib/elasticsearch-2.3.4.jar:/home/elasticsearch/elasticsearch-2.3.4/lib/* org.elasticsearch.bootstrap.Elasticsearch start -d
500       1649  1551  0 18:00 pts/0    00:00:00 grep elasticsearch
Check the ports:
$netstat -tunlp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address              Foreign Address    State      PID/Program name
tcp   0      0      0.0.0.0:22                 0.0.0.0:*          LISTEN     -
tcp   0      0      127.0.0.1:25               0.0.0.0:*          LISTEN     -
tcp   0      0      ::ffff:10.0.18.148:9300    :::*               LISTEN     1592/java
tcp   0      0      :::22                      :::*               LISTEN     -
tcp   0      0      ::1:25                     :::*               LISTEN     -
tcp   0      0      ::ffff:10.0.18.148:9200    :::*               LISTEN     1592/java
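Before digging into the logs, a quick sanity check that the HTTP port answers can save time; a minimal sketch against node-1:
# Basic node banner; should report the serverlog cluster name and version
#curl http://10.0.18.148:9200/?pretty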
Two ports are opened: 9200 serves HTTP/REST requests, and 9300 is the transport port used for node-to-node communication within the cluster, including master election.
After startup, check the logs on the three Elasticsearch nodes and you will see which master node the "election" produced.
First node: 10.0.18.148
$tail -f logs/serverlog.log
…………………………
[2016-08-26 17:56:05,771][INFO ][env] [node-1] heap size [1015.6mb], compressed ordinary object pointers [true]
[2016-08-26 17:56:05,774][WARN ][env] [node-1] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-08-26 17:56:09,416][INFO ][node] [node-1] initialized
[2016-08-26 17:56:09,416][INFO ][node] [node-1] starting ...
[2016-08-26 17:56:09,594][INFO ][transport] [node-1] publish_address {10.0.18.148:9300}, bound_addresses {10.0.18.148:9300}
[2016-08-26 17:56:09,611][INFO ][discovery] [node-1] serverlog/py6UOr4rRCCuK3KjA-Aj-Q
[2016-08-26 17:56:39,622][WARN ][discovery] [node-1] waited for 30s and no initial state was set by the discovery
[2016-08-26 17:56:39,633][INFO ][http] [node-1] publish_address {10.0.18.148:9200}, bound_addresses {10.0.18.148:9200}
[2016-08-26 17:56:39,633][INFO ][node] [node-1] started
[2016-08-26 17:59:33,303][INFO ][cluster.service] [node-1] detected_master {node-2}{k0vpt0khTOG0Kmen8EepAg}{10.0.18.149}{10.0.18.149:9300}, added {{node-3}{lRKjIPpFSd-_NVn7-0-JeA}{10.0.18.150}{10.0.18.150:9300},{node-2}{k0vpt0khTOG0Kmen8EepAg}{10.0.18.149}{10.0.18.149:9300},}, reason: zen-disco-receive(from master [{node-2}{k0vpt0khTOG0Kmen8EepAg}{10.0.18.149}{10.0.18.149:9300}])
You can see that node-2, i.e. 10.0.18.149, was automatically "elected" as the master node.
Second node: 10.0.18.149
$tail -f logs/serverlog.log
……………………
[2016-08-26 17:58:20,854][WARN ][bootstrap] unable to install syscall filter: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
[2016-08-26 17:58:21,480][INFO ][node] [node-2] version[2.3.4], pid[1552], build[e455fd0/2016-06-30T11:24:31Z]
[2016-08-26 17:58:21,491][INFO ][node] [node-2] initializing ...
[2016-08-26 17:58:22,537][INFO ][plugins] [node-2] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-08-26 17:58:22,574][INFO ][env] [node-2] using [1] data paths, mounts [[/ (/dev/mapper/vg_template-lv_root)]], net usable_space [14.9gb], net total_space [17.1gb], spins? [possibly], types [ext4]
[2016-08-26 17:58:22,575][INFO ][env] [node-2] heap size [1015.6mb], compressed ordinary object pointers [true]
[2016-08-26 17:58:22,578][WARN ][env] [node-2] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-08-26 17:58:26,437][INFO ][node] [node-2] initialized
[2016-08-26 17:58:26,440][INFO ][node] [node-2] starting ...
[2016-08-26 17:58:26,783][INFO ][transport] [node-2] publish_address {10.0.18.149:9300}, bound_addresses {10.0.18.149:9300}
[2016-08-26 17:58:26,815][INFO ][discovery] [node-2] serverlog/k0vpt0khTOG0Kmen8EepAg
[2016-08-26 17:58:56,838][WARN ][discovery] [node-2] waited for 30s and no initial state was set by the discovery
[2016-08-26 17:58:56,853][INFO ][http] [node-2] publish_address {10.0.18.149:9200}, bound_addresses {10.0.18.149:9200}
[2016-08-26 17:58:56,854][INFO ][node] [node-2] started
[2016-08-26 17:59:33,130][INFO ][cluster.service] [node-2] new_master {node-2}{k0vpt0khTOG0Kmen8EepAg}{10.0.18.149}{10.0.18.149:9300}, added {{node-1}{py6UOr4rRCCuK3KjA-Aj-Q}{10.0.18.148}{10.0.18.148:9300},{node-3}{lRKjIPpFSd-_NVn7-0-JeA}{10.0.18.150}{10.0.18.150:9300},}, reason: zen-disco-join(elected_as_master, [2] joins received)
[2016-08-26 17:59:33,686][INFO ][gateway] [node-2] recovered [0] indices into cluster_state
Here too, node-2 (10.0.18.149) is shown as the automatically "elected" master node.
Third node: 10.0.18.150
$tail -f logs/serverlog.log
…………………………
[2016-08-26 17:59:25,644][INFO ][node] [node-3] initializing ...
[2016-08-26 17:59:26,652][INFO ][plugins] [node-3] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-08-26 17:59:26,689][INFO ][env] [node-3] using [1] data paths, mounts [[/ (/dev/mapper/vg_template-lv_root)]], net usable_space [14.9gb], net total_space [17.1gb], spins? [possibly], types [ext4]
[2016-08-26 17:59:26,689][INFO ][env] [node-3] heap size [1015.6mb], compressed ordinary object pointers [true]
[2016-08-26 17:59:26,693][WARN ][env] [node-3] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-08-26 17:59:30,398][INFO ][node] [node-3] initialized
[2016-08-26 17:59:30,398][INFO ][node] [node-3] starting ...
[2016-08-26 17:59:30,549][INFO ][transport] [node-3] publish_address {10.0.18.150:9300}, bound_addresses {10.0.18.150:9300}
[2016-08-26 17:59:30,564][INFO ][discovery] [node-3] serverlog/lRKjIPpFSd-_NVn7-0-JeA
[2016-08-26 17:59:33,924][INFO ][cluster.service] [node-3] detected_master {node-2}{k0vpt0khTOG0Kmen8EepAg}{10.0.18.149}{10.0.18.149:9300}, added {{node-1}{py6UOr4rRCCuK3KjA-Aj-Q}{10.0.18.148}{10.0.18.148:9300},{node-2}{k0vpt0khTOG0Kmen8EepAg}{10.0.18.149}{10.0.18.149:9300},}, reason: zen-disco-receive(from master [{node-2}{k0vpt0khTOG0Kmen8EepAg}{10.0.18.149}{10.0.18.149:9300}])
[2016-08-26 17:59:33,999][INFO ][http] [node-3] publish_address {10.0.18.150:9200}, bound_addresses {10.0.18.150:9200}
[2016-08-26 17:59:34,000][INFO ][node] [node-3] started
Again, node-2 (10.0.18.149) is shown as the automatically "elected" master node.
2. Checking other cluster information
Check the cluster health:
#curl -XGET 'http://10.0.18.148:9200/_cluster/health?pretty'
{
  "cluster_name" : "serverlog",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
3. Check the nodes
#curl -XGET 'http://10.0.18.148:9200/_cat/nodes?v'
host        ip          heap.percent ram.percent load node.role master name
10.0.18.148 10.0.18.148            7          51 0.00 d         m      node-1
10.0.18.150 10.0.18.150            5          50 0.00 d         m      node-3
10.0.18.149 10.0.18.149            7          51 0.00 d         *      node-2
Note: * marks the current master node.
4. Check the index/shard information
#curl -XGET 'http://10.0.18.148:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
No shard information is listed yet; the reason is explained later.
5. Install plugins on the three Elasticsearch nodes, as follows:
#su - elasticsearch
$cd elasticsearch-2.3.4
$./bin/plugin install license    #license plugin
-> Installing license...
Trying https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/license/2.3.4/license-2.3.4.zip ...
Downloading .......DONE
Verifying https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/license/2.3.4/license-2.3.4.zip checksums if available ...
Downloading .DONE
Installed license into /home/elasticsearch/elasticsearch-2.3.4/plugins/license
$./bin/plugin install marvel-agent    #marvel-agent plugin
-> Installing marvel-agent...
Trying https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/marvel-agent/2.3.4/marvel-agent-2.3.4.zip ...
Downloading ..........DONE
Verifying https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/marvel-agent/2.3.4/marvel-agent-2.3.4.zip checksums if available ...
Downloading .DONE
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@     WARNING: plugin requires additional permissions     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.lang.RuntimePermission setFactory
* javax.net.ssl.SSLPermission setHostnameVerifier
See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
for descriptions of what these permissions allow and the associated risks.
Continue with installation? [y/N]y    #enter y to approve installing this plugin
Installed marvel-agent into /home/elasticsearch/elasticsearch-2.3.4/plugins/marvel-agent
$./bin/plugin install mobz/elasticsearch-head    #install the head plugin
-> Installing mobz/elasticsearch-head...
Trying https://github.com/mobz/elasticsearch-head/archive/master.zip ...
Downloading ..........DONE
Verifying https://github.com/mobz/elasticsearch-head/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed head into /home/elasticsearch/elasticsearch-2.3.4/plugins/head
Install the bigdesk plugin:
$cd plugins/
$mkdir bigdesk
$cd bigdesk
$git clone https://github.com/lukas-vlcek/bigdesk _site
Initialized empty Git repository in /home/elasticsearch/elasticsearch-2.3.4/plugins/bigdesk/_site/.git/
remote: Counting objects: 5016, done.
remote: Total 5016 (delta 0), reused 0 (delta 0), pack-reused 5016
Receiving objects: 100% (5016/5016), 17.80 MiB | 1.39 MiB/s, done.
Resolving deltas: 100% (1860/1860), done.
Modify the _site/js/store/BigdeskStore.js file, around line 142; change:
return (major == 1 && minor >= 0 && maintenance >= 0 && (build != 'Beta1' || build != 'Beta2'));
to:
return (major >= 1 && minor >= 0 && maintenance >= 0 && (build != 'Beta1' || build != 'Beta2'));
Add the plugin's properties file:
$cat >plugin-descriptor.properties<<EOF
description=bigdesk - Live charts and statistics for Elasticsearch cluster.
version=2.5.1
site=true
name=bigdesk
EOF
Install the kopf plugin:
$./bin/plugin install lmenezes/elasticsearch-kopf
-> Installing lmenezes/elasticsearch-kopf...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip ...
Downloading ..........DONE
Verifying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed kopf into /home/elasticsearch/elasticsearch-2.3.4/plugins/kopf
List the installed plugins, as follows:
$cd elasticsearch-2.3.4
$./bin/plugin list
Installed plugins in /home/elasticsearch/elasticsearch-2.3.4/plugins:
    - head
    - license
    - bigdesk
    - marvel-agent
    - kopf
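The site plugins (head, bigdesk, kopf) are served by Elasticsearch itself once the node is running; they can be opened in a browser at URLs of the following form (a sketch, using node-1's address; any node with the plugin installed should work):
http://10.0.18.148:9200/_plugin/head/
http://10.0.18.148:9200/_plugin/bigdesk/
http://10.0.18.148:9200/_plugin/kopf/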
Note: kibana is installed on the 10.0.18.150 server.
1. Configure the yum repository
#vi /etc/yum.repos.d/kibana.repo
[kibana-4.5]
name=Kibana repository for 4.5.x packages
baseurl=http://packages.elastic.co/kibana/4.5/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Install kibana:
#yum install kibana -y
Check kibana:
#rpm -qa kibana
kibana-4.5.4-1.x86_64
Note: kibana installed via yum goes into the /opt directory by default.
2. Install the Marvel plugin
#cd /opt/kibana/bin
#./kibana plugin --install elasticsearch/marvel/latest
Installing marvel
Attempting to transfer from https://download.elastic.co/elasticsearch/marvel/marvel-latest.tar.gz
Transferring 2421607 bytes....................
Transfer complete
Extracting plugin archive
Extraction complete
Optimizing and caching browser bundles...
Plugin installation complete
3. Modify the kibana configuration file
# vim /opt/kibana/config/kibana.yml    #change the following 3 parameters
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://10.0.18.150:9200"
4. Start kibana
#service kibana start
kibana started
Check the process:
#ps -ef | grep kibana
kibana    2050     1 12 20:40 pts/0    00:00:03 /opt/kibana/bin/../node/bin/node /opt/kibana/bin/../src/cli
root      2075  1149  0 20:40 pts/0    00:00:00 grep kibana
Enable start on boot:
#chkconfig --add kibana
#chkconfig kibana on
Check the listening port:
#netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address              Foreign Address    State      PID/Program name
tcp   0      0      0.0.0.0:22                 0.0.0.0:*          LISTEN     1025/sshd
tcp   0      0      127.0.0.1:25               0.0.0.0:*          LISTEN     1103/master
tcp   0      0      0.0.0.0:5601               0.0.0.0:*          LISTEN     2050/node    #started successfully
tcp   0      0      ::ffff:10.0.18.150:9300    :::*               LISTEN     1547/java
tcp   0      0      :::22                      :::*               LISTEN     1025/sshd
tcp   0      0      ::1:25                     :::*               LISTEN     1103/master
tcp   0      0      ::ffff:10.0.18.150:9200    :::*               LISTEN     1547/java
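The UI in the next step is reached on that port; a minimal command-line sketch to confirm kibana answers before opening the browser (the URL is assumed from the host and port configured above):
# Print the HTTP status code returned by kibana
#curl -s -o /dev/null -w '%{http_code}\n' http://10.0.18.150:5601/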
Access the kibana port in a browser and create an index pattern, as follows:
The index name in the red box is the index name configured in the logstash server's configuration file, yet it could not be created; the message "Unable to fetch mapping……" means Elasticsearch has not received any data under that index name. Working through the logs step by step, the logstash logs on 10.0.18.144 and 10.0.18.145 finally showed the following errors:
#tail logstash.log
{:timestamp=>"2016-08-26T20:33:28.404000+0800", :message=>"failed to open /var/log/nginx/access.log: Permission denied - /var/log/nginx/access.log", :level=>:warn}
{:timestamp=>"2016-08-26T20:38:29.110000+0800", :message=>"failed to open /var/log/nginx/access.log: Permission denied - /var/log/nginx/access.log", :level=>:warn}
{:timestamp=>"2016-08-26T20:43:30.834000+0800", :message=>"failed to open /var/log/nginx/access.log: Permission denied - /var/log/nginx/access.log", :level=>:warn}
{:timestamp=>"2016-08-26T20:48:31.559000+0800", :message=>"failed to open /var/log/nginx/access.log: Permission denied - /var/log/nginx/access.log", :level=>:warn}
{:timestamp=>"2016-08-26T20:53:32.298000+0800", :message=>"failed to open /var/log/nginx/access.log: Permission denied - /var/log/nginx/access.log", :level=>:warn}
{:timestamp=>"2016-08-26T20:58:33.028000+0800", :message=>"failed to open /var/log/nginx/access.log: Permission denied - /var/log/nginx/access.log", :level=>:warn}
On both nginx servers, run:
#chmod 755 /var/log/nginx/access.log
Refresh the kibana page again and create the index pattern nginx-log-*; this time it works, as follows:
Click the green "Create" button and the index pattern is created. Then open "Discover" in the kibana UI and you will see the collected nginx logs, as follows:
You can see that log data has already been collected.
5. Access head and check that the cluster looks consistent, as shown below:
6. Access bigdesk and view its information, as shown below:
The figure also marks node-2 as the master node (star icon); the data it shows refreshes continuously.
7. Access kopf and view its information, as shown below:
Earlier, checking the index/shard information returned nothing (the cluster had just been set up and no index had been created yet, so there was no shard information). Testing again now, the data is there, as shown below:
#curl -XGET '10.0.18.148:9200/_cat/indices?v'
health status index                pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana                1   1          3            0     45.2kb         23.9kb
green  open   nginx-log-2016.08.26   5   1        222            0    549.7kb        272.4kb
8. In the kibana UI you can view the nginx log data collected under the nginx-log-* index, and also some cluster information from the Elasticsearch index .marvel-es-1-*, as shown below:
1. About the kibana port
Anyone who has configured kibana knows its default port is 5601. I wanted to change it to 80, but kibana then failed to start with the following errors:
#cat /var/log/kibana/kibana.stderr
FATAL { [Error: listen EACCES 0.0.0.0:80]
  cause:
   { [Error: listen EACCES 0.0.0.0:80]
     code: 'EACCES',
     errno: 'EACCES',
     syscall: 'listen',
     address: '0.0.0.0',
     port: 80 },
  isOperational: true,
  code: 'EACCES',
  errno: 'EACCES',
  syscall: 'listen',
  address: '0.0.0.0',
  port: 80 }
FATAL { [Error: listen EACCES 10.0.18.150:80]
  cause:
   { [Error: listen EACCES 10.0.18.150:80]
     code: 'EACCES',
     errno: 'EACCES',
     syscall: 'listen',
     address: '10.0.18.150',
     port: 80 },
  isOperational: true,
  code: 'EACCES',
  errno: 'EACCES',
  syscall: 'listen',
  address: '10.0.18.150',
  port: 80 }
#tail /var/log/kibana/kibana.stdout
{"type":"log","@timestamp":"2016-08-29T02:54:21+00:00","tags":["fatal"],"pid":3217,"level":"fatal","message":"listen EACCES 10.0.18.150:80","error":{"message":"listen EACCES 10.0.18.150:80","name":"Error","stack":"Error: listen EACCES 10.0.18.150:80\n    at Object.exports._errnoException (util.js:873:11)\n    at exports._exceptionWithHostPort (util.js:896:20)\n    at Server._listen2 (net.js:1237:19)\n    at listen (net.js:1286:10)\n    at net.js:1395:9\n    at nextTickCallbackWith3Args (node.js:453:9)\n    at process._tickDomainCallback (node.js:400:17)","code":"EACCES"}}
I did not find a workaround at the time, so the port went back to the default 5601. (The EACCES error itself is expected: kibana runs as the non-root kibana user, and binding to ports below 1024 requires root privileges.)
2. About the nginx log permissions
This lab uses nginx 1.10.1 installed via yum. At first the nginx log file permissions prevented logstash from reading the log; changing the file's permissions to 755 fixed that. However, nginx logs are rotated daily by logrotate, and newly created log files again get 640 permissions, so log collection would break again after rotation. The fix is to modify nginx's default logrotate file:
#cd /etc/logrotate.d
#cat nginx    #the default is as follows
/var/log/nginx/*.log {
        daily
        missingok
        rotate 52
        compress
        delaycompress
        notifempty
        create 640 nginx adm    #the default mode is 640, with owner nginx and group adm
        sharedscripts
        postrotate
                [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
        endscript
}
After modification:
/var/log/nginx/*.log {
        daily
        missingok
        rotate 52
        compress
        delaycompress
        notifempty
        create 755 nginx nginx    #changed to 755, with owner and group both nginx
        sharedscripts
        postrotate
                [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
        endscript
}
Then restart nginx; logs created by logrotate from now on will have 755 permissions.
3. About Marvel
Note: Marvel is the monitoring tool for an Elasticsearch cluster. The official description:
Marvel is the best way to monitor your Elasticsearch cluster and provide actionable insights to help you get the most out of your cluster. It is free to use in both development and production.
Problem: after the Elasticsearch cluster was up, opening Marvel in a browser to view the monitoring data produced an error page with no monitoring information, roughly of the "no data" kind. Going through the logs of the three Elasticsearch nodes showed some errors I could not pin down, so I restarted the elasticsearch service on all three nodes; after that the Marvel monitoring page was fine, as shown below:
You can see serverlog, the cluster name configured earlier; click it to drill down, as shown below:
4. About the index/shard information
During this lab, the first check of the shard information returned nothing because no index had been created yet. Once indices had been created, their information showed up, but the cluster's own indices were still missing; the cause was most likely the same as the Marvel issue above (something wrong in Elasticsearch), and after the restart the following could be seen:
Check the index/shard information:
#curl -XGET '10.0.18.148:9200/_cat/indices?v'
health status index                    pri rep docs.count docs.deleted store.size pri.store.size
green  open   nginx-log-2016.08.29       5   1       2374            0      1.7mb        902.9kb
green  open   nginx-log-2016.08.27       5   1       2323            0        1mb        528.6kb
green  open   .marvel-es-data-1          1   1          5            3     17.6kb          8.8kb
green  open   .kibana                    1   1          3            0     45.2kb         21.3kb
green  open   .marvel-es-1-2016.08.29    1   1      16666          108     12.1mb          6.1mb
green  open   nginx-log-2016.08.26       5   1       1430            0    800.4kb        397.8kb
5. Creating multiple indices to store different types of logs
nginx is probably not the only log we want to collect and analyze; there are also httpd, tomcat, mysql and other logs. If everything lands in the single nginx-log-* index it becomes messy and hard to troubleshoot; creating one index per log type is much clearer. This is done by creating multiple conf files on the logstash server and starting them one by one, as follows:
#cd /etc/logstash/conf.d/
#cat logstash_server.conf
input {
    redis {
        port => "6379"
        host => "10.0.18.146"
        data_type => "list"
        key => "logstash-redis"
        type => "redis-input"
    }
}
output {
    elasticsearch {
        hosts => "10.0.18.149"
        index => "nginx-log-%{+YYYY.MM.dd}"
    }
}
#cat logstash_server1.conf
input {
    redis {
        port => "6379"
        host => "10.0.18.146"
        data_type => "list"
        key => "logstash-redisa"
        type => "redis-input"
    }
}
output {
    elasticsearch {
        hosts => "10.0.18.149"
        index => "httpd-log-%{+YYYY.MM.dd}"
    }
}
For any other log type, copy the pattern of the conf files above; only the index name and the key differ.
Then start them one by one:
#nohup /opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash_server.conf &
#nohup /opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash_server1.conf &
Then configure a conf file on the corresponding log server (the client) itself, as follows:
#cat /etc/logstash/conf.d/logstash-web.conf
input {
    file {
        path => ["/var/log/httpd/access_log"]
        type => "httpd_log"            #type
        start_position => "beginning"
    }
}
output {
    redis {
        host => "10.0.18.146"
        key => 'logstash-redisa'       #key
        data_type => 'list'
    }
}
Then start the logstash service, create the new index pattern httpd-log-* in the kibana UI, and the collected httpd logs can be viewed under that index.
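An alternative to running one logstash server process per log type is a single server config that routes events on their type field with conditionals; a rough sketch (not part of the original setup, with key and type names assumed from the examples above):
input {
    redis { port => "6379" host => "10.0.18.146" data_type => "list" key => "logstash-redis" }
    redis { port => "6379" host => "10.0.18.146" data_type => "list" key => "logstash-redisa" }
}
output {
    if [type] == "httpd_log" {
        elasticsearch { hosts => "10.0.18.149" index => "httpd-log-%{+YYYY.MM.dd}" }
    } else {
        elasticsearch { hosts => "10.0.18.149" index => "nginx-log-%{+YYYY.MM.dd}" }
    }
}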
6. elasticsearch warns at startup that the maximum number of open files is too low
After the ELK cluster was built, starting elasticsearch produced the following warning:
[WARN ][env] [node-1] max file descriptors [65535] for elasticsearch process likely too low, consider increasing to at least [65536]
So edit /etc/security/limits.conf and add the following:
* soft nofile 65536
* hard nofile 65536
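The new limit only applies to sessions opened after the change; a minimal sketch to verify it before starting elasticsearch again (assuming the elasticsearch user logs in afresh):
#su - elasticsearch
$ulimit -n    # should now print 65536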