Configuring an ELK + Redis Log Management System for Nginx

ELK Overview

The ELK Stack is Elasticsearch + Logstash + Kibana. Log monitoring and analysis play an important role in keeping a service running reliably. Take nginx as an example: nginx records the status of every request in its log files, so traffic can be analyzed by reading those files. Redis's list structure works well as a queue, buffering the log data that Logstash ships; Elasticsearch then handles indexing, analysis, and querying.

This article builds a distributed log collection and analysis system. Logstash plays two roles: agent and indexer. An agent runs on each web machine and continuously tails the nginx log file; whenever it reads a new log entry, it ships the entry to a Redis queue on the network. Several Logstash indexers then pull these unprocessed entries off the queue and parse them, storing the results in Elasticsearch for search and analysis. Finally, a single Kibana instance provides the web UI for browsing the logs [3].
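Before wiring up the real services, the queue mechanics are easy to picture. A minimal Python sketch, using a deque to stand in for the Redis list (the exact Redis commands Logstash uses are an assumption here; only the FIFO behavior matters):

```python
import json
from collections import deque

# A plain deque standing in for the Redis list "logstash:redis".
# Assumption: the shipper appends with RPUSH and the indexer takes with
# (B)LPOP, so the list behaves as a FIFO queue between the two roles.
queue = deque()

def rpush(q, event):
    """Shipper side: append a JSON-encoded event to the tail."""
    q.append(json.dumps(event))

def lpop(q):
    """Indexer side: take the oldest event from the head."""
    return json.loads(q.popleft())

rpush(queue, {"message": "GET / 200", "type": "nginx access log"})
rpush(queue, {"message": "GET /favicon.ico 404", "type": "nginx access log"})

first = lpop(queue)
print(first["message"])  # the oldest event comes out first
```

Because each agent only appends and each indexer only pops, several indexers can drain the same list concurrently without coordination, which is what makes Redis a convenient broker here.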

For this test I use two machines: hadoop-master runs nginx and a logstash agent (installed from the source tarball); hadoop-slave runs a logstash agent, elasticsearch, redis, and nginx.
The nginx logs of both machines are analyzed together; see the documentation for details. The following records the configuration process for collecting and analyzing logs with ELK + Redis, drawing on the official documentation and earlier articles.

 

System Environment

Hosts

hadoop-master	192.168.186.128 # logstash agent (source install), nginx
hadoop-slave	192.168.186.129 # logstash agent + indexer, elasticsearch, redis, nginx, kibana

System Information

[root@hadoop-slave ~]# java -version # Elasticsearch is written in Java and needs a JDK; JDK 1.8 is installed here
java version "1.8.0_20"
Java(TM) SE Runtime Environment (build 1.8.0_20-b26)
Java HotSpot(TM) 64-Bit Server VM (build 25.20-b23, mixed mode)
[root@hadoop-slave ~]# cat /etc/issue
CentOS release 6.4 (Final)
Kernel \r on an \m

Installing Redis

[root@hadoop-slave ~]# wget https://github.com/antirez/redis/archive/2.8.20.tar.gz
[root@hadoop-slave ~]# tar -zxf 2.8.20.tar.gz
[root@hadoop-slave ~]# mv redis-2.8.20/ /usr/local/src/
[root@hadoop-slave src]# cd redis-2.8.20/
[root@hadoop-slave src]# make

After make completes, the executables (redis-server, redis-cli, etc.) are generated in the src subdirectory of the current directory.
Next we create a home for Redis under /usr/local/, with directories for data, configuration, and runtime files.

[root@hadoop-slave local]# mkdir /usr/local/redis/{conf,run,db} -pv
[root@hadoop-slave local]# cd /usr/local/src/redis-2.8.20/
[root@hadoop-slave redis-2.8.20]# cp redis.conf /usr/local/redis/conf/
[root@hadoop-slave redis-2.8.20]# cd src/
[root@hadoop-slave src]# cp redis-benchmark redis-check-aof redis-check-dump redis-cli redis-server mkreleasehdr.sh /usr/local/redis/

 

Redis is now installed.
Let's start it and check that the port is listening:

[root@hadoop-slave src]# /usr/local/redis/redis-server /usr/local/redis/conf/redis.conf & # the & sends it to the background
[root@hadoop-slave redis]# netstat -antulp | grep 6379
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 72669/redis-server
tcp 0 0 :::6379 :::* LISTEN 72669/redis-server

 

It starts cleanly, OK!

Installing Elasticsearch

By default, Elasticsearch serves HTTP on port 9200 and uses TCP port 9300 for inter-node communication; make sure both TCP ports are open.

Installation

Download the latest tarball from the official site.
Search & Analyze in Real Time: Elasticsearch is a distributed, open source search and analytics engine, designed for horizontal scalability, reliability, and easy management.

[root@hadoop-slave ~]# wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.tar.gz
[root@hadoop-slave ~]# mkdir /usr/local/elk
[root@hadoop-slave ~]# tar zxf elasticsearch-1.7.1.tar.gz -C /usr/local/elk/
[root@hadoop-slave bin]# ln -s /usr/local/elk/elasticsearch-1.7.1/bin/elasticsearch /usr/bin
[root@hadoop-slave bin]# elasticsearch start
[2015-08-17 20:49:21,566][INFO ][node ] [Eliminator] version[1.7.1], pid[5828], build[b88f43f/2015-07-29T09:54:16Z]
[2015-08-17 20:49:21,585][INFO ][node ] [Eliminator] initializing ...
[2015-08-17 20:49:21,870][INFO ][plugins ] [Eliminator] loaded [], sites []
[2015-08-17 20:49:22,101][INFO ][env ] [Eliminator] using [1] data paths, mounts [[/ (/dev/sda2)]], net usable_space [27.9gb], net total_space [37.1gb], types [ext4]
[2015-08-17 20:50:08,097][INFO ][node ] [Eliminator] initialized
[2015-08-17 20:50:08,099][INFO ][node ] [Eliminator] starting ...
[2015-08-17 20:50:08,593][INFO ][transport ] [Eliminator] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.186.129:9300]}
[2015-08-17 20:50:08,764][INFO ][discovery ] [Eliminator] elasticsearch/XbpOYtsYQbO-6kwawxd7nQ
[2015-08-17 20:50:12,648][INFO ][cluster.service ] [Eliminator] new_master [Eliminator][XbpOYtsYQbO-6kwawxd7nQ][hadoop-slave][inet[/192.168.186.129:9300]], reason: zen-disco-join (elected_as_master)
[2015-08-17 20:50:12,683][INFO ][http ] [Eliminator] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.186.129:9200]}
[2015-08-17 20:50:12,683][INFO ][node ] [Eliminator] started
[2015-08-17 20:50:12,771][INFO ][gateway ] [Eliminator] recovered [0] indices into cluster_state
# use the -d flag to run it in the background: elasticsearch start -d

 

Testing

A 200 status code in the response means everything is OK.

[root@hadoop-slave ~]# elasticsearch start -d
[root@hadoop-slave ~]# curl -X GET http://localhost:9200
{
  "status" : 200,
  "name" : "Wasp",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.7.1",
    "build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
    "build_timestamp" : "2015-07-29T09:54:16Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
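Because the response is plain JSON, this health check is easy to script. A small sketch that parses the response body shown above (copied verbatim, minus some build fields) and treats status 200 plus a version number as healthy:

```python
import json

# The JSON body returned by GET http://localhost:9200, as shown above.
response_body = '''{
  "status" : 200,
  "name" : "Wasp",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.7.1",
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}'''

info = json.loads(response_body)
# A 200 status plus a version number means the node answered normally.
healthy = info["status"] == 200 and "number" in info["version"]
print(healthy, info["version"]["number"])  # → True 1.7.1
```

In a real script, the body would come from an HTTP request to port 9200 instead of a literal string.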

 

Installing Logstash

Logstash is a flexible, open source, data collection, enrichment, and transport pipeline designed to efficiently process a growing list of log, event, and unstructured data sources for distribution into a variety of outputs, including Elasticsearch.
Logstash's default external port is 9292; if a firewall is enabled, open that TCP port.

Source install

Install from source on 192.168.186.128, extracting to /usr/local/:

[root@hadoop-master ~]# wget https://download.elastic.co/logstash/logstash/logstash-1.5.3.tar.gz
[root@hadoop-master ~]# tar -zxf logstash-1.5.3.tar.gz -C /usr/local/
[root@hadoop-master logstash-1.5.3]# mkdir /usr/local/logstash-1.5.3/etc

 

Yum install

On 192.168.186.129, install via yum:

[root@hadoop-slave ~]# rpm --import https://packages.elasticsearch.org/GPG-KEY-elasticsearch #download public key
[root@hadoop-slave ~]# vi /etc/yum.repos.d/CentOS-Base.repo
[logstash-1.5]
name=Logstash repository for 1.5.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.5/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
[root@hadoop-slave ~]# yum install logstash # the yum package installs under /opt/logstash

 

Testing

[root@hadoop-slave ~]# cd /opt/logstash/
[root@hadoop-slave logstash]# ls
bin CHANGELOG.md CONTRIBUTORS Gemfile Gemfile.jruby-1.9.lock lib LICENSE NOTICE.TXT vendor
[root@hadoop-slave logstash]# bin/logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'

The terminal now waits for your input. Type Hello World, press Enter, and see what comes back!

[root@hadoop-slave logstash]# vi logstash-simple.conf # the elasticsearch host is this machine
input { stdin { } }
output {
    elasticsearch { host => localhost }
    stdout { codec => rubydebug }
}
[root@hadoop-slave logstash]# ./bin/logstash -f logstash-simple.conf # can be run in the background
……
{
       "message" => "",
      "@version" => "1",
    "@timestamp" => "2015-08-18T06:26:19.348Z",
          "host" => "hadoop-slave"
}
……

 

代表elasticsearch已經收到logstash傳來的數據了,通訊ok!
也能夠經過下面的方式

[root@hadoop-slave etc]# curl 'http://192.168.186.129:9200/_search?pretty'
# a pile of JSON in the response means it works!

 

Logstash Configuration

Logstash concepts

Excerpted from the documentation:
The Logstash community conventionally uses shipper, broker, and indexer to describe the roles of the different processes in the data flow (diagram omitted).

Redis is the usual choice of broker. That said, I've seen many deployments that don't use Logstash as the shipper (the agent role), or don't use Elasticsearch as the data store, and therefore have no indexer. So these concepts aren't strictly necessary; just learn how to run and configure a Logstash process, then place it wherever it fits best in your log management architecture.

Setting the nginx log format

nginx is installed on both machines, so edit nginx.conf on each and set the log format.

[root@hadoop-master ~]# cd /usr/local/nginx/conf/
[root@hadoop-master conf]# vi nginx.conf # set log_format: uncomment these lines
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/host.access.log main; # access log; each request is appended to this file
[root@hadoop-master conf]# nginx -s reload

 

Repeat the same steps on hadoop-slave.
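For reference, lines in this format can also be parsed outside Logstash. A Python sketch of the same field extraction (the regex and field names are my own, mirroring the nginx variables in log_format main):

```python
import re

# Regex mirroring the log_format "main" directive above; group names
# follow the nginx variables ($remote_addr, $status, ...).
LOG_PATTERN = re.compile(
    r'(?P<remote_addr>\S+) - (?P<remote_user>\S+) '
    r'\[(?P<time_local>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<body_bytes_sent>\d+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)" '
    r'"(?P<http_x_forwarded_for>[^"]*)"'
)

# A sample line in the format produced by the configuration above.
line = ('192.168.186.1 - - [18/Aug/2015:22:59:00 -0700] "GET / HTTP/1.1" '
        '304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64)" "-"')

m = LOG_PATTERN.match(line)
fields = m.groupdict()
print(fields["remote_addr"], fields["status"])  # → 192.168.186.1 304
```

This is essentially what the grok filter does later in the pipeline, just written as an explicit regular expression.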

Starting the logstash agent

The logstash agent collects log entries and ships them to the Redis queue.

[root@hadoop-master ~]# cd /usr/local/logstash-1.5.3/
[root@hadoop-master logstash-1.5.3]# mkdir etc
[root@hadoop-master etc]# vi logstash_agent.conf
input {
    file {
        type => "nginx access log"
        path => ["/usr/local/nginx/logs/host.access.log"]
    }
}
output {
    redis {
        host => "192.168.186.129" # the redis server
        data_type => "list"
        key => "logstash:redis"
    }
}
[root@hadoop-master etc]# nohup /usr/local/logstash-1.5.3/bin/logstash -f /usr/local/logstash-1.5.3/etc/logstash_agent.conf &
# configure the logstash agent on the other machine the same way
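What actually lands on the Redis list is one JSON-encoded event per log line. A sketch of that event's shape, assuming fields like those in the rubydebug output earlier; this is illustrative, not the exact wire format of this Logstash version:

```python
import json
from datetime import datetime, timezone

# Assumption: the agent wraps each log line in an event resembling the
# rubydebug output shown earlier, then JSON-encodes it before pushing
# it onto the Redis list "logstash:redis".
def make_event(line, host="hadoop-master", log_type="nginx access log"):
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3]
    return {
        "message": line,                     # the raw log line
        "@version": "1",
        "@timestamp": ts + "Z",              # ISO 8601, millisecond precision
        "host": host,
        "type": log_type,                    # matches the file input's type
        "path": "/usr/local/nginx/logs/host.access.log",
    }

event = make_event('192.168.186.1 - - [18/Aug/2015:22:59:00 -0700] "GET / HTTP/1.1" 304 0')
payload = json.dumps(event)  # this JSON string is what goes on the queue
print(sorted(event.keys()))
```

The indexer on the other side decodes the same JSON, which is why the `type` field set here is available to its filters.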

 

Starting the logstash indexer

[root@hadoop-slave conf]# cd /opt/logstash/
[root@hadoop-slave logstash]# cd etc/
[root@hadoop-slave etc]# vi logstash_indexer.conf
input {
    redis {
        host => "192.168.186.129"
        data_type => "list"
        key => "logstash:redis"
        type => "redis-input"
    }
}
filter {
    # Note: a grok pattern must match your actual log_format; lines that
    # match neither pattern pass through tagged _grokparsefailure.
    grok {
        type => "nginx_access"
        match => [
            "message", "%{IPORHOST:http_host} %{IPORHOST:client_ip} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:http_status_code} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{NUMBER:time_duration:float} %{NUMBER:time_backend_response:float}",
            "message", "%{IPORHOST:http_host} %{IPORHOST:client_ip} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:http_status_code} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{NUMBER:time_duration:float}"
        ]
    }
}
output {
    elasticsearch {
        embedded => false
        protocol => "http"
        host => "localhost"
        port => "9200"
    }
}
[root@hadoop-slave etc]# nohup /opt/logstash/bin/logstash -f /opt/logstash/etc/logstash_indexer.conf &

Configuration complete!

Installing Kibana

Explore and Visualize Your Data: Kibana is an open source data visualization platform that allows you to interact with your data through stunning, powerful graphics that can be combined into custom dashboards that help you share insights from your data far and wide.

[root@hadoop-slave ~]# wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
[root@hadoop-slave elk]# tar -zxf kibana-4.1.1-linux-x64.tar.gz
[root@hadoop-slave elk]# mv kibana-4.1.1-linux-x64 /usr/local/elk/kibana
[root@hadoop-slave bin]# pwd
/usr/local/elk/kibana/bin
[root@hadoop-slave bin]# ./kibana &

Open http://192.168.186.129:5601/
For remote access, open TCP port 5601 in iptables.

Testing ELK + Redis

If the ELK + Redis components aren't running yet, start them with:

[root@hadoop-slave src]# /usr/local/redis/redis-server /usr/local/redis/conf/redis.conf & # start redis
[root@hadoop-slave ~]# elasticsearch start -d # start elasticsearch
[root@hadoop-master etc]# nohup /usr/local/logstash-1.5.3/bin/logstash -f /usr/local/logstash-1.5.3/etc/logstash_agent.conf &
[root@hadoop-slave etc]# nohup /opt/logstash/bin/logstash -f /opt/logstash/etc/logstash_indexer.conf &
[root@hadoop-slave etc]# nohup /opt/logstash/bin/logstash -f /opt/logstash/etc/logstash_agent.conf &
[root@hadoop-slave bin]# ./kibana & # start kibana

 

Open http://192.168.186.129/ and http://192.168.186.128/
Each page refresh produces one access record in the host.access.log file.

[root@hadoop-master logs]# cat host.access.log 
……
192.168.186.1 - - [18/Aug/2015:22:59:00 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
192.168.186.1 - - [18/Aug/2015:23:00:21 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
192.168.186.1 - - [18/Aug/2015:23:06:38 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
192.168.186.1 - - [18/Aug/2015:23:15:52 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
192.168.186.1 - - [18/Aug/2015:23:16:52 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
[root@hadoop-master logs]#

 

Open the Kibana page and the nginx access logs from both machines are displayed. The displayed time is off because the VM's time zone differs from the host's; this doesn't affect anything.

At this point Kibana shows its dashboard interface (screenshots omitted).

Afterword:

Problems I ran into during installation:

  1. Starting elasticsearch and logstash requires a JDK, preferably version 1.8.

  2. Elasticsearch behaves differently across versions. This tutorial uses 1.7, started with ./bin/elasticsearch start, which defaults to a 1 GB heap; newer versions are started with ./bin/elasticsearch and default to 2 GB.

  3. If elasticsearch fails to start with a JVM out-of-memory error, lower its heap size. On 1.7, pass it on the command line: ./bin/elasticsearch -Xmx70m -Xms70m. On 5.0 (tested), edit the $Elasticsearch_HOME/config/jvm.options file:

            -Xms512m
            -Xmx512m

  4. The input/output configuration syntax changes between Logstash versions.

      Output syntax in 1.5.3:

        elasticsearch {
                embedded => false
                protocol => "http"
                host => "localhost"
                port => "9200"
        }

      Output syntax in 2.1.0:

        elasticsearch {
                hosts => ["localhost:9200"]
        }