Reference links
http://udn.yyuap.com/doc/logstash-best-practice-cn/input/file.html
https://es.xiaoleilu.com/
https://www.cnblogs.com/kevingrace/p/5919021.html
1. Operations staff have to keep checking all kinds of logs.
2. Logs are usually only examined after a failure has already happened (a timing problem).
3. With many nodes, logs are scattered and collecting them becomes a problem.
4. Runtime logs, error logs, and so on have no standardized directories, which makes collection difficult.
1. Developers cannot log in to production servers to inspect detailed logs.
2. Every system produces its own logs, and the scattered data is hard to search.
3. The volume of log data is large, queries are slow, and the data is not real-time enough.
1. Collection (Logstash)
2. Storage (Elasticsearch, Redis, Kafka)
3. Search + statistics + visualization (Kibana)
4. Alerting and data analysis (Zabbix)
For logs, the most common needs are collection, storage, search, and visualization, and the open-source community has a project for each: Logstash (collection), Elasticsearch (storage + search), and Kibana (visualization). The combination of the three is called the ELK Stack, so ELK Stack refers to the Elasticsearch, Logstash, and Kibana technology stack used together.
[root@10 ~]# cat /etc/issue
CentOS release 6.7 (Final)
[root@10 ~]# uname -a
Linux 10.0.0.10 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Check whether the JDK bundled with CentOS is already installed:
[root@10 ~]# yum list installed |grep java
java-1.6.0-openjdk.x86_64
java-1.6.0-openjdk-devel.x86_64
java-1.7.0-openjdk.x86_64
java-1.7.0-openjdk-devel.x86_64
tzdata-java.noarch 2015e-1.el6 @anaconda-CentOS-201508042137.x86_64
If a bundled JDK is installed, remove the system's built-in Java environment:
[root@node1 ~]# yum -y remove java-1.7.0-openjdk*
[root@node1 ~]# yum -y remove java-1.6.0-openjdk*
Remove tzdata-java:
[root@10 ~]# yum -y remove tzdata-java.noarch
Upload the Java RPM package; download address:
http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
Install the RPM package:
[root@10 ~]# rpm -ivh jdk-8u131-linux-x64.rpm
Preparing... ########################################### [100%]
1:jdk1.8.0_131 ########################################### [100%]
Unpacking JAR files...
tools.jar...
plugin.jar...
javaws.jar...
deploy.jar...
rt.jar...
jsse.jar...
charsets.jar...
localedata.jar...
Open the profile file:
[root@10 ~]# vim /etc/profile
Append the following lines at the end of the file:
JAVA_HOME=/usr/java/jdk1.8.0_131
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=$JAVA_HOME/jre/lib/ext:$JAVA_HOME/lib/tools.jar
export PATH JAVA_HOME CLASSPATH
Reload the profile file:
[root@10 ~]# source /etc/profile
Check the Java version:
[root@10 ~]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
Download the Elasticsearch package and install it:
[root@10 ~]# wget https://mirror.tuna.tsinghua.edu.cn/ELK/yum/elasticsearch-2.x/elasticsearch-2.4.5.rpm
--2017-04-08 00:35:08-- https://mirror.tuna.tsinghua.edu.cn/ELK/yum/elasticsearch-2.x/elasticsearch-2.4.5.rpm
Resolving mirror.tuna.tsinghua.edu.cn... 101.6.6.178, 2402:f000:1:416:101:6:6:178
Connecting to mirror.tuna.tsinghua.edu.cn|101.6.6.178|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 27238255 (26M) [application/x-redhat-package-manager]
Saving to: 「elasticsearch-2.4.5.rpm」
100%[==================================================================================>] 27,238,255 185K/s in 2m 22s
2017-04-08 00:37:31 (187 KB/s) - 「elasticsearch-2.4.5.rpm」 saved [27238255/27238255]
[root@10 ~]# rpm -ivh elasticsearch-2.4.5.rpm
warning: elasticsearch-2.4.5.rpm: Header V4 RSA/SHA1 Signature, key ID d88e42b4: NOKEY
Preparing... ########################################### [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
1:elasticsearch ########################################### [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using chkconfig
sudo chkconfig --add elasticsearch
### You can start elasticsearch service by executing
sudo service elasticsearch start
Change to the Elasticsearch configuration directory:
[root@10 ~]# cd /etc/elasticsearch/
Edit the elasticsearch.yml file:
[root@10 elasticsearch]# vim elasticsearch.yml
17 cluster.name: oldboy                     # cluster name, important
23 node.name: linux-node1                   # node name
33 path.data: /data/es-data                 # create this directory and grant ownership to elasticsearch
37 path.logs: /var/log/elasticsearch/       # log directory
43 bootstrap.memory_lock: true              # lock memory
54 network.host: 0.0.0.0                    # bind address
58 http.port: 9200                          # port
68 discovery.zen.ping.unicast.hosts: ["IP addresses of the other nodes"]   # unicast hosts, used for clustering
Check whether the /data/es-data directory exists:
[root@10 ~]# find /data/es-data/
If it does not exist, create it:
[root@10 ~]# mkdir -p /data/es-data
Change the ownership:
[root@10 ~]# chown -R elasticsearch:elasticsearch /data/es-data
Start the service:
[root@10 ~]# /etc/init.d/elasticsearch start
Starting elasticsearch: [OK]
Enable it at boot:
chkconfig elasticsearch on
If it fails to start, check the log (named after the cluster):
[root@10 ~]# tail -f /var/log/elasticsearch/oldboy.log
Check whether port 9200 is listening:
[root@10 ~]# netstat -lntp
Access it via IP and port:
http://10.0.0.10:9200
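You can also run a quick sanity check from the shell; the cluster health API (a standard Elasticsearch endpoint) returns JSON that should show the oldboy cluster name and a green or yellow status:
[root@10 ~]# curl 'http://10.0.0.10:9200/_cluster/health?pretty'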
Install the head plugin:
[root@10 ~]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
Access URL:
http://10.0.0.10:9200/_plugin/head/
bigdesk and kopf provide similar monitoring functionality.
Download address:
連接:http://pan.baidu.com/s/1nvzCBOH 密碼:9dag
Extract the archive into the /usr/share/elasticsearch/plugins directory.
Access URL:
http://10.0.0.10:9200/_plugin/bigdesk-master/
You can also rename the bigdesk-master directory to bigdesk for a shorter URL.
Download and install kopf:
/usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
Then open http://10.0.0.10:9200/_plugin/kopf in a local browser.
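To confirm which plugins ended up installed, the same plugin script used above can list them:
[root@10 ~]# /usr/share/elasticsearch/bin/plugin list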
Download the Logstash package and install it:
[root@10 ~]# wget https://mirror.tuna.tsinghua.edu.cn/ELK/yum/logstash-2.4/logstash-2.4.1.noarch.rpm
[root@10 ~]# rpm -ivh logstash-2.4.1.noarch.rpm
Start Logstash:
[root@10 ~]# /etc/init.d/logstash start
Enable it at boot:
[root@10 ~]# chkconfig logstash on
Basic test: standard input to standard output:
[root@10 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'
Detailed output (rubydebug codec):
[root@10 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug} }'
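For reference, typing a line such as hello world at the prompt should produce an event roughly like the following (an illustrative sample; the timestamp and host will differ on your machine):
{
       "message" => "hello world",
      "@version" => "1",
    "@timestamp" => "2017-04-08T00:40:12.345Z",
          "host" => "10.0.0.10"
}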
Send the output to Elasticsearch:
[root@10 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["10.0.0.10:9200"] } }'
Send the output to Elasticsearch and also print the detailed events to stdout:
[root@10 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["10.0.0.10:9200"] } stdout{ codec => rubydebug} }'
Collect /var/log/messages and send it to Elasticsearch:
vim all.conf
Enter the following content:
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
}
output {
elasticsearch {
hosts => ["10.0.0.10:9200"]
index => "system-%{+YYYY.MM.dd}"
}
}
Run the configuration file:
[root@10 ~]# /opt/logstash/bin/logstash -f all.conf
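Once it has been running for a moment, you can verify that the index was created; the _cat API is a quick way to check, and a line beginning with system-<date> should appear in the listing:
[root@10 ~]# curl 'http://10.0.0.10:9200/_cat/indices?v'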
Collect the Java (Elasticsearch) log and merge each multi-line error report into a single event:
input {
file {
path => ["/var/log/messages", "/var/log/secure" ]
type => "system"
start_position => "beginning"
}
file {
path => "/var/log/elasticsearch/elasticsearch.log"
type => "es-error"
start_position => "beginning"
codec => multiline {
pattern => "^\["
negate => true
what => "previous"
}
}
}
output {
if [type] == "system" {
elasticsearch {
hosts => "10.0.0.10"
index => "system-%{+YYYY.MM.dd}"
}
}
if [type] == "es-error" {
elasticsearch {
hosts => "10.0.0.10"
index => "es-error-%{+YYYY.MM.dd}"
}
}
}
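The multiline codec works here because every normal Elasticsearch log line begins with a bracketed timestamp such as [2017-04-08 00:36:10,456]. With negate => true and what => "previous", any line that does not start with [ is appended to the previous event, so a Java stack trace like the hypothetical one below (illustrative lines, not real log output) is indexed as a single es-error event:
[2017-04-08 00:36:10,456][WARN ][cluster.routing] [linux-node1] example warning message
org.elasticsearch.ExampleException: something went wrong
        at org.elasticsearch.example.SomeClass.someMethod(SomeClass.java:42)
        at org.elasticsearch.example.Caller.run(Caller.java:7)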
Configure the nginx.conf file:
[root@10 ~]# vim /etc/nginx/nginx.conf
Insert the following into the http block:
log_format json '{"@timestamp":"$time_iso8601",'
                '"@version":"1",'
                '"client":"$remote_addr",'
                '"url":"$uri",'
                '"status":"$status",'
                '"domain":"$host",'
                '"host":"$server_addr",'
                '"size":$body_bytes_sent,'
                '"responsetime":"$request_time",'
                '"referer":"$http_referer",'
                '"forwarded":"$http_x_forwarded_for",'
                '"ua":"$http_user_agent"'
                '}';
Add the following to the server block and comment out the original access_log line:
#access_log  logs/host.access.log  main;
access_log  /var/log/nginx/access_json.log  json;
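After editing nginx.conf, check the syntax and reload nginx so the new json access log takes effect (assuming nginx was installed with the usual init script on CentOS 6):
[root@10 ~]# nginx -t
[root@10 ~]# /etc/init.d/nginx reload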
Create a json.conf file and add the following content:
input {
    file {
        path => "/var/log/nginx/access_json.log"
        codec => "json"
    }
}
output {
    stdout {
        codec => "rubydebug"
    }
}
Run the json.conf file:
[root@10 ~]# /opt/logstash/bin/logstash -f json.conf
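To generate some test entries while this config is running, request a page from nginx in another terminal (assuming it serves on port 80 of this host) and watch the rubydebug output:
[root@10 ~]# curl http://10.0.0.10/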
Merge the json.conf input into the all.conf file:
input {
file {
path => ["/var/log/messages", "/var/log/secure" ]
type => "system"
start_position => "beginning"
}
file {
path => "/var/log/nginx/access_json.log"
codec => "json"
start_position => "beginning"
type => "nginx-log"
}
file {
path => "/var/log/elasticsearch/elasticsearch.log"
type => "es-error"
start_position => "beginning"
codec => multiline {
pattern => "^\["
negate => true
what => "previous"
}
}
}
output {
if [type] == "system" {
elasticsearch {
hosts => "10.0.0.10"
index => "system-%{+YYYY.MM.dd}"
}
}
if [type] == "es-error" {
elasticsearch {
hosts => "10.0.0.10"
index => "es-error-%{+YYYY.MM.dd}"
}
}
if [type] == "nginx-log" {
elasticsearch {
hosts => "10.0.0.10"
index => "nginx-log-%{+YYYY.MM.dd}"
}
}
}
Run the all.conf file:
[root@10 ~]# /opt/logstash/bin/logstash -f all.conf
Collecting the nginx log may take a little longer; give it some time before the index appears.
Create a syslog.conf file and write the following content:
[root@10 ~]# vim syslog.conf
input {
    syslog {
        type => "system-syslog"
        host => "10.0.0.10"
        port => "514"
    }
}
output {
    stdout {
        codec => "rubydebug"
    }
}
Run the syslog.conf file:
[root@10 ~]# /opt/logstash/bin/logstash -f syslog.conf
Edit line 79 of the rsyslog.conf file:
[root@10 ~]# vim /etc/rsyslog.conf
Change  #*.* @@remote-host:514  to  *.* @@10.0.0.11:514
Restart the rsyslog service:
[root@10 ~]# /etc/init.d/rsyslog restart
Check whether events show up in the Logstash output.
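An easy way to produce a test event is the logger command, which sends a message through rsyslog and should appear in the rubydebug output:
[root@10 ~]# logger "hello from rsyslog"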
Merge the syslog input into the all.conf file:
input {
syslog {
type => "system-syslog"
host => "10.0.0.12"
port => "514"
}
file {
path => ["/var/log/messages", "/var/log/secure" ]
type => "system"
start_position => "beginning"
}
file {
path => "/var/log/nginx/access_json.log"
codec => "json"
start_position => "beginning"
type => "nginx-log"
}
file {
path => "/var/log/elasticsearch/elasticsearch.log"
type => "es-error"
start_position => "beginning"
codec => multiline {
pattern => "^\["
negate => true
what => "previous"
}
}
}
output {
if [type] == "system" {
elasticsearch {
hosts => "10.0.0.12"
index => "system-%{+YYYY.MM.dd}"
}
}
if [type] == "es-error" {
elasticsearch {
hosts => "10.0.0.12"
index => "es-error-%{+YYYY.MM.dd}"
}
}
if [type] == "nginx-log" {
elasticsearch {
hosts => "10.0.0.12"
index => "nginx-log-%{+YYYY.MM.dd}"
}
}
if [type] == "system-syslog" {
elasticsearch {
hosts => "10.0.0.12"
index => "system-syslog-%{+YYYY.MM.dd}"
}
}
}
Collect the MySQL slow query log:
input {
    #stdin {
    file {
        type => "mysql-slowlog"
        path => "/root/master-slow.log"
        start_position => "beginning"
        codec => multiline {
            pattern => "^# User@Host:"
            negate => true
            what => previous
        }
    }
}
filter {
    # drop sleep events
    grok {
        match => { "message" => "SELECT SLEEP" }
        add_tag => [ "sleep_drop" ]
        tag_on_failure => [] # prevent default _grokparsefailure tag on real records
    }
    if "sleep_drop" in [tags] {
        drop {}
    }
    grok {
        match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s*# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)\n# Time:.*$" ]
        #match => [ "message", "# User@Host:\s+%{WORD:user1}\[%{WORD:user2}\]\s+@\s+\[(?:%{IP:clientip})?\]\s+#\s+Thread_id:\s+%{NUMBER:thread_id:int}\s+Schema:\s+%{WORD:schema}\s+QC_hit:\s+%{WORD:qc_hit}\s+#\s+Query_time:\s+%{NUMBER:query_time:float}\s+Lock_time:\s+%{NUMBER:lock_time:float}\s+Rows_sent:\s+%{NUMBER:rows_sent:int}\s+Rows_examined:\s+%{NUMBER:rows_examined:int}\s+#\s+Rows_affected:\s+%{NUMBER:rows_affected:int}\s+SET\s+timestamp=%{NUMBER:timestamp};\s+(?<query>(?<action>\w+)\s+.*);"]
    }
    date {
        match => [ "timestamp", "UNIX" ]
        remove_field => [ "timestamp" ]
    }
}
output {
    elasticsearch {
        hosts => ["10.0.0.10:9200"]
        index => "mysql-slow-log-%{+YYYY.MM.dd}"
    }
    stdout { codec => rubydebug }
}
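For this input to have anything to read, the MySQL slow query log must be enabled and written to the path used above. A minimal my.cnf sketch, assuming MySQL 5.5 or later (variable names differ on very old versions):
[mysqld]
slow_query_log      = 1                        # enable the slow query log
slow_query_log_file = /root/master-slow.log    # must match the path in the logstash config
long_query_time     = 1                        # log queries slower than 1 second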
Write events into Redis:
input { stdin {} }
output {
    redis {
        host => "10.0.0.12"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "demo"
    }
}
Read the events back from Redis and write them into Elasticsearch:
input {
    redis {
        host => "10.0.0.12"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "demo"
        batch_count => 1
    }
}
output {
    elasticsearch {
        hosts => ["10.0.0.12:9200"]
        index => "redis-demo-%{+YYYY.MM.dd}"
    }
}
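Before starting the reader, you can check that events actually queued up in Redis; LLEN on the demo list in database 6 should return a non-zero count (assuming redis-cli is available on the Redis host):
[root@10 ~]# redis-cli -h 10.0.0.12 -p 6379 -n 6 llen demo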
Kibana: the open-source visualization platform
Installation:
[root@10 ~]# wget https://mirror.tuna.tsinghua.edu.cn/ELK/yum/kibana-4.6/kibana-4.6.3-x86_64.rpm
[root@10 ~]# rpm -ivh kibana-4.6.3-x86_64.rpm
Edit the configuration file:
[root@10 ~]# vim /opt/kibana/config/kibana.yml
server.port: 5601                              # port
server.host: "0.0.0.0"                         # bind address
elasticsearch.url: "http://10.0.0.10:9200"     # Elasticsearch address
kibana.index: ".kibana"
Start it and enable it at boot:
[root@10 ~]# /etc/init.d/kibana start
[root@10 ~]# chkconfig kibana on
Access URL:
http://10.0.0.10:5601
1. Log classification
System logs:  rsyslog -> logstash syslog plugin
Access logs:  nginx   -> logstash codec json
Error logs:   file    -> logstash file + multiline
Runtime logs: file    -> logstash codec json
Device logs:  syslog  -> logstash syslog plugin
Debug logs:   file    -> logstash json or multiline
2. Log standardization: fixed paths and fixed formats, JSON wherever possible. Roll out in this order: system logs -> error logs -> runtime logs -> access logs.