ELK Log Analysis System

Concept Overview

Elasticsearch

Elasticsearch is a search server built on Lucene. It provides a distributed, multi-tenant full-text search engine with a RESTful web interface. Elasticsearch is developed in Java and released as open source under the Apache License, and it is the second most popular enterprise search engine. Designed for the cloud, it delivers near-real-time search and is stable, reliable, fast, and easy to install and use.
In Elasticsearch, the data on all nodes is equal.

Elasticsearch official site

Logstash

  Logstash is a fully open-source tool that collects and parses your logs and stores them for later use (for example, searching). Speaking of searching, Logstash comes with a web interface for searching and displaying all your logs.

Logstash official site

Kibana

Kibana is a browser-based frontend for displaying Elasticsearch data. Kibana is written entirely in HTML and JavaScript.

Kibana official site

Elastic deployment and configuration documentation

Configuration docs

https://www.elastic.co/guide/index.html

Deployment environment

OS:        CentOS 7.1

Firewall:  disabled

SELinux:   disabled

Hostnames: configured to a consistent scheme

Hosts:     two

Note: perform the following installation on both hosts at the same time.

(1) Elasticsearch

Base environment installation

1: Download and install the GPG key

[root@hadoop-node1 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

2: Add the yum repository

[root@hadoop-node1 ~]# vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

3: Install Elasticsearch

[root@hadoop-node1 ~]# yum install -y elasticsearch

4: Install related test software

# Install Redis
yum install -y redis
# Install Nginx
yum install -y nginx
# Install Java
yum install -y java

5: After installing Java, verify it

[root@linux-node1 src]# java -version
openjdk version "1.8.0_65"
OpenJDK Runtime Environment (build 1.8.0_65-b17)
OpenJDK 64-Bit Server VM (build 25.65-b01, mixed mode)

Configuration and deployment

1: Modify the configuration file

[root@linux-node1 ~]# mkdir -p /data/es-data
[root@linux-node1 src]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: caoxiaojian                # cluster name (must be identical on every node in the cluster)
node.name: linux-node1                   # node name; recommended to match the hostname
path.data: /data/es-data                 # data storage path
path.logs: /var/log/elasticsearch/       # log storage path
bootstrap.mlockall: true                 # lock memory so it is not swapped out
network.host: 0.0.0.0                    # network settings
http.port: 9200                          # port

2: Start and check

[root@linux-node1 src]# chown  -R elasticsearch.elasticsearch /data/
[root@linux-node1 src]# systemctl  start elasticsearch
[root@linux-node1 src]# systemctl  status elasticsearch
 CGroup: /system.slice/elasticsearch.service
           └─3005 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSI...
##### heap: minimum 256m, maximum 1g
[root@linux-node1 src]# netstat -antlp |egrep "9200|9300"
tcp6       0      0 :::9200                 :::*                    LISTEN      3005/java           
tcp6       0      0 :::9300                 :::*                    LISTEN      3005/java

Then access it over the web (my IP is 192.168.56.11):

http://192.168.56.11:9200/

3: Query the data from the command line

[root@linux-node1 src]# curl -i -XGET 'http://192.168.56.11:9200/_count?pretty' -d '{"query":{"match_all":{}}}'
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 95

{
  "count" : 0,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  }
}
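Because the response is plain JSON, any HTTP client or script can consume it. A minimal sketch of pulling the fields out of the exact body shown above (parsing only; no request is made here):

```python
import json

# The JSON body returned by the _count API, as shown above.
response_body = '''
{
  "count" : 0,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  }
}
'''

result = json.loads(response_body)
doc_count = result["count"]      # number of matching documents
shards = result["_shards"]       # shard-level success/failure breakdown
print(doc_count, shards)
```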

Having to query from the command line like this is not exactly convenient.

4: The better way: view it with the head plugin

4.1: Install the head plugin

[root@linux-node1 src]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head

4.2: With the head plugin installed, use its web interface

### Example: inserting data

Image

### Example: browsing data

Image

### Example: compound queries

Image

5: Monitoring the nodes

5.1: Install

[root@linux-node1 src]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf

5.2: View

http://192.168.56.11:9200/_plugin/kopf/#!/cluster

Image

Next, configure node 2

Note: the installation and configuration of the two nodes are basically the same; the differences are called out below.

# Modify the configuration file
[root@linux-node2 src]# mkdir -p /data/es-data   
[root@linux-node2 src]# vim /etc/elasticsearch/elasticsearch.yml 
[root@linux-node2 src]# grep "^[a-z]" /etc/elasticsearch/elasticsearch.yml  -n
17:cluster.name: caoxiaojian
23:node.name: linux-node2
33:path.data: /data/es-data
37:path.logs: /var/log/elasticsearch
43:bootstrap.mlockall: true
54:network.host: 0.0.0.0
58:http.port: 9200
79:discovery.zen.ping.multicast.enabled: false
80:discovery.zen.ping.unicast.hosts: ["192.168.56.11", "192.168.56.12"]
# Fix ownership
[root@linux-node2 src]# chown -R elasticsearch.elasticsearch /data/
# Start the service
[root@linux-node2 src]# systemctl start elasticsearch
[root@linux-node2 src]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2016-01-13 00:42:19 CST; 5s ago
     Docs: http://www.elastic.co
  Process: 2926 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
 Main PID: 2928 (java)
   CGroup: /system.slice/elasticsearch.service
           └─2928 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSI...

Jan 13 00:42:19 linux-node2.example.com systemd[1]: Starting Elasticsearch...
Jan 13 00:42:19 linux-node2.example.com systemd[1]: Started Elasticsearch.
# Check the ports
[root@linux-node2 src]# netstat -antlp|egrep "9200|9300"
tcp6       0      0 :::9200                 :::*                    LISTEN      2928/java           
tcp6       0      0 :::9300                 :::*                    LISTEN      2928/java           
tcp6       0      0 127.0.0.1:48200         127.0.0.1:9300          TIME_WAIT   -                   
tcp6       0      0 ::1:41892               ::1:9300                TIME_WAIT   -

After adding node2, look at the head page again

Image

Originally there was only the linux-node1 node; now node2 appears as well. The star marks the master node.

In this cluster the master node is not otherwise special; the nodes hold the same data.

(2) Logstash

Base environment installation

1: Download and install the GPG key

[root@hadoop-node1 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

2: Add the yum repository

[root@hadoop-node1 ~]# vim /etc/yum.repos.d/logstash.repo
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

3: Install Logstash

[root@hadoop-node1 ~]# yum install -y logstash

4: Start Logstash

[root@linux-node1 src]# systemctl start logstash

Testing the data flow

1: Basic input and output

[root@linux-node1 src]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'
Settings: Default filter workers: 1
Logstash startup completed
hello                                                                                           #  type hello
2016-01-13T01:40:45.293Z linux-node1.example.com hello                                          #  and this is printed
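Each input line becomes an event that Logstash decorates with metadata before output. A rough Python sketch of how the plain stdout line above is assembled (the field names follow the rubydebug output shown in the next step; the exact timestamp is generated at run time):

```python
from datetime import datetime, timezone

def make_event(message, host):
    """Mimic the core fields Logstash attaches to every event."""
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
    return {
        "message": message,
        "@version": "1",
        "@timestamp": ts,
        "host": host,
    }

def plain_stdout(event):
    # The default (non-rubydebug) stdout line: timestamp, host, message.
    return f'{event["@timestamp"]} {event["host"]} {event["message"]}'

event = make_event("hello", "linux-node1.example.com")
print(plain_stdout(event))
```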

2: Verbose output with rubydebug

[root@linux-node1 src]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug} }'
Settings: Default filter workers: 1
Logstash startup completed
hello                                                                                    # type hello
{
       "message" => "hello",                                                             # the input message
      "@version" => "1",                                                                  # version
    "@timestamp" => "2016-01-13T01:43:19.454Z",                                           # timestamp
          "host" => "linux-node1.example.com"                                              # which host the event came from
}

3: Writing the input into Elasticsearch

[root@linux-node1 src]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.56.11:9200"]} }'
Settings: Default filter workers: 1
Logstash startup completed                                       # enter some test data
123123
hehehehe
123he123he
qwert

The difference between rubydebug and writing to Elasticsearch is simply the output stage: one uses the stdout codec, the other the elasticsearch output.

Viewing in Elasticsearch what Logstash wrote

Image

Image

Image

4: Writing to Elasticsearch and to stdout at the same time

[root@linux-node1 src]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.56.11:9200"]} stdout{ codec => rubydebug}}'
Settings: Default filter workers: 1
Logstash startup completed
nishishui                                                                           # typed input
{
       "message" => "nishishui",
      "@version" => "1",
    "@timestamp" => "2016-01-13T02:22:35.330Z",
          "host" => "linux-node1.example.com"
}
bugaosuni                                                                           # typed input
{
       "message" => "bugaosuni",
      "@version" => "1",
    "@timestamp" => "2016-01-13T02:22:40.497Z",
          "host" => "linux-node1.example.com"
}

Plain-text copies can be kept long-term, are simple to work with, and compress well.

Image

Writing Logstash configuration files

1: Logstash configuration

A simple configuration

[root@linux-node1 src]# vim /etc/logstash/conf.d/01-logstash.conf
input { stdin { } }
output {
     elasticsearch { hosts => ["192.168.56.11:9200"]}
     stdout { codec => rubydebug }
}

Run it:

[root@linux-node1 ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/01-logstash.conf

References

https://www.elastic.co/guide/en/logstash/current/configuration.html
https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html

2: Collecting system logs

[root@linux-node1 ~]# vim  file.conf
input {
    file {
      path => "/var/log/messages"
      type => "system"
      start_position => "beginning"
    }
}

output {
    elasticsearch {
       hosts => ["192.168.56.11:9200"]
       index => "system-%{+YYYY.MM.dd}"
    }
}
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f file.conf 
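The `%{+YYYY.MM.dd}` in the index option is a date-format placeholder that Logstash fills from each event's @timestamp, producing one index per day. A hedged Python sketch of the equivalent naming scheme:

```python
from datetime import date

def daily_index(prefix, day):
    # Equivalent of index => "system-%{+YYYY.MM.dd}" for a given event date.
    return f"{prefix}-{day.strftime('%Y.%m.%d')}"

print(daily_index("system", date(2016, 1, 13)))  # system-2016.01.13
```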

Image

A file is read line by line, but to Logstash each line is an event.

Image

References

https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html

3: Collecting Java logs

This builds on the log collection configured above.

[root@linux-node1 ~]# cat file.conf 
input {
    file {
      path => "/var/log/messages"
      type => "system"
      start_position => "beginning"
    }
}

input {
    file {
       path => "/var/log/elasticsearch/caoxiaojian.log"
       type => "es-error" 
       start_position => "beginning"
    }
}


output {

    if [type] == "system"{
        elasticsearch {
           hosts => ["192.168.56.11:9200"]
           index => "system-%{+YYYY.MM.dd}"
        }
    }

    if [type] == "es-error"{
        elasticsearch {
           hosts => ["192.168.56.11:9200"]
           index => "es-error-%{+YYYY.MM.dd}"
        }
    }
}
 
 
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f file.conf 
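The output section above is just per-event routing: each event's type field selects the index it is written to. A minimal Python sketch of that dispatch (index names taken from the config above):

```python
def route(event):
    """Pick the index name based on the event's type field, mirroring
    the if [type] == ... blocks in the Logstash output section."""
    routes = {
        "system": "system-%{+YYYY.MM.dd}",
        "es-error": "es-error-%{+YYYY.MM.dd}",
    }
    return routes.get(event.get("type"))

print(route({"type": "system", "message": "kernel: ..."}))
print(route({"type": "es-error", "message": "[2016-01-13] ..."}))
```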

If your log events already contain a type field, you cannot rely on the type option in the conf file: it will not override the existing field.

No input is needed; just go look at the results.

Image

References

https://www.elastic.co/guide/en/logstash/current/event-dependent-configuration.html

One problem:
    Each error is collected as a separate line, instead of one event per error (for example, a whole stack trace).

4: Grouping lines into events

[root@linux-node1 ~]# vim multiline.conf
input {
    stdin {
       codec => multiline {
          pattern => "^\["
          negate => true
          what => "previous"
        }
    }
}
output {
    stdout {
      codec => "rubydebug"
     }  
}

Run the command

[root@linux-node1 ~]# /opt/logstash/bin/logstash -f multiline.conf 
Settings: Default filter workers: 1
Logstash startup completed
123
456
[123
{
    "@timestamp" => "2016-01-13T06:17:18.542Z",
       "message" => "123\n456",
      "@version" => "1",
          "tags" => [
        [0] "multiline"
    ],
          "host" => "linux-node1.example.com"
}
123]
[456]
{
    "@timestamp" => "2016-01-13T06:17:27.592Z",
       "message" => "[123\n123]",
      "@version" => "1",
          "tags" => [
        [0] "multiline"
    ],
          "host" => "linux-node1.example.com"
}

Until a line starting with [ appears, Logstash keeps buffering; only when the next [ arrives does the buffered text get flushed as a single event.
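That buffering logic can be simulated outside Logstash. A sketch of what the multiline codec with `negate => true` and `what => "previous"` does: a line that does not match `^\[` is appended to the buffered event, and a matching line flushes the buffer and starts a new one.

```python
import re

def multiline(lines, pattern=r"^\["):
    """Group lines as codec multiline with negate => true, what => "previous":
    non-matching lines attach to the previous event; a matching line
    flushes the buffer and starts a new event."""
    events, buf = [], []
    for line in lines:
        if re.match(pattern, line) and buf:
            events.append("\n".join(buf))
            buf = []
        buf.append(line)
    # The final buffer stays unflushed until another match arrives,
    # which is why the last input only appears as an event later.
    return events, "\n".join(buf)

events, pending = multiline(["123", "456", "[123", "123]", "[456]"])
print(events)   # flushed events
print(pending)  # still buffered
```

Feeding in the same lines as the demo above reproduces its output: the first event is "123\n456" and the second is "[123\n123]", while "[456]" is still waiting in the buffer.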

References

https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html

(3) Kibana

Installing Kibana

[root@hadoop-node1 src]#  cd /usr/local/src
[root@hadoop-node1 src]#  wget https://download.elastic.co/kibana/kibana/kibana-4.3.1-linux-x64.tar.gz
[root@hadoop-node1 src]#  tar zxf kibana-4.3.1-linux-x64.tar.gz
[root@hadoop-node1 src]#  mv kibana-4.3.1-linux-x64 /usr/local/
[root@hadoop-node1 src]#  ln -s /usr/local/kibana-4.3.1-linux-x64/ /usr/local/kibana

Modify the configuration file

[root@linux-node1 config]# pwd
/usr/local/kibana/config
[root@linux-node1 config]# grep "^[a-z]" kibana.yml  -n
2:server.port: 5601
5:server.host: "0.0.0.0"
12:elasticsearch.url: "http://192.168.56.11:9200"
20:kibana.index: ".kibana"

Because Kibana runs in the foreground, either keep a dedicated terminal open for it or run it inside screen.

Install screen and start Kibana in it

[root@linux-node1 ~]# yum -y install screen
[root@linux-node1 ~]# screen 
[root@linux-node1 ~]# /usr/local/kibana/bin/kibana 
  log   [14:42:44.057] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
  log   [14:42:44.081] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [14:42:44.083] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
  log   [14:42:44.084] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
  log   [14:42:44.095] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
  log   [14:42:44.103] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
  log   [14:42:44.108] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
  log   [14:42:44.124] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
  log   [14:42:44.136] [info][listening] Server running at http://0.0.0.0:5601
  log   [14:42:49.135] [info][status][plugin:elasticsearch] Status changed from yellow to yellow - No existing Kibana index found
  log   [14:42:51.800] [info][status][plugin:elasticsearch] Status changed from yellow to green - Kibana index ready

# then press Ctrl+a d to detach
[root@linux-node1 ~]# screen -ls
There is a screen on:
    7572.pts-1.linux-node1    (Detached)
1 Socket in /var/run/screen/S-root.

Access it at:

http://192.168.56.11:5601/

Image

Then click Discover at the top

Image

1: Collecting Nginx access logs

Modify the Nginx configuration

##### in the http block
    log_format  json '{"@timestamp":"$time_iso8601",'
               '"@version":"1",'
               '"client":"$remote_addr",'
               '"url":"$uri",'
               '"status":"$status",'
               '"domain":"$host",'
               '"host":"$server_addr",'
               '"size":$body_bytes_sent,'
               '"responsetime":$request_time,'
               '"referer": "$http_referer",'
               '"ua": "$http_user_agent"'
               '}';
##### in the server block
       access_log  /var/log/nginx/access_json.log  json;
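Because the log_format above emits one JSON object per line, each entry parses without any grok work. A sketch using a sample line shaped like that format (the field values here are made up for illustration):

```python
import json

# One access-log line as the json log_format above would write it
# (values are illustrative, not captured from a real request).
line = ('{"@timestamp":"2016-01-13T15:29:48+08:00","@version":"1",'
        '"client":"192.168.56.1","url":"/index.html","status":"304",'
        '"domain":"192.168.56.11","host":"192.168.56.11","size":0,'
        '"responsetime":0.000,"referer":"-","ua":"Mozilla/5.0"}')

entry = json.loads(line)
print(entry["client"], entry["status"], entry["size"])
```

Note that size and responsetime are unquoted in the log_format, so they parse as numbers rather than strings.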

Start the Nginx service

[root@linux-node1 ~]# systemctl start nginx
[root@linux-node1 ~]# systemctl status nginx
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2016-01-13 15:17:19 CST; 4s ago
  Process: 7630 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
  Process: 7626 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
  Process: 7625 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
 Main PID: 7633 (nginx)
   CGroup: /system.slice/nginx.service
           ├─7633 nginx: master process /usr/sbin/nginx
           └─7634 nginx: worker process

[root@linux-node1 ~]# netstat -antlp |grep 80
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      7633/nginx: master  
tcp        0      0 192.168.56.11:55580     192.168.56.11:9200      TIME_WAIT   -                   
tcp6       0      0 :::80                   :::*                    LISTEN      7633/nginx: master

Write the collection config

This time we collect with the json codec

[root@linux-node1 ~]# cat json.conf 
input {
   file {
      path => "/var/log/nginx/access_json.log"
      codec => "json"
   }
}

output {
   stdout {
      codec => "rubydebug"
   }
}

Start the log collector

[root@linux-node1 ~]# /opt/logstash/bin/logstash -f json.conf

Visit the Nginx page

and output like the following appears:

[root@linux-node1 ~]# /opt/logstash/bin/logstash -f json.conf 
Settings: Default filter workers: 1
Logstash startup completed
{
      "@timestamp" => "2016-01-13T07:29:48.000Z",
        "@version" => "1",
          "client" => "192.168.56.1",
             "url" => "/index.html",
          "status" => "304",
          "domain" => "192.168.56.11",
            "host" => "192.168.56.11",
            "size" => 0,
    "responsetime" => 0.0,
         "referer" => "-",
              "ua" => "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36",
            "path" => "/var/log/nginx/access_json.log"
}

View it in Elasticsearch

Image

Merge it into the combined config file

[root@linux-node1 ~]# cat file.conf 
input {
    file {
      path => "/var/log/messages"
      type => "system"
      start_position => "beginning"
    }

    file {
       path => "/var/log/elasticsearch/caoxiaojian.log"
       type => "es-error" 
       start_position => "beginning"
       codec => multiline {
           pattern => "^\["
           negate => true
           what => "previous"
       }
    }
    file {
       path => "/var/log/nginx/access_json.log"
       codec => json
       start_position => "beginning"
       type => "nginx-log"
    }
}


output {

    if [type] == "system"{
        elasticsearch {
           hosts => ["192.168.56.11:9200"]
           index => "system-%{+YYYY.MM.dd}"
        }
    }

    if [type] == "es-error"{
        elasticsearch {
           hosts => ["192.168.56.11:9200"]
           index => "es-error-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx-log"{
        elasticsearch {
           hosts => ["192.168.56.11:9200"]
           index => "nginx-log-%{+YYYY.MM.dd}"
        }
    }
}

Add it to Kibana

Here the index pattern should be nginx-log*

Image

2: Collecting syslog

Write the collection config and run it

[root@linux-node1 ~]# cat syslog.conf 
input {
    syslog {
        type => "system-syslog"
        host => "192.168.56.11"
        port => "514"
    }
}

output {
    stdout {
        codec => "rubydebug"
    }
}

[root@linux-node1 ~]# /opt/logstash/bin/logstash -f syslog.conf

Open a new terminal and check that the service is listening

[root@linux-node1 ~]# netstat -ntlp|grep 514
tcp6       0      0 192.168.56.11:514       :::*                    LISTEN      7832/java
[root@linux-node1 ~]# vim /etc/rsyslog.conf
*.* @@192.168.56.11:514
[root@linux-node1 ~]# systemctl  restart rsyslog

Back in the original terminal, data starts to appear

[root@linux-node1 ~]# /opt/logstash/bin/logstash -f syslog.conf 
Settings: Default filter workers: 1
Logstash startup completed
{
           "message" => "[origin software=\"rsyslogd\" swVersion=\"7.4.7\" x-pid=\"7879\" x-info=\"http://www.rsyslog.com\"] start\n",
          "@version" => "1",
        "@timestamp" => "2016-01-13T08:14:53.000Z",
              "type" => "system-syslog",
              "host" => "192.168.56.11",
          "priority" => 46,
         "timestamp" => "Jan 13 16:14:53",
         "logsource" => "linux-node1",
           "program" => "rsyslogd",
          "severity" => 6,
          "facility" => 5,
    "facility_label" => "syslogd",
    "severity_label" => "Informational"
}
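The priority, facility, and severity fields above are related by the syslog encoding: priority = facility * 8 + severity. A quick check against the event shown (priority 46 decodes to facility 5/syslogd and severity 6/Informational):

```python
def decode_priority(pri):
    """Split a syslog priority value into (facility, severity),
    per the RFC 3164 encoding pri = facility * 8 + severity."""
    return pri // 8, pri % 8

facility, severity = decode_priority(46)
print(facility, severity)  # 5 6
```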

Add it to the combined config file as well

[root@linux-node1 ~]# cat file.conf 
input {
    file {
      path => "/var/log/messages"
      type => "system"
      start_position => "beginning"
    }

    file {
       path => "/var/log/elasticsearch/caoxiaojian.log"
       type => "es-error" 
       start_position => "beginning"
       codec => multiline {
           pattern => "^\["
           negate => true
           what => "previous"
       }
    }
    file {
       path => "/var/log/nginx/access_json.log"
       codec => json
       start_position => "beginning"
       type => "nginx-log"
    }
    syslog {
        type => "system-syslog"
        host => "192.168.56.11"
        port => "514"
    }

}


output {

    if [type] == "system"{
        elasticsearch {
           hosts => ["192.168.56.11:9200"]
           index => "system-%{+YYYY.MM.dd}"
        }
    }

    if [type] == "es-error"{
        elasticsearch {
           hosts => ["192.168.56.11:9200"]
           index => "es-error-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx-log"{
        elasticsearch {
           hosts => ["192.168.56.11:9200"]
           index => "nginx-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "system-syslog"{
        elasticsearch {
           hosts => ["192.168.56.11:9200"]
           index => "system-syslog-%{+YYYY.MM.dd}"
        }
    }
}

Run the combined config

[root@linux-node1 ~]# /opt/logstash/bin/logstash -f file.conf

Test:

Add data to the logs and watch Elasticsearch and Kibana change

Add some entries to syslog (in another terminal)
[root@linux-node1 ~]# logger "hehehehehehe1"
[root@linux-node1 ~]# logger "hehehehehehe2"
[root@linux-node1 ~]# logger "hehehehehehe3"
[root@linux-node1 ~]# logger "hehehehehehe4"
[root@linux-node1 ~]# logger "hehehehehehe5"

This chart appears

Image

Add it to Kibana

Image

View it in Discover

Image

3: Collecting TCP logs

(not added to the combined file here; add it if you need it)

Write the collection config and start it

[root@linux-node1 ~]# vim tcp.conf
input {
    tcp {
       host => "192.168.56.11"
       port => "6666"
    }
}
output {
    stdout {
       codec => "rubydebug"
    }
}
 
 
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f tcp.conf 

Open another terminal to test

[root@linux-node1 ~]# netstat -ntlp|grep 6666
tcp6       0      0 192.168.56.11:6666      :::*                    LISTEN      7957/java
[root@linux-node1 ~]# nc 192.168.56.11 6666 </etc/resolv.conf

View the output

[root@linux-node1 ~]# /opt/logstash/bin/logstash -f tcp.conf 
Settings: Default filter workers: 1
Logstash startup completed
{
       "message" => "# Generated by NetworkManager",
      "@version" => "1",
    "@timestamp" => "2016-01-13T08:48:17.426Z",
          "host" => "192.168.56.11",
          "port" => 44721
}
{
       "message" => "search example.com",
      "@version" => "1",
    "@timestamp" => "2016-01-13T08:48:17.427Z",
          "host" => "192.168.56.11",
          "port" => 44721
}
{
       "message" => "nameserver 192.168.56.2",
      "@version" => "1",
    "@timestamp" => "2016-01-13T08:48:17.427Z",
          "host" => "192.168.56.11",
          "port" => 44721
}

Test 2:

[root@linux-node1 ~]# echo "hehe" | nc 192.168.56.11 6666
[root@linux-node1 ~]# echo "hehe" > /dev/tcp/192.168.56.11/6666   # bash pseudo-device
Check again, and the entries show up:
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f tcp.conf 
Settings: Default filter workers: 1
Logstash startup completed
{
       "message" => "hehe",
      "@version" => "1",
    "@timestamp" => "2016-01-13T08:56:19.635Z",
          "host" => "192.168.56.11",
          "port" => 45490
}
{
       "message" => "hehe",
      "@version" => "1",
    "@timestamp" => "2016-01-13T08:56:54.620Z",
          "host" => "192.168.56.11",
          "port" => 45543

4: Using a filter

Write the config

[root@linux-node1 ~]# cat grok.conf
input {
    stdin{}
}
filter {
  grok {
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}
output {
    stdout{
        codec => "rubydebug"
    }
}

Run and test

[root@linux-node1 ~]# /opt/logstash/bin/logstash -f grok.conf 
Settings: Default filter workers: 1
Logstash startup completed
55.3.244.1 GET /index.html 15824 0.043  # type this; the parsed fields appear below as a hash
{
       "message" => "55.3.244.1 GET /index.html 15824 0.043",
      "@version" => "1",
    "@timestamp" => "2016-01-13T15:02:55.845Z",
          "host" => "linux-node1.example.com",
        "client" => "55.3.244.1",
        "method" => "GET",
       "request" => "/index.html",
         "bytes" => "15824",
      "duration" => "0.043"
}
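Grok patterns like %{IP:client} are essentially named regular expressions. A rough Python equivalent of the pattern above (the sub-regexes here are simplified stand-ins for the real IP, WORD, URIPATHPARAM, and NUMBER definitions):

```python
import re

# Simplified stand-ins for grok's IP, WORD, URIPATHPARAM and NUMBER patterns.
pattern = re.compile(
    r"(?P<client>\d{1,3}(?:\.\d{1,3}){3}) "
    r"(?P<method>\w+) "
    r"(?P<request>\S+) "
    r"(?P<bytes>\d+) "
    r"(?P<duration>\d+\.\d+)"
)

m = pattern.match("55.3.244.1 GET /index.html 15824 0.043")
fields = m.groupdict()
print(fields)
```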

The pattern names used above are all predefined and ship with Logstash:

[root@linux-node1 logstash-patterns-core-2.0.2]# cd /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.2/patterns/
[root@linux-node1 patterns]# ls
aws     bro   firewalls      haproxy  junos         mcollective           mongodb  postgresql  redis
bacula  exim  grok-patterns  java     linux-syslog  mcollective-patterns  nagios   rails       ruby
[root@linux-node1 patterns]# cat grok-patterns

5: MySQL slow query log

Collection config

[root@linux-node1 ~]# cat mysql-slow.conf 
input {
    file {
        path => "/root/slow.log"
        type => "mysql-slowlog"
        codec => multiline {
            pattern => "^# User@Host"
            negate => true
            what => "previous"
        }
    }
}

filter {
      # drop sleep events
    grok {
        match => { "message" =>"SELECT SLEEP" }
        add_tag => [ "sleep_drop" ]
        tag_on_failure => [] # prevent default _grokparsefailure tag on real records
      }
     if "sleep_drop" in [tags] {
        drop {}
     }
     grok {
        match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s+Id: %{NUMBER:row_id:int}\s*# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)\n#\s*" ]
      }
      date {
        match => [ "timestamp", "UNIX" ]
        remove_field => [ "timestamp" ]
      }
}


output {
    stdout {
       codec =>"rubydebug"
    }
}

Run and test.

##### slow.log above is a file you upload yourself; after inserting data into it and saving, the parsed output will appear
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f mysql-slow.conf

Now we hit another problem: once Elasticsearch has an outage, events can no longer be processed. What to do? Use a broker: write events to the broker first, then ship them from the broker into ES. Problem solved.

(4) Using Redis as a broker

1: Configure and start Redis

[root@linux-node1 ~]# vim /etc/redis.conf
daemonize yes
bind 192.168.56.11
[root@linux-node1 ~]# systemctl start redis
[root@linux-node1 ~]# netstat -anltp|grep 6379
tcp        0      0 192.168.56.11:6379      0.0.0.0:*               LISTEN      8453/redis-server 1 
[root@linux-node1 ~]# redis-cli -h 192.168.56.11
192.168.56.11:6379> info

2: Write the client-side shipping config

[root@linux-node1 ~]# cat redis-out.conf 
input {
   stdin {}
}

output {
   redis {
      host => "192.168.56.11"
      port => "6379"
      db => "6"
      data_type => "list"
      key => "demo"
   }
}

3: Run the shipping config and type "hello redis"

[root@linux-node1 ~]# /opt/logstash/bin/logstash -f redis-out.conf 
Settings: Default filter workers: 1
Logstash startup completed
hello redis

4: View the data in Redis

[root@linux-node1 ~]# redis-cli -h 192.168.56.11
192.168.56.11:6379> info
### at the very bottom
# Keyspace
db6:keys=1,expires=0,avg_ttl=0
### it shows db6
192.168.56.11:6379> select 6
OK
192.168.56.11:6379[6]> keys *
1) "demo"
192.168.56.11:6379[6]> LINDEX demo -1
"{\"message\":\"hello redis\",\"@version\":\"1\",\"@timestamp\":\"2016-01-13T16:23:23.810Z\",\"host\":\"linux-node1.example.com\"}"
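The value stored in Redis is just the event serialized as JSON, so it round-trips cleanly. Decoding the exact string LINDEX returned above:

```python
import json

# The list entry exactly as LINDEX demo -1 returned it above.
raw = ('{"message":"hello redis","@version":"1",'
       '"@timestamp":"2016-01-13T16:23:23.810Z",'
       '"host":"linux-node1.example.com"}')

event = json.loads(raw)
print(event["message"], event["host"])
```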

5: Type some more random data

[root@linux-node1 ~]# /opt/logstash/bin/logstash -f redis-out.conf 
Settings: Default filter workers: 1
Logstash startup completed
hello redis
fa
fasd
fasdfwe
tgsdw
ds
f
ag
we
d

ggr
e
qer
gqer
grfdgdf
fdasvf
rwetgqer
gddgdfa
dfagag
4tq
qtrqfds
g3qgfd
fgfdsfgd
gqerngjh

6: Check in Redis

#### check the list length in Redis
192.168.56.11:6379[6]> LLEN demo
(integer) 25

7: Ship the contents of Redis into ES

[root@linux-node1 ~]# cat redis-in.conf 
input { 
    redis {
      host => "192.168.56.11"
      port => "6379"
      db => "6"
      data_type => "list"
      key => "demo"
   }
}

output {
    elasticsearch {
      hosts => ["192.168.56.11:9200"]
      index => "redis-in-%{+YYYY.MM.dd}"
    }
}

8: Watch in Redis as the data is drained

192.168.56.11:6379[6]> LLEN demo
(integer) 25
192.168.56.11:6379[6]> LLEN demo
(integer) 24
192.168.56.11:6379[6]> LLEN demo
(integer) 11
192.168.56.11:6379[6]> 
192.168.56.11:6379[6]> LLEN demo
(integer) 0
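With data_type => "list", the shipper appends events to the key and the indexer pops them off, which is why LLEN falls to zero above. A toy Python model of that queue behavior (a deque standing in for the Redis list; the push-to-tail, pop-from-head FIFO order is an assumption of this sketch):

```python
from collections import deque

queue = deque()                 # stands in for the Redis list "demo"

def rpush(event):               # shipper side: append to the tail
    queue.append(event)

def lpop():                     # indexer side: pop from the head (FIFO)
    return queue.popleft() if queue else None

for i in range(3):
    rpush(f"event-{i}")

drained = []
while (e := lpop()) is not None:
    drained.append(e)

print(drained, len(queue))
```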

Image

Image

9: Shipping all the logs into Redis

[root@linux-node1 ~]# cat shipper.conf 
input {
    file {
      path => "/var/log/messages"
      type => "system"
      start_position => "beginning"
    }

    file {
       path => "/var/log/elasticsearch/caoxiaojian.log"
       type => "es-error" 
       start_position => "beginning"
       codec => multiline {
           pattern => "^\["
           negate => true
           what => "previous"
       }
    }
    file {
       path => "/var/log/nginx/access_json.log"
       codec => json
       start_position => "beginning"
       type => "nginx-log"
    }
    syslog {
        type => "system-syslog"
        host => "192.168.56.11"
        port => "514"
    }

}


output {
   if [type] == "system"{
     redis {
        host => "192.168.56.11"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "system"
     }
   }

    if [type] == "es-error"{
      redis {
        host => "192.168.56.11"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "demo"
        }
     }
    if [type] == "nginx-log"{    
       redis {
          host => "192.168.56.11"
          port => "6379"
          db => "6"
          data_type => "list"
          key => "nginx-log"
       }
    }
    if [type] == "system-syslog"{
       redis {
          host => "192.168.56.11"
          port => "6379"
          db => "6"
          data_type => "list"
          key => "system-syslog"
       }    
     }
}

After starting it, check in Redis

[root@linux-node1 ~]# /opt/logstash/bin/logstash -f shipper.conf 
Settings: Default filter workers: 1
Logstash startup completed

[root@linux-node1 ~]# redis-cli -h 192.168.56.11
192.168.56.11:6379> select 6
OK
192.168.56.11:6379[6]> keys *
1) "demo"
2) "system"
192.168.56.11:6379[6]> keys *
1) "nginx-log"
2) "demo"
3) "system"
192.168.56.11:6379[6]> keys *
1) "nginx-log"
2) "demo"
3) "system"

Open another terminal and add some log entries

[root@linux-node1 ~]# logger "12325423"
[root@linux-node1 ~]# logger "12325423"
[root@linux-node1 ~]# logger "12325423"
[root@linux-node1 ~]# logger "12325423"
[root@linux-node1 ~]# logger "12325423"
[root@linux-node1 ~]# logger "12325423"
[root@linux-node1 ~]# logger "12325423"

More keys appear:

192.168.56.11:6379[6]> keys *
1) "system-syslog"
2) "nginx-log"
3) "demo"
4) "system"

In fact any node can read the data from Redis into ES; below, on node2, we read the data from Redis into ES.

[root@linux-node2 ~]# cat file.conf 
input {
     redis {
        type => "system"
        host => "192.168.56.11"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "system"
     }

      redis {
        type => "es-error"
        host => "192.168.56.11"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "es-error"
        }
       redis {
          type => "nginx-log"
          host => "192.168.56.11"
          port => "6379"
          db => "6"
          data_type => "list"
          key => "nginx-log"
       }
       redis {
          type => "system-syslog"
          host => "192.168.56.11"
          port => "6379"
          db => "6"
          data_type => "list"
          key => "system-syslog"
       }    

}


output {

    if [type] == "system"{
        elasticsearch {
           hosts => ["192.168.56.11:9200"]
           index => "system-%{+YYYY.MM.dd}"
        }
    }

    if [type] == "es-error"{
        elasticsearch {
           hosts => ["192.168.56.11:9200"]
           index => "es-error-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx-log"{
        elasticsearch {
           hosts => ["192.168.56.11:9200"]
           index => "nginx-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "system-syslog"{
        elasticsearch {
           hosts => ["192.168.56.11:9200"]
           index => "system-syslog-%{+YYYY.MM.dd}"
        }
    }
}

Check (note: shipper.conf pushed the es-error events onto the key demo, while this indexer reads the key es-error, so demo is left behind and has to be deleted by hand)

192.168.56.11:6379[6]> keys *
1) "system-syslog"
2) "nginx-log"
3) "demo"
4) "system"
192.168.56.11:6379[6]> keys *
1) "demo"
192.168.56.11:6379[6]> del demo
(integer) 1
192.168.56.11:6379[6]> keys *
(empty list or set)

Meanwhile, check Kibana: all the logs are there.

You can run this to generate Nginx log traffic:
[root@linux-node1 ~]# ab -n10000 -c1 http://192.168.56.11/

You can also run multiple Redis instances feeding into ES.
