ELK Stack - Hands-On, Part 2

1. Kibana visualizations

  • 1 Install a Kibana on every ES node.
  • 2 Each Kibana connects to its own local ES.
  • 3 A front-end Nginx handles login authentication.

1.1 Adding a chart

Add a pie chart that counts status codes in the Nginx logs.

(screenshots: steps for building the status-code pie chart in Kibana)

1.2 Adding a monitoring dashboard

(screenshots: assembling the dashboard from saved visualizations)

2. Logstash in practice: the syslog input plugin

https://www.elastic.co/guide/en/logstash/2.3/plugins-inputs-syslog.html

Note:

  • 1 The syslog input performs poorly, but it takes several thousand hosts to actually bring it down.
  • 2 syslog listens on port 514 by default; point Logstash's syslog input at port 514 and the system logs come in.
  • 3 The syslog input can also collect logs from switches, routers, and similar network devices.

2.1 Configure the system rsyslog.conf

[root@elk01-node2 ~]# vim /etc/rsyslog.conf 
# Edit as follows (around line 90):
 90 *.* @@10.0.0.204:514

# first *  : facility (log type)
# second * : severity (log level)
# @@ forwards over TCP; a single @ would use UDP

# Restart rsyslog after the change.
[root@elk01-node2 ~]# systemctl restart rsyslog

2.2 Write a config to collect system logs

2.2.1 Print to stdout first

[root@elk01-node2 conf.d]# cat rsyslog.conf 
input {
    syslog {
        type => "rsyslog"
        port => 514
    }
}

filter {
}

output {
    stdout {
        codec => "rubydebug"
    }
}

Start it in the foreground, then test:

[root@elk01-node2 conf.d]# /opt/logstash/bin/logstash -f ./rsyslog.conf 
Settings: Default pipeline workers: 4
Pipeline main started
{
           "message" => "[system] Activating service name='org.freedesktop.problems' (using servicehelper)\n",
          "@version" => "1",
        "@timestamp" => "2017-05-04T19:31:44.000Z",
              "type" => "rsyslog",
              "host" => "10.0.0.204",
          "priority" => 29,
         "timestamp" => "May  4 15:31:44",
         "logsource" => "elk01-node2",
           "program" => "dbus",
               "pid" => "900",
          "severity" => 5,
          "facility" => 3,
    "facility_label" => "system",
    "severity_label" => "Notice"
...........
...........
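
The priority field in the events above encodes both facility and severity in a single number (RFC 3164: PRI = facility * 8 + severity), which is exactly how Logstash derives the facility/severity fields shown. A quick check in Python:

```python
# RFC 3164 syslog priority: PRI = facility * 8 + severity.
def split_priority(pri):
    """Return (facility, severity) for a syslog PRI value."""
    return divmod(pri, 8)

# The dbus event above had priority 29 -> facility 3 ("system"), severity 5 ("Notice")
print(split_priority(29))   # -> (3, 5)
```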

You can generate test log entries with the logger command.

[root@elk01-node2 conf.d]# logger wangfei

.......
}
{
           "message" => "wangfei\n",
          "@version" => "1",
        "@timestamp" => "2017-05-04T19:43:19.000Z",
              "type" => "rsyslog",
              "host" => "10.0.0.204",
          "priority" => 13,
         "timestamp" => "May  4 15:43:19",
         "logsource" => "elk01-node2",
           "program" => "root",
          "severity" => 5,
          "facility" => 1,
    "facility_label" => "user-level",
    "severity_label" => "Notice"
}

2.2.2 Ship to ES

[root@elk01-node2 conf.d]# cat rsyslog.conf 
input {
    syslog {
        type => "rsyslog"
        port => 514
    }
}

filter {

}

output {
    #stdout {
    #    codec => "rubydebug"
    #}

    if [type] == "rsyslog" {
        elasticsearch {
            hosts => ["10.0.0.204:9200"]
            # Index by month, not by day: daily indices would pile up fast, and system logs don't need a fresh index every day.
            index => "rsyslog-%{+YYYY.MM}"  
        }
    }

}
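
The %{+YYYY.MM} in the index name is a Joda-time date pattern that Logstash applies to each event's @timestamp. A rough Python equivalent of the naming (a sketch, not Logstash's actual code):

```python
from datetime import datetime, timezone

def monthly_index(prefix, ts):
    """Mimic Logstash's index => "prefix-%{+YYYY.MM}" naming."""
    return "%s-%s" % (prefix, ts.strftime("%Y.%m"))

ts = datetime(2017, 5, 4, 19, 31, 44, tzinfo=timezone.utc)
print(monthly_index("rsyslog", ts))   # -> rsyslog-2017.05
```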

PS: remember to generate test entries with logger first; only then will the new index appear in ES.


2.2.3 Add the index in Kibana


3. Logstash in practice: the tcp input plugin

https://www.elastic.co/guide/en/logstash/2.3/plugins-inputs-tcp.html

Use cases:

  • 1 Part of a log file failed to make it into ES and needs to be re-sent.
  • 2 The client has no Logstash installed, or you don't want to install one.

[root@elk01-node2 conf.d]# cat tcp.conf 
input {
    tcp {
         port => 6666 # the listening port; pick any free one
         type => "tcp"  
         mode => "server"  # the default is already "server"
    }
}

filter {
}

output {
    stdout {
           codec => rubydebug
    }
}

Use nc to send log lines to the Logstash TCP server:

Method 1: pipe into nc
[root@web01-node1 ~]# echo wf |nc 10.0.0.204 6666

Method 2: input redirection from a file
[root@web01-node1 ~]#  nc 10.0.0.204 6666 </etc/sysctl.conf 

Method 3: via bash's /dev/tcp pseudo-device
[root@web01-node1 ~]# echo "wangfei">/dev/tcp/10.0.0.204/6666
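
All three methods just open a TCP connection and write bytes. The same can be done from Python; this sketch spins up a throwaway local server standing in for the Logstash tcp input (an ephemeral localhost port instead of 6666):

```python
import socket, threading

received = []

def fake_logstash(server):
    """Stand-in for the Logstash tcp input: accept one connection, read everything."""
    conn, _ = server.accept()
    with conn:
        while chunk := conn.recv(4096):
            received.append(chunk)

server = socket.socket()
server.bind(("127.0.0.1", 0))        # ephemeral port instead of 6666
server.listen(1)
t = threading.Thread(target=fake_logstash, args=(server,))
t.start()

# equivalent of: echo wf | nc 10.0.0.204 6666
with socket.create_connection(server.getsockname()) as c:
    c.sendall(b"wf\n")
t.join()
server.close()
print(b"".join(received))            # -> b'wf\n'
```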


4. The Logstash grok filter plugin

Purpose: split the collected log lines into named fields.
For example, Nginx can write its logs as JSON, but Apache cannot, so grok is needed to parse them.

Notes:

  • 1 grok is very expensive and noticeably hurts performance.
  • 2 It is inflexible, unless you know Ruby.
  • 3 grok ships with many built-in regular expressions that you can call directly. They live in: /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.5/patterns/grok-patterns

https://www.elastic.co/guide/en/logstash/2.3/plugins-filters-grok.html

4.1 A small filter example

[root@elk01-node2 conf.d]# cat grok.conf 
input {
    stdin {}
}

filter {
    grok {
        match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
    }
}

output{
    stdout {
        codec => rubydebug
    }
}

Feed it this test line:

55.3.244.1 GET /index.html 15824 0.043

Result:

{
       "message" => "55.3.244.1 GET /index.html 15824 0.043",
      "@version" => "1",
    "@timestamp" => "2017-05-04T23:59:15.836Z",
          "host" => "elk01-node2.damaiche.org-204",
        "client" => "55.3.244.1",
        "method" => "GET",
       "request" => "/index.html",
         "bytes" => "15824",
      "duration" => "0.043"
}
{
       "message" => "",
      "@version" => "1",
    "@timestamp" => "2017-05-04T23:59:16.000Z",
          "host" => "elk01-node2.damaiche.org-204",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}
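
Grok patterns are just named regular expressions; the %{IP:client} %{WORD:method} ... pattern above maps roughly onto Python named groups (simplified: the real IP and URIPATHPARAM patterns are stricter). Note the second event above as well: input that does not match produces no fields and is tagged _grokparsefailure instead.

```python
import re

# Rough Python equivalents of the grok patterns used above (simplified).
PATTERN = re.compile(
    r"(?P<client>\d{1,3}(?:\.\d{1,3}){3}) "
    r"(?P<method>\w+) "
    r"(?P<request>\S+) "
    r"(?P<bytes>\d+) "
    r"(?P<duration>[\d.]+)"
)

m = PATTERN.match("55.3.244.1 GET /index.html 15824 0.043")
print(m.groupdict())
# -> {'client': '55.3.244.1', 'method': 'GET', 'request': '/index.html',
#     'bytes': '15824', 'duration': '0.043'}
```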

4.2 Collecting Apache logs

Print to stdout first:

[root@elk01-node2 conf.d]# cat grok.conf 
input {
    file {
         path => "/var/log/httpd/access_log"
         start_position => "beginning"
    }
}

filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}"}
    }
}

output{
    stdout {
        codec => rubydebug
    }
}

Generate test traffic:

[root@elk01-node2 httpd]# ab -n 10 http://10.0.0.204:8081/

Store it in ES:

[root@elk01-node2 conf.d]# cat grok.conf 
input {
    file {
         path => "/var/log/httpd/access_log"
         start_position => "beginning"
    }
}

filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}"}
    }
}

output{
    #stdout {
    #    codec => rubydebug
    #}
    elasticsearch {
        hosts => ["10.0.0.204:9200"]
        index => "httpd-log-%{+YYYY.MM.dd}"
    }
}

Start it in the foreground, then generate test traffic.

Result:

(screenshots: the httpd-log index viewed in ES/Kibana)

5. Scaling with a message queue

https://www.elastic.co/guide/en/logstash/2.3/deploying-and-scaling.html

Without a message queue, the moment ES goes down the whole pipeline is dead:
data -> logstash -> es

With an MQ added, the architecture is decoupled: even if ES goes down, the data is still sitting in Redis and nothing is lost:
data -> logstash -> mq -> logstash -> es
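
The Redis list in the middle behaves as a FIFO queue: the shipper pushes events onto the tail, the indexer pops them from the head, so events accumulate harmlessly while ES is down. An in-memory sketch of those semantics (a plain deque, no real Redis):

```python
from collections import deque

# In-memory sketch of a Redis list used as a queue: the shipper RPUSHes
# events onto the tail, the indexer pops from the head (FIFO), so ES can
# be down without losing data -- events simply pile up in the list.
queue = deque()

def rpush(event):          # shipper side (logstash redis output)
    queue.append(event)

def lpop():                # indexer side (logstash redis input)
    return queue.popleft() if queue else None

rpush('{"message": "GET / 200"}')
rpush('{"message": "GET /img 404"}')
print(lpop())   # -> {"message": "GET / 200"}  (oldest event first)
```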

The redis input plugin:
https://www.elastic.co/guide/en/logstash/2.3/plugins-inputs-redis.html#plugins-inputs-redis
The redis output plugin:
https://www.elastic.co/guide/en/logstash/2.3/plugins-outputs-redis.html

Roles:

web01-node1.damaiche.org-203 logstash (shipper, collects data)
elk01-node2.damaiche.org-204 logstash (indexer) + kibana + es + redis

5.1 Deploy Redis

Host: elk01-node2.damaiche.org-204

yum -y install redis

Edit the config file:

vim /etc/redis.conf

 61 bind 10.0.0.204 127.0.0.1
128 daemonize yes

Start Redis:

systemctl restart redis

Verify that Redis accepts writes:

redis-cli -h 10.0.0.204 -p 6379

10.0.0.204:6379> set name hehe
OK
10.0.0.204:6379> get name
"hehe"

5.2 Collecting Apache logs


5.2.1 Ship logs to Redis

Host: web01-node1.damaiche.org-203

[root@web01-node1 conf.d]# cat redis.conf 
input {
    stdin {}
    file {
         path => "/var/log/httpd/access_log"
         start_position => "beginning"
    }
}

output {
        redis {
                host => ['10.0.0.204']
                port => '6379'
                db => "6"
                key => "apache_access_log"
                data_type => "list"
        }
}

Log into Redis and check:

10.0.0.204:6379> info   # show detailed Redis info
......
# Keyspace
db0:keys=1,expires=0,avg_ttl=0
db6:keys=2,expires=0,avg_ttl=0

10.0.0.204:6379> select 6 # switch to database 6
OK

10.0.0.204:6379[6]> keys *  # list all keys; never run this in production
1) "apache_access_log"
2) "demo"

10.0.0.204:6379[6]> type apache_access_log # show the key's type
list

10.0.0.204:6379[6]> llen apache_access_log # show the list length
(integer) 10

10.0.0.204:6379[6]> lindex apache_access_log -1  # view the newest entry; useful when something looks wrong and you want the last log line
"{\"message\":\"10.0.0.204 - - [04/May/2017:21:07:17 -0400] \\\"GET / HTTP/1.0\\\" 403 4897 \\\"-\\\" \\\"ApacheBench/2.3\\\"\",\"@version\":\"1\",\"@timestamp\":\"2017-05-05T01:07:17.422Z\",\"path\":\"/var/log/httpd/access_log\",\"host\":\"elk01-node2.damaiche.org-204\"}"
10.0.0.204:6379[6]> 
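
The value returned by lindex is the event serialized as JSON (that is how the redis output stores events), so the escaped string above decodes straight back into its fields:

```python
import json

# The entry stored by the shipper is the event as a JSON string; the
# escaped value returned by `lindex` decodes back into its fields.
raw = ('{"message":"10.0.0.204 - - [04/May/2017:21:07:17 -0400] '
       '\\"GET / HTTP/1.0\\" 403 4897 \\"-\\" \\"ApacheBench/2.3\\"",'
       '"@version":"1","@timestamp":"2017-05-05T01:07:17.422Z",'
       '"path":"/var/log/httpd/access_log","host":"elk01-node2.damaiche.org-204"}')

event = json.loads(raw)
print(event["path"])        # -> /var/log/httpd/access_log
print(event["@timestamp"])  # -> 2017-05-05T01:07:17.422Z
```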

Start the shipper in the foreground.

5.2.2 Start a Logstash indexer to read from Redis

Host: elk01-node2.damaiche.org-204

First verify that reading from Redis works (don't write to ES yet):

[root@elk01-node2 conf.d]# cat indexer.conf 
input {
        redis {
                host => ['10.0.0.204']
                port => '6379'
                db => "6"
                key => "apache_access_log"
                data_type => "list"
        }
}

filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}"}
    }
}

output{
    stdout {
        codec => rubydebug
    }
}

Generate data:

[root@elk01-node2 conf.d]# ab -n 100 -c 1 http://10.0.0.203:8081/

Now check the length of the apache_access_log key in Redis:

10.0.0.204:6379[6]> llen apache_access_log
(integer) 100

Start the finished indexer.conf in the foreground, then check the length of apache_access_log again:

10.0.0.204:6379[6]> llen apache_access_log
(integer) 0

A length of 0 means all the data has been pulled out of Redis.


Now write the data into ES:

[root@elk01-node2 conf.d]# cat indexer.conf 
input {
        redis {
                host => ['10.0.0.204']
                port => '6379'
                db => "6"
                key => "apache_access_log"
                data_type => "list"
        }
}

filter {
   # If you collect multiple log files, you must branch on type here; otherwise every event hits this grok. grok is slow enough already, and without the guard the machine will grind to a halt.
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}"}
    }
}

output{
    elasticsearch {
        hosts => ["10.0.0.204:9200"]
        index => "apache-access-log-%{+YYYY.MM.dd}"
    }
}

Result:

(screenshots: the apache-access-log index viewed in ES/Kibana)

6. Project practice

6.1 Requirements analysis

Access logs: Apache access logs, Nginx access logs, Tomcat (file -> filter)
Error logs: Java logs.
System logs: /var/log/*, syslog/rsyslog
Application logs: written by the program itself (JSON format)
Network logs: firewalls, switches, routers, ...

Standardization: what format are the logs (JSON?), how are they named, where do they live (/data/logs/?), and how are they rotated: daily? hourly? with what tool? a script plus crontab?
Tooling: what is the Logstash collection scheme?

6.2 Implementation

6.2.1 Overview

Logs to collect: Nginx access logs, Apache access logs, ES logs, and the system messages log

Roles:
10.0.0.203 web01-node1.damaiche.org-203 logstash (shipper, collects data)
10.0.0.204 elk01-node2.damaiche.org-204 logstash (indexer) + kibana + es + redis

6.2.2 Collect logs on the client and store them in Redis

[root@web01-node1 conf.d]# cat shipper.conf
input{
    file {
        path => "/var/log/nginx/access.log"
        start_position => "beginning"
        type => "nginx-access01-log"
    }

    file {
        path => "/var/log/httpd/access_log"
        start_position => "beginning"
        type => "apache-access01-log"
    }

    file {
        path => "/var/log/elasticsearch/myes.log"
        start_position => "beginning"
        type => "myes01-log"
        codec => multiline {
            pattern => "^\["
            negate => "true"
            what => "previous"
       }
    }

    file {
        path => "/var/log/messages"
        start_position => "beginning"
        type => "messages01"
    }

}

output{
    if [type] == "nginx-access01-log" {
            redis {
                host => "10.0.0.204"
                port => "6379"
                db => "3"
                data_type => "list"
                key => "nginx-access01-log"
            }
    }

    if [type] == "apache-access01-log" {
            redis {
                host => "10.0.0.204"
                port => "6379"
                db => "3"
                data_type => "list"
                key => "apache-access01-log"
            }
    }

    if [type] == "myes01-log" {
            redis {
                host => "10.0.0.204"
                port => "6379"
                db => "3"
                data_type => "list"
                key => "myes01-log"
            }
    }

    if [type] == "messages01" {
            redis {
                host => "10.0.0.204"
                port => "6379"
                db => "3"
                data_type => "list"
                key => "messages01"
            }
    }
}
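
The multiline codec on the ES log file merges any line that does not start with "[" (pattern => "^\[", negate => true) into the previous event (what => previous), which keeps Java stack traces attached to the log line that produced them. A minimal Python sketch of that grouping rule:

```python
import re

def join_multiline(lines, pattern=r"^\["):
    """Mimic the multiline codec above: a line NOT matching the pattern
    (negate => true) is appended to the previous event (what => previous)."""
    events, start = [], re.compile(pattern)
    for line in lines:
        if start.match(line) or not events:
            events.append(line)          # a new event begins
        else:
            events[-1] += "\n" + line    # continuation, e.g. a Java stack trace
    return events

log = [
    "[2017-05-04 19:31:44,000][INFO ][node] starting ...",
    "[2017-05-04 19:31:45,000][ERROR][index] failure",
    "java.lang.NullPointerException",
    "    at org.elasticsearch.Foo.bar(Foo.java:42)",
]
print(len(join_multiline(log)))   # -> 2  (the stack trace stays attached)
```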

[root@web01-node1 conf.d]# /etc/init.d/logstash start

Notes:

  • 1 After starting, watch Logstash's output for anything abnormal. Double-check the config file before starting; if it errors out, read the error log. In practice the cause is almost always a mistake in the config file.
  • 2 Remember to generate some logs.
  • 3 Logstash installed from RPM runs as the logstash user by default, which may lack permission to read some logs.
    There are two fixes:
    • 1 Add the logstash user to the root group.
    • 2 Run Logstash as root (edit /etc/init.d/logstash and change LS_USER to root).

Log into Redis to confirm that each key was created, and check its length.

info
....
# Keyspace
db0:keys=1,expires=0,avg_ttl=0
db3:keys=4,expires=0,avg_ttl=0
db6:keys=5,expires=0,avg_ttl=0

10.0.0.204:6379> select 3
OK

10.0.0.204:6379[3]> keys *
1) "messages01"
2) "myes01-log"
3) "apache-access01-log"
4) "nginx-access01-log"

10.0.0.204:6379[3]> llen messages01
(integer) 20596

10.0.0.204:6379[3]> llen myes01-log
(integer) 2336

10.0.0.204:6379[3]> llen apache-access01-log
(integer) 92657

10.0.0.204:6379[3]> llen nginx-access01-log
(integer) 100820

6.2.3 Start a Logstash indexer on the ES host to read from Redis

[root@elk01-node2 conf.d]# cat indexer.conf
input {
    redis {
        host => "10.0.0.204"
        port => "6379"
        db => "3"
        data_type => "list"
        key => "nginx-access01-log"
    }

    redis {
        host => "10.0.0.204"
        port => "6379"
        db => "3"
        data_type => "list"
        key => "apache-access01-log"
    }

    redis {
        host => "10.0.0.204"
        port => "6379"
        db => "3"
        data_type => "list"
        key => "myes01-log"
    }

    redis {
        host => "10.0.0.204"
        port => "6379"
        db => "3"
        data_type => "list"
        key => "messages01"
    }
}

filter {
    if [type] == "apache-access01-log" {
        grok {
            match => { "message" => "%{COMBINEDAPACHELOG}"}
        }
    }
}

output {
    if [type] == "nginx-access01-log" {
        elasticsearch {
            hosts => ["10.0.0.204:9200"]
            index => "nginx-access01-log-%{+YYYY.MM.dd}"
        }
    }

    if [type] == "apache-access01-log" {
        elasticsearch {
            hosts => ["10.0.0.204:9200"]
            index => "apache-access01-log-%{+YYYY.MM.dd}"
        }
    }

    if [type] == "myes01-log" {
        elasticsearch {
            hosts => ["10.0.0.204:9200"]
            index => "myes01-log-%{+YYYY.MM.dd}"
        } 
    }

    if [type] == "messages01" {
        elasticsearch {
            hosts => ["10.0.0.204:9200"]
            index => "messages01-%{+YYYY.MM}"
        } 
    }
}


If you use Redis lists as the ELK stack's message queue, monitor the length of every list key and alert when it passes a threshold that fits your workload, for example 100k entries.
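
A hypothetical sketch of that check (the key names and 100k threshold come from this document; in a real monitor the lengths dict would come from redis-py llen() calls, which are omitted here so the threshold logic runs standalone):

```python
# Hypothetical sketch of the list-length alerting described above. A real
# monitor would fill `lengths` from redis-py, e.g. {k: r.llen(k) for k in keys};
# the lengths are injected here so the logic is testable without a server.
THRESHOLD = 100_000   # e.g. alert above 100k queued events

def keys_over_threshold(lengths, threshold=THRESHOLD):
    """lengths: {key: list length} -> keys whose backlog is too large."""
    return [k for k, n in lengths.items() if n > threshold]

sample = {"nginx-access01-log": 100820, "messages01": 20596}
print(keys_over_threshold(sample))   # -> ['nginx-access01-log']
```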
