Using ELK, Part 3: Logstash

1. Command-line input and output

  1) Command-line output:

    /application/elk/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'

  Notes:

    a. stdin{} [standard input]

    b. stdout{} [standard output]

  2) Display events in a structured, JSON-like form; note that in Logstash, assignment (the equals sign) is written as =>

    /application/elk/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug} }'
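    For example, typing hello at the prompt produces output roughly like this with the rubydebug codec (timestamp and host name are illustrative):

{
       "message" => "hello",
      "@version" => "1",
    "@timestamp" => "2018-05-20T08:00:00.000Z",
          "host" => "elk-node1"
}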

  3) Output to Elasticsearch

    a. To write events to ES under a custom index name that embeds the current date, the manage_template => true parameter must be enabled; if you use Logstash's default logstash-%{+YYYY.MM.dd} index, the parameter can be left alone. This issue cost me an entire afternoon.

      /application/elk/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.30.41:9200"] index => "wohaoshuai-%{+YYYY.MM.dd}" manage_template => true } }'

    If manage_template => true is omitted, the following error is reported:

      [406] {"error":"Content-Type header [text/plain; charset=ISO-8859-1] is not supported","status":406} {:class=>"Elasticsearch::Transport::Transport::Errors::NotAcceptable", :level=>:error}

     b. If the index simply has a fixed, self-chosen name, the manage_template parameter is not needed.

      /application/elk/logstash/bin/logstash -e 'input { stdin{ } } output { elasticsearch { hosts => ["192.168.30.41:9200"] index => "wohaoshuaitest"} }'

  4) Output both to ES and to the screen:

    /application/elk/logstash/bin/logstash -e 'input { stdin{ } } output { stdout { codec => rubydebug } elasticsearch { hosts => ["192.168.30.41:9200"] index => "wohaoshuaitest"} }'

  5) To drop an index and have its logs re-collected from scratch, the corresponding sincedb records must be deleted as well:

    rm -rf /application/elk/logstash/data/plugins/inputs/file/.sincedb_*

  6) Nginx log format definition:

    log_format access_log_json '{"user_ip":"$http_x_real_ip","lan_ip":"$remote_addr","log_time":"$time_iso8601","user_req":"$request","http_code":"$status","body_bytes_sent":"$body_bytes_sent","req_time":"$request_time","user_ua":"$http_user_agent"}'; 
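    For nginx to actually write logs in this format, the log_format must be referenced by an access_log directive; the path below matches the Logstash file input used in section 7.7:

    access_log /var/log/nginx/access_log_json.log access_log_json;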

  7) filter

    a. grok: parses and filters the events we take in.

      Grok splits out fields by matching against regular expressions, and it ships with a set of predefined patterns; for Logstash 5.6.1 the pattern files live under /application/elk/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns

      A simple grok example:

The line to be matched is: 55.3.244.1 GET /index.html 15824 0.043

input {
  file {
    path => "/var/log/http.log"
  }
}
filter {
  grok {
    # Match the message against the predefined patterns; each captured piece
    # is stored in a field named after the part following the colon.
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}
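For the sample line above, the grok filter yields an event with roughly these fields (rubydebug rendering, metadata fields omitted; grok captures values as strings unless a conversion is specified):

{
     "message" => "55.3.244.1 GET /index.html 15824 0.043",
      "client" => "55.3.244.1",
      "method" => "GET",
     "request" => "/index.html",
       "bytes" => "15824",
    "duration" => "0.043"
}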

    b. To collect HTTP logs, the Apache access-log patterns bundled with the software are sufficient; they are defined in /application/elk/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns/httpd

      The HTTPD_COMBINEDLOG pattern in that file references the HTTPD_COMMONLOG pattern and then the QS pattern twice; QS itself is defined in grok-patterns in the same directory.
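      In that version of logstash-patterns-core the two definitions look approximately as follows (check the httpd file in your installation for the authoritative text):

HTTPD_COMMONLOG %{IPORHOST:clientip} %{HTTPDUSER:ident} %{HTTPDUSER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
HTTPD_COMBINEDLOG %{HTTPD_COMMONLOG} %{QS:referrer} %{QS:agent}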

    c. An online grok debugger is available at http://grokdebug.herokuapp.com (blocked in mainland China; a proxy is needed)

 2. Company architecture design

  1) Run one Kibana instance on every ES node
  2) Each Kibana connects only to its own local ES
  3) Nginx in front providing load balancing + ip_hash + authentication + ACL (a minimal sketch follows this list)
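  A minimal nginx sketch of that front end, assuming three Kibana instances on their default port 5601 and an htpasswd file prepared in advance (all addresses, paths, and the trusted subnet are illustrative):

upstream kibana_servers {
    ip_hash;                            # pin each client IP to one backend
    server 192.168.30.41:5601;
    server 192.168.30.42:5601;
    server 192.168.30.43:5601;
}

server {
    listen 80;
    allow 192.168.30.0/24;              # ACL: only the trusted subnet
    deny  all;
    auth_basic           "Kibana";      # basic authentication
    auth_basic_user_file /etc/nginx/htpasswd;
    location / {
        proxy_pass http://kibana_servers;
    }
}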

 3. rsyslog notes

  1) The system log configuration file is /etc/rsyslog.conf

  2) A - in front of a log path in the configuration file means writes are buffered rather than flushed to the file immediately; this trick shows up in many system-tuning guides. For example, the stock CentOS entry for the mail log:
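    # from /etc/rsyslog.conf: the leading - enables buffered (asynchronous) writes
    mail.*                                                  -/var/log/maillog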

    

  3) To enable forwarding of system logs, do the following:

    a. sed -i 's/#*.* @@remote-host:514/*.* @@192.168.30.42:514/g'  /etc/rsyslog.conf

    b. systemctl restart rsyslog

  4) To generate a system log entry by hand:

    logger hehe

4. TCP log collection

  1) Ways to send a message to a TCP port (install nc first):

    yum install -y nc

    a. Method 1:

      echo "wohaoshuai" | nc 192.168.56.12 6666

     b. Method 2:

      nc 192.168.30.42 6666 < /etc/resolv.conf

    c. Method 3: bash's TCP pseudo-device:

      echo "wohaoshuai" > /dev/tcp/192.168.30.42/6666

5. Architecture for collecting HTTP logs

6. Requirements and approach for log collection with ELK

  1) Requirements analysis:
    a. Access logs: Apache access logs, nginx access logs, Tomcat (file input -> filter)
    b. Error logs: error log, Java logs; collected as-is, except Java exceptions need multiline handling
    c. System logs: /var/log/*, collected via syslog/rsyslog
    d. Runtime logs: written by the applications themselves (file input, JSON format)
    e. Network logs: firewall, switch, and router logs, via syslog

  2) Standardization: where logs live (/application/logs), what format they use (JSON), naming rules (access_log, error_log, runtime_log), and how they are rotated: daily or hourly; access and error logs are rotated via crontab, as is runtime_log; all raw text is rsynced to the NAS (file server), and local copies older than three days are deleted. A sketch of such a job follows.
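  A minimal shell sketch of that rotation policy, assuming the paths above and a mounted NAS (the script name, NAS mount point, and schedule are all assumptions):

#!/bin/bash
# /etc/cron.daily/rotate-app-logs (hypothetical script name)
LOG_DIR=/application/logs
NAS_DIR=/mnt/nas/logs                     # assumed NAS mount point
DATE=$(date +%F)

# rotate: move the current logs aside so the services start fresh files
for f in "$LOG_DIR"/access_log "$LOG_DIR"/error_log "$LOG_DIR"/runtime_log; do
    [ -f "$f" ] && mv "$f" "$f.$DATE"
done

# archive everything to the NAS, then purge local copies older than 3 days
rsync -a "$LOG_DIR"/ "$NAS_DIR"/
find "$LOG_DIR" -name '*_log.*' -mtime +3 -delete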

  3) Tooling: how to implement the collection with logstash

  4) If a redis list is used as the ELK stack's message queue, monitor the length of every list key with llen key_name, as sketched below.

    a. Alert according to your actual volume, e.g. when the length exceeds 100,000.
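  A minimal monitoring sketch under those assumptions (host, db, key, threshold, and mail recipient are illustrative):

#!/bin/bash
# alert when the redis queue backs up beyond 100,000 entries
LEN=$(redis-cli -h 192.168.30.42 -n 6 llen apache-accesslog)
if [ "$LEN" -gt 100000 ]; then
    echo "redis list apache-accesslog length is $LEN" | mail -s "ELK queue alert" ops@example.com
fi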

7. Corresponding logstash configuration files

  1) stdin debugging

input{
        stdin{}


}

filter{


}

output{
        #elasticsearch plugin
        elasticsearch{
                hosts => ["192.168.30.41:9200"]
                index => "log-%{+YYYY.MM.dd}"
                manage_template => true

        }
        stdout{
                codec => rubydebug
        }

}

  2) file plugin

input{
        file{
        path => ["/var/log/messages","/var/log/secure"]
        #type => "system-log"
        start_position => "beginning"
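        # start_position only applies the first time a file is seen; afterwards
        # the read offset is tracked in the sincedb file (see section 1.5)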
}
}

filter{


}

output{
        #elasticsearch plugin
        elasticsearch{
                hosts => ["192.168.30.41:9200"]
                index => "system-log-%{+YYYY.MM.dd}"
                manage_template => true

        }
        stdout{
                codec => rubydebug
        }

}

  3) Branching on type

input{
        file{
        path => ["/var/log/messages","/var/log/secure"]
        type => "system-log"
        start_position => "beginning"
}
        file{
        path => ["/application/elk/elasticsearch/logs/elk-elasticsearch.log"]
        type => "es-log"
        start_position => "beginning"
}
}

filter{


}

output{
        #elasticsearch plugin
        if [type] == "system-log"
        {
                elasticsearch{
                        hosts => ["192.168.30.41:9200"]
                        index => "system-log-%{+YYYY.MM.dd}"
                        manage_template => true

                }
        }

        if [type] == "es-log"
        {
                elasticsearch{
                        hosts => ["192.168.30.41:9200"]
                        index => "es-log-%{+YYYY.MM.dd}"
                        manage_template => true
                }
        }
        stdout{
                codec => rubydebug
        }

}

  4) Collect every log under a directory

input{
        file{
        path => ["/var/log/messages","/var/log/secure"]
        type => "system-log"
        start_position => "beginning"
}
        file{
        path => ["/application/elk/elasticsearch/logs/elk-elasticsearch.log"]
        type => "es-log"
        start_position => "beginning"
}
        file{
        path => ["/application/elk/elasticsearch/logs/**/*.log"]
        type => "docker-log"
        start_position => "beginning"
}
}

filter{


}

output{
        #elasticsearch plugin
        if [type] == "system-log"
        {
                elasticsearch{
                        hosts => ["192.168.30.41:9200"]
                        index => "system-log-%{+YYYY.MM.dd}"
                        manage_template => true

                }
        }

        if [type] == "es-log"
        {
                elasticsearch{
                        hosts => ["192.168.30.41:9200"]
                        index => "es-log-%{+YYYY.MM.dd}"
                        manage_template => true
                }
        }
        if [type] == "docker-log"
        {
                elasticsearch{
                        hosts => ["192.168.30.41:9200"]
                        index => "docker-log-%{+YYYY.MM.dd}"
                        manage_template => true
                }
        }
        stdout{
                codec => rubydebug
    }
}

  5) Multiline matching and merging

input{
        stdin {
                codec => multiline
                        {
                                pattern => "^\["   # the regex each line is tested against
                                negate => true     # true: the lines that do NOT match are the ones merged
                                what => "previous" # merge them into the previous line; "next" would merge into the following line
                        }
                }
}

filter{


}

output{
        stdout{
                codec => rubydebug
        }

}
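  To see what the multiline codec above does, feed it a hypothetical log fragment:

[2018-05-20 10:00:01] ERROR something broke
java.lang.NullPointerException
        at com.example.Demo.main(Demo.java:10)
[2018-05-20 10:00:02] INFO next event

  Only the bracketed lines match ^\[; with negate => true and what => "previous", the two stack-trace lines are folded into the first event, so four raw lines become two events.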

  6) Putting it all together and writing to ES

input{
        file{
        path => ["/var/log/messages","/var/log/secure"]
        type => "system-log"
        start_position => "beginning"
}
        file{
        path => ["/application/elk/elasticsearch/logs/elk-elasticsearch.log"]
        type => "es-log"
        start_position => "beginning"
}
        file{
        path => ["/application/elk/elasticsearch/logs/containers/**/*.log"]
        type => "docker-log"
        start_position => "beginning"
        codec => multiline
                 {
                        pattern => "^\{"   # the regex each line is tested against
                        negate => true     # true: lines that do NOT match are merged
                        what => "previous" # into the previous line; "next" would merge into the following line
                }

}
}

filter{


}

output{
        #elasticsearch plugin
        if [type] == "system-log"
        {
                elasticsearch{
                        hosts => ["192.168.30.41:9200"]
                        index => "system-log-%{+YYYY.MM.dd}"
                        manage_template => true

                }
        }

        if [type] == "es-log"
        {
                elasticsearch{
                        hosts => ["192.168.30.41:9200"]
                        index => "es-log-%{+YYYY.MM.dd}"
                        manage_template => true
                }
        }
        if [type] == "docker-log"
        {
                elasticsearch{
                        hosts => ["192.168.30.41:9200"]
                        index => "docker-log-%{+YYYY.MM.dd}"
                        manage_template => true
                }
        }
        stdout{
                codec => rubydebug
        }
}

 

   7) Collect nginx logs and write them to ES as JSON; the nginx log format is defined in section 1.6 above

input{
        file{
        path => ["/var/log/nginx/access_log_json.log"]
        start_position => "beginning"
        codec => "json"
        type => "nginx-log"
}
}

filter{


}

output{
        if [type] == "nginx-log"
        {
                elasticsearch{
                        hosts => ["192.168.30.41:9200"]
                        index => "nginx-log-%{+YYYY.MM.dd}"
                        manage_template => true
                }
        }
        stdout{
                codec => rubydebug
        }

}

  8) Collect system logs (the syslog input; binding port 514 requires running Logstash as root)

input{
        syslog{
        type => "system-syslog"
        port => 514

        }
}

filter{


}

output{
        elasticsearch{
        hosts => ["192.168.30.41:9200"]
        index => "system-syslog-%{+YYYY.MM}"
}
        stdout{
                codec => rubydebug
        }
}

  9) Collect TCP logs

input{
        tcp{
                type => "tcp"
                port => "6666"
                mode => "server" # the other possible value is "client"
}
}

filter{


}

output{

        stdout{
                codec => rubydebug
        }
}

  10) filter: matching and extracting fields

input{
        stdin{
                # input line: 55.3.244.1 GET /index.html 15824 0.043
                }

}

filter{
  grok {
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}

output{

        stdout{
                codec => rubydebug
        }
}

  11) Match HTTP logs with Logstash's bundled patterns

input{
        file {
                type => "http-log"
                path => "/var/log/httpd/access_log"
                start_position => "beginning"
}
}

filter{
        grok {
                match => {"message" => "%{HTTPD_COMBINEDLOG}" }
}
}

output{
        if [type] == "http-log"
        {
                elasticsearch{
                        hosts => ["192.168.30.41:9200"]
                        index => "http-log-%{+YYYY.MM.dd}"
                        manage_template => true
                }
        }

        stdout{
                codec => rubydebug
        }
}

  12) Ship stdin input to redis

input{
        stdin{}
}

output{
        redis{
                host => "192.168.30.42"
                port => "6379"
                db => "6"
                data_type => "list"
                key => "demo"
}
        stdout{
                codec => rubydebug
        }
}
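  After typing a few lines you can verify from the redis side that they arrived (db 6 and key demo as configured above; the count shown is illustrative):

redis-cli -h 192.168.30.42 -n 6 llen demo
(integer) 1

  Each event is stored in the list as a single JSON string.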

  13) Collect HTTP logs into redis

input{
        file {
                type => "http-log"
                path => "/var/log/httpd/access_log"
                start_position => "beginning"
}
}

output{
        redis{
                host => "192.168.30.42"
                port => "6379"
                db => "6"
                data_type => "list"
                key => "apache-accesslog"
}
        stdout{
                codec => rubydebug
        }
}

  14) Pull logs from redis into ES

input{
        redis{
                host => "192.168.30.42"
                port => "6379"
                db => "6"
                data_type => "list"
                key => "apache-accesslog"
}
}

filter{
        grok {
                match => {"message" => "%{HTTPD_COMBINEDLOG}" }
}
}


output{

         elasticsearch{
                        hosts => ["192.168.30.41:9200"]
                        index => "redis-log-%{+YYYY.MM.dd}"
                        manage_template => true
                }

        stdout{
                codec => rubydebug
        }
}