Hands-on: building an ELK + filebeat log analysis stack on CentOS 7 (for a Java WebLogic platform)

There are plenty of ELK tutorials online, but most of them cover fairly old versions and the material is scattered. This article records a complete ELK build based on the latest Elastic log analysis stack, with redis as a buffer, and uses it to analyse and process the Oracle WebLogic access logs and Java System.out logs of a production core system.
Following the official recommendation, filebeat now replaces the forwarder role of logstash. The roles in this build are therefore:
filebeat: monitors the log files and collects the data;
redis: buffers the log data;
logstash: parses and processes the log data;
elasticsearch: indexes and searches the log data;
kibana: visualisation

1. System environment

CentOS Linux release 7.2.1511

2. Filebeat + ELK packages

elasticsearch-5.1.1.rpm
filebeat-5.1.1-x86_64.rpm
kibana-5.1.1-x86_64.rpm
logstash-5.1.1.rpm    
redis-3.rpm
java-1.8-jdk
download url:https://www.elastic.co/
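
The packages are plain rpm files, so installation is straightforward. A minimal sketch, assuming the rpm files listed above have been downloaded into the current directory and that the JDK and redis come from the CentOS/EPEL repositories:

# Java 8 is required by elasticsearch and logstash
yum install -y java-1.8.0-openjdk

# redis from the EPEL repository
yum install -y redis

# the elastic stack packages downloaded from elastic.co
rpm -ivh elasticsearch-5.1.1.rpm logstash-5.1.1.rpm kibana-5.1.1-x86_64.rpm filebeat-5.1.1-x86_64.rpm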

3. Configuration

  • Filebeat.yml configuration file

The goal is to monitor WebLogic's access.log as well as the system's nohup output (java System.out.println data).
Log samples:

access.log

10.10.10.10 - - [11/一月/2017:09:24:15 +0800] "POST /hx/common/index.jsp HTTP/1.1" 200 41

nohup.out

2016-08-24 23:00:31,761 INFO com.xxx.utility.ExeSQL.__AW_getOneValue - ExecSQL : select xxx From yyy where no='00000000000000000000'
2016-08-240.000000000000000000000null
null
2016-08-24 23:00:31,764 INFO com.xxx.utility.ExeSQL.__AW_execSQL - ExecSQL : select xxx From yyyy where no='00000000000000000000'
CalType ===========null
#####calOneDuty:select xxx From yyyy where no=? and pno=mainpno
### BindSQL = select xxx From yyyy where no= '00000000000000000000'  and pno=mainpno
2016-08-24 23:00:31,770 INFO com.xxxx.utility.ExeSQL.__AW_execSQL -

/etc/filebeat/filebeat.yml

filebeat.prospectors:
    -
      input_type: log
      paths:
        - /pathto/weblogic/nohup.out
      encoding: gbk
      document_type: javaout
      fields:
        app_id: hxxxt
      multiline.pattern: '^(19|20)\d\d-(0[1-9]|1[012])-(0[1-9]|[12][0-9]|3[01]) [012][0-9]:[0-6][0-9]:[0-6][0-9]'
      multiline.negate: true
      multiline.match: after
      multiline.max_lines: 1000
      multiline.timeout: 60s
    -
      input_type: log
      paths:
        - /pathto/weblogic/access.log
      encoding: gbk
      document_type: httpdlog
      exclude_lines: ['\.(gif|css|js|ico) HTTP']
      fields:
        app_id: hxxt

output.logstash:   # when the system load is light, output directly to logstash
  hosts: ["localhost:5044"]
#  enabled: false
  pipelining: 0
  index: "filebeat"
output.redis:      # under heavy load, buffer in redis first and let logstash pull from it
  hosts: ["localhost:6379"]
  key: "filebeat"
  enabled: false   # output disabled
output.file:       # mainly used for debugging
  path: "/tmp"
  filename: filebeat.out
  number_of_files: 7
  rotate_every_kb: 10000
  enabled: false   # output disabled
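
Before enabling the service it is worth checking that the YAML parses. A small sketch using the filebeat 5.x command line (-configtest validates the configuration; -e with -d "publish" runs in the foreground and logs the events that would be published):

# validate /etc/filebeat/filebeat.yml
filebeat -configtest -c /etc/filebeat/filebeat.yml

# run in the foreground and print the events that would be shipped (handy together with output.file)
filebeat -e -c /etc/filebeat/filebeat.yml -d "publish"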
  • logstash configuration

logstash does not ship with a convenient wrapper script, so a small startup script was written to make it easier to use.

#!/bin/bash
# thin wrapper around the packaged logstash binary; extra command line options are passed through
/usr/share/logstash/bin/logstash \
    --path.settings /etc/logstash \
    --config.reload.automatic \
    "$@"
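
Assuming the script is saved as bin/logstash.sh (as used in step 4), and that path.config in /etc/logstash/logstash.yml points at /etc/logstash/conf.d, the pipeline files can be syntax-checked with logstash's standard --config.test_and_exit option before running it in the background:

# check the pipeline configuration only, then exit
bin/logstash.sh --config.test_and_exit

# start logstash in the background
nohup bin/logstash.sh &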

/etc/logstash/conf.d/redis.conf   # pipeline that reads the log events buffered in redis

input {
    redis {
        data_type => "list"     # how the logstash redis input consumes the key
        key => "filebeat"       # redis key (list) to listen on
        host => "127.0.0.1"     # redis address
        port => 6379            # redis port
        add_field => {          # pull the source hostname out of the filebeat JSON, otherwise the output host field is empty
            host => "%{[beat][hostname]}"
        }
    }
}
filter {}
output {
    stdout {}
}
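When the redis output is enabled in filebeat, it is easy to confirm that events really are queued before this pipeline drains them. A quick check with redis-cli, using the "filebeat" key configured above:

redis-cli -h 127.0.0.1 -p 6379 llen filebeat        # number of events currently buffered
redis-cli -h 127.0.0.1 -p 6379 lrange filebeat 0 0  # peek at the oldest queued event (JSON from filebeat)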

/etc/logstash/conf.d/beat.conf   # pipeline for the filebeat input; handles the Chinese month names in httpdlog

input {
    beats {
        port => "5044"
    }
}

filter {

  if [type] == "javaout" {
    grok {
       match => { "message" => "(%{TIMESTAMP_ISO8601:logdatetime} %{LOGLEVEL:level} %{JAVAFILE:class} - %{GREEDYDATA:logmessage})|%{GREEDYDATA:logmessage}" }
       remove_field => [ "message" ]
    }
    date {
      timezone => "Asia/Shanghai"
      match => ["logdatetime","yyyy-MM-dd HH:mm:ss,SSS"]
      remove_field => [ "logdatetime" ]
    }
  }
  if [type] == "httpdlog" {
    # replace the Chinese month names written under the zh_CN locale with English abbreviations
    # (November/December must come first, because "一月"/"二月" are substrings of "十一月"/"十二月")
      mutate { gsub => [
      "message","\u5341\u4E00\u6708","Nov",
      "message","\u5341\u4E8C\u6708","Dec",
      "message","\u4E00\u6708","Jan",
      "message","\u4E8C\u6708","Feb",
      "message","\u4E09\u6708","Mar",
      "message","\u56DB\u6708","Apr",
      "message","\u4E94\u6708","May",
      "message","\u516D\u6708","Jun",
      "message","\u4E03\u6708","Jul",
      "message","\u516B\u6708","Aug",
      "message","\u4E5D\u6708","Sep",
      "message","\u5341\u6708","Oct" ] }

    grok {
      match => { "message" => "%{COMMONAPACHELOG}" }
      remove_field => [ "message" ]
    }
    mutate {
      gsub => ["request", "\?.*$",""]
    }
    date {
      locale => "en"
      timezone => "Asia/Shanghai"
      match => ["timestamp","dd/MMM/yyyy:HH:mm:ss Z"]
      remove_field => [ "timestamp" ]
   }

  }
}

output {
elasticsearch {
        hosts => ["127.0.0.1:9200"]
        index => "%{type}-%{+YYYY.MM.dd}"
        document_type => "%{type}"
        #flush_size => 2000
        #idle_flush_time => 10
        #sniffing => true
        #template_overwrite => true
    }
file {  # mainly used for debugging
  path => "/tmp/logstash.out"
 }
}
  • elasticsearch, kibana and redis can be left on their default configuration

4. Start the services

systemctl start elasticsearch
systemctl start kibana
nohup bin/logstash.sh &
systemctl start redis
systemctl start filebeat
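
Once everything is running, a quick way to verify the whole chain is to check that the daily indices named by the %{type}-%{+YYYY.MM.dd} pattern are being created, and that kibana answers on its default port:

# list the indices created by logstash (expect javaout-... and httpdlog-... entries)
curl http://127.0.0.1:9200/_cat/indices?v

# kibana listens on 5601 by default
curl -I http://127.0.0.1:5601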

5. Log in to kibana and check the results

http://host-serverip:5601


6. Pitfalls

1. Many older application systems in China were built with the Chinese GBK character set (and a lot probably still are), i.e. LANG=zh_CN.GBK. That makes the application write dates in Chinese, such as the "11/一月/2017:09:24:15 +0800" seen in this access log. The Elastic stack works in UTF-8 internally and will not parse Chinese month names into a date type. After being stuck on this for quite a while (to the point of wanting to write a plugin), the problem was solved by having filebeat convert the character set to UTF-8 (encoding: gbk) and matching the month names with unicode regexp escapes in the mutate/gsub filter shown above.
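
To see what a GBK log actually contains before filebeat converts it, iconv (shipped with glibc on CentOS 7) can transcode a sample on the command line; the path below is just the access.log used earlier:

# view a GBK-encoded access log as UTF-8 to see the Chinese month names that filebeat will ship
iconv -f GBK -t UTF-8 /pathto/weblogic/access.log | tail -n 5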

