Building a Large-Scale Log Analysis Platform with ELK + Filebeat + Kafka + ZooKeeper

Log analysis platform architecture:

Architecture walkthrough (read from left to right, five layers in total):

Layer 1: data collection

On the far left is the business server cluster. Filebeat is installed on every server to collect logs and ship them to two logstash services.

Layer 2: data processing and caching

The logstash services format the logs they receive and write them into the local Kafka broker + ZooKeeper cluster.

Layer 3: data forwarding

A standalone Logstash node pulls data from the Kafka broker cluster in real time and forwards it to the ES DataNodes.

Layer 4: data persistence

The ES DataNodes write the received data to disk and build the indices.

Layer 5: data retrieval and presentation

ES Master + Kibana coordinate the ES cluster, handle search requests, and present the data.

1. Service planning:

Hostname            IP address    Services                  Role
ZooKeeper-Kafka-01  10.200.3.85   logstash+Kafka+ZooKeeper  data processing layer, data caching layer
ZooKeeper-Kafka-02  10.200.3.86   logstash+Kafka+ZooKeeper  data processing layer, data caching layer
ZooKeeper-Kafka-03  10.200.3.87   Kafka+ZooKeeper           data caching layer
logstash-to-es-01   10.200.3.88   logstash                  forwarding layer, logstash to ES
logstash-to-es-02   10.200.3.89   logstash                  forwarding layer, logstash to ES
Esaster-Kibana      10.200.3.90   ES Master+Kibana          data persistence and presentation
ES-DataNode01       10.200.3.91   ES DataNode               data persistence
ES-DataNode02       10.200.3.92   ES DataNode               data persistence
nginx-filebeat      10.20.9.31    filebeat                  collects nginx logs
java-filebeat       10.20.9.52    filebeat                  collects tomcat logs

2. Software download and installation:

Every server must run Java JDK 1.8 or later.

Elasticsearch download:
wget -P /usr/local/src/ https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.2.tar.gz
logstash download:
wget -P /usr/local/src/ https://artifacts.elastic.co/downloads/logstash/logstash-5.6.2.tar.gz
kibana download:
wget -P /usr/local/src/ https://artifacts.elastic.co/downloads/kibana/kibana-5.6.2-linux-x86_64.tar.gz
ZooKeeper + Kafka download:
#wget http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
#wget http://mirror.bit.edu.cn/apache/kafka/1.1.0/kafka_2.12-1.1.0.tgz
filebeat download:
#curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.6.8-x86_64.rpm
#rpm -vi filebeat-5.6.8-x86_64.rpm

3. Installing and configuring the services:

3.1 Elasticsearch cluster installation:

[root@Esaster-Kibana src]# tar -zxvf elasticsearch-5.6.2.tar.gz -C /usr/local/
[root@Esaster-Kibana src]# cd ..
[root@Esaster-Kibana local]# ln -s elasticsearch-5.6.2 elasticsearch

Create the service group and user (Elasticsearch refuses to run as root, so a dedicated user is required):

[root@Esaster-Kibana local]# groupadd elsearch
[root@Esaster-Kibana local]# useradd -g elsearch elsearch
[root@Esaster-Kibana local]# chown -R elsearch:elsearch  elasticsearch*

Tune the system limits; without these settings Elasticsearch will fail to start:

[root@Esaster-Kibana local]# vim /etc/security/limits.conf
# End of file
* soft nproc 65535
* hard nproc 65535
* soft nofile 65536
* hard nofile 65536
elsearch soft memlock unlimited
elsearch hard memlock unlimited 

Raise the maximum number of processes (threads):

[root@Esaster-Kibana ~]# vim /etc/security/limits.d/20-nproc.conf
*          soft    nproc     65536
root       soft    nproc     unlimited

[root@Esaster-Kibana ~]# vim /etc/sysctl.conf
vm.max_map_count=262144
fs.file-max=65536
[root@Esaster-Kibana ~]# sysctl -p

Edit the configuration file:

[root@Esaster-Kibana ~]# vim /usr/local/elasticsearch/config/elasticsearch.yml 
network.host: 10.200.3.90
http.port: 9200

Start the service (as the elsearch user):

[root@Esaster-Kibana ~]# su - elsearch
[elsearch@Esaster-Kibana ~]$ /usr/local/elasticsearch/bin/elasticsearch -d

Verify that it started successfully:

[elsearch@Esaster-Kibana ~]$ curl http://10.200.3.90:9200
{
  "name" : "AUtPyaG",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "5hFyJ-4TShaaevOp4q-TUg",
  "version" : {
    "number" : "5.6.2",
    "build_hash" : "57e20f3",
    "build_date" : "2017-09-23T13:16:45.703Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}

This completes a single-node Elasticsearch deployment. For a cluster, only elasticsearch.yml needs to change; add the options shown below.

Elasticsearch cluster deployment

10.200.3.90 ES Master+Kibana

10.200.3.91 ES DataNode

10.200.3.92 ES DataNode

1. Copy the Elasticsearch directory from 10.200.3.90 to the other two node servers; only the config file needs to be changed.

2. Master node configuration (10.200.3.90:9200):

[elsearch@Esaster-Kibana config]$ cat elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
# Cluster name; must be identical on every node of the same cluster
cluster.name: my-cluster
# Name of this node
node.name: node-1
# This node is eligible to be elected master
node.master: true
# This node can store data
node.data: true
path.data: /usr/local/elasticsearch/data
path.logs: /usr/local/elasticsearch/logs
bootstrap.memory_lock: true
# Bind address; may be IPv4 or IPv6
network.bind_host: 0.0.0.0
# Address other nodes use to communicate with this node
network.publish_host: 10.200.3.90
# Sets bind_host and publish_host in one option
network.host: 10.200.3.90
# Port used for inter-node communication
transport.tcp.port: 9300
# Whether to compress data exchanged over TCP between nodes
transport.tcp.compress: true
# Maximum size of an HTTP request body
http.max_content_length: 100mb
# Whether to expose the HTTP service
http.enabled: true
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.200.3.90:9300","10.200.3.91:9300", "10.200.3.92:9300"]
discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
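The `discovery.zen.minimum_master_nodes: 2` setting above follows the usual quorum rule for avoiding split brain: more than half of the master-eligible nodes, i.e. (n / 2) + 1. A quick sketch of the arithmetic (the helper function is ours, not an Elasticsearch API):

```python
def minimum_master_nodes(master_eligible: int) -> int:
    """Quorum rule used with Elasticsearch zen discovery:
    a master needs agreement from more than half of the
    master-eligible nodes."""
    return master_eligible // 2 + 1

# All three nodes in this deployment set node.master: true,
# so the quorum is 3 // 2 + 1 = 2 -- the value configured above.
print(minimum_master_nodes(3))  # -> 2
```

With this value, the cluster keeps operating after losing one node, but a network partition cannot produce two independent masters.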

3. Elasticsearch DataNode01 node (10.200.3.91):

[root@ES-DataNode01 config]# cat elasticsearch.yml | grep -v ^$
# ======================== Elasticsearch Configuration =========================
# Cluster name; must be identical on every node of the same cluster
cluster.name: my-cluster
# Name of this node
node.name: node-2
# This node is eligible to be elected master
node.master: true
# This node can store data
node.data: true
path.data: /usr/local/elasticsearch/data
path.logs: /usr/local/elasticsearch/logs
bootstrap.memory_lock: true
# Bind address; may be IPv4 or IPv6
network.bind_host: 0.0.0.0
# Address other nodes use to communicate with this node
network.publish_host: 10.200.3.91
# Sets bind_host and publish_host in one option
network.host: 10.200.3.91
# Port used for inter-node communication
transport.tcp.port: 9300
# Whether to compress data exchanged over TCP between nodes
transport.tcp.compress: true
# Maximum size of an HTTP request body
http.max_content_length: 100mb
# Whether to expose the HTTP service
http.enabled: true
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.200.3.90:9300","10.200.3.91:9300", "10.200.3.92:9300"]
discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"

4. Elasticsearch DataNode02 node (10.200.3.92):

[root@ES-DataNode02 config]# cat elasticsearch.yml | grep -v ^$
# ======================== Elasticsearch Configuration =========================
# Cluster name; must be identical on every node of the same cluster
cluster.name: my-cluster
# Name of this node
node.name: node-3
# This node is eligible to be elected master
node.master: true
# This node can store data
node.data: true
path.data: /usr/local/elasticsearch/data
path.logs: /usr/local/elasticsearch/logs
bootstrap.memory_lock: true
# Bind address; may be IPv4 or IPv6
network.bind_host: 0.0.0.0
# Address other nodes use to communicate with this node
network.publish_host: 10.200.3.92
# Sets bind_host and publish_host in one option
network.host: 10.200.3.92
# Port used for inter-node communication
transport.tcp.port: 9300
# Whether to compress data exchanged over TCP between nodes
transport.tcp.compress: true
# Maximum size of an HTTP request body
http.max_content_length: 100mb
# Whether to expose the HTTP service
http.enabled: true
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.200.3.90:9300","10.200.3.91:9300", "10.200.3.92:9300"]
discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"

5. Start the service on each node:

# /usr/local/elasticsearch/bin/elasticsearch -d

Run `curl http://10.200.3.92:9200` (and the equivalent on the other nodes) and check the output and the logs; if there are no errors, the deployment succeeded.

The Elasticsearch cluster deployment is complete.

6. Inspect the cluster state via the cluster API:

[root@ES-DataNode02 config]# curl -XGET 'http://10.200.3.90:9200/_cluster/health?pretty=true'
{
  "cluster_name" : "my-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

Installing the head plugin:

First install npm.

Reference: http://www.runoob.com/nodejs/nodejs-install-setup.html

Head plugin installation:

Reference: https://blog.csdn.net/gamer_gyt/article/details/59077189

If the head plugin cannot connect to an Elasticsearch 5.2.x cluster:

Reference: http://www.javashuo.com/article/p-uchqkksc-s.html

Access it at http://10.200.3.90:9100/

3.2 Installing Kibana 5.6 (10.200.3.90):

#tar -zxvf kibana-5.6.2-linux-x86_64.tar.gz -C /usr/local/
[root@Esaster-Kibana local]# ln -s kibana-5.6.2-linux-x86_64 kibana
[root@Esaster-Kibana local]# cd kibana/config/
[root@Esaster-Kibana config]# vim kibana.yml
server.port: 5601
server.host: "10.200.3.90"
server.name: "Esaster-Kibana"
elasticsearch.url: http://10.200.3.90:9200
Start the kibana service:
[root@Esaster-Kibana config]# /usr/local/kibana/bin/kibana &
Access it at:
http://10.200.3.90:5601/app/kibana

3.3 ZooKeeper + Kafka cluster deployment:

10.200.3.85  Kafka+ZooKeeper

10.200.3.86  Kafka+ZooKeeper

10.200.3.87  Kafka+ZooKeeper

ZooKeeper + Kafka download:

#wget http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
#wget http://mirror.bit.edu.cn/apache/kafka/1.1.0/kafka_2.12-1.1.0.tgz

1. The /etc/hosts entries on all three hosts must be identical:

# cat /etc/hosts
10.200.3.85 ZooKeeper-Kafka-01 
10.200.3.86 ZooKeeper-Kafka-02
10.200.3.87 ZooKeeper-Kafka-03

2. Install zookeeper

# On the master node:

[root@ZooKeeper-Kafka-01 src]# tar -zxvf zookeeper-3.4.10.tar.gz -C /usr/local/
[root@ZooKeeper-Kafka-01 src]# cd ..
[root@ZooKeeper-Kafka-01 local]# ln -s zookeeper-3.4.10 zookeeper
[root@ZooKeeper-Kafka-01 local]# cd zookeeper/conf/
[root@ZooKeeper-Kafka-01 conf]# cp zoo_sample.cfg zoo.cfg
[root@ZooKeeper-Kafka-01 conf]# vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
server.1=ZooKeeper-Kafka-01:2888:3888
server.2=ZooKeeper-Kafka-02:2888:3888
server.3=ZooKeeper-Kafka-03:2888:3888

3. Create the dataDir directory /tmp/zookeeper and the myid file, then copy zookeeper to the other two nodes:

# On the master node
[root@ZooKeeper-Kafka-01 conf]# mkdir /tmp/zookeeper
[root@ZooKeeper-Kafka-01 conf]# touch /tmp/zookeeper/myid
[root@ZooKeeper-Kafka-01 conf]# echo 1 > /tmp/zookeeper/myid
[root@ZooKeeper-Kafka-01 local]# scp -r zookeeper-3.4.10/ 10.200.3.86:/usr/local/
[root@ZooKeeper-Kafka-01 local]# scp -r zookeeper-3.4.10/ 10.200.3.87:/usr/local/

4. Create the directory and myid file on the two slave nodes:

# ZooKeeper-Kafka-02 node:
[root@ZooKeeper-Kafka-02 local]# ln -s zookeeper-3.4.10 zookeeper
[root@ZooKeeper-Kafka-02 local]# mkdir /tmp/zookeeper
[root@ZooKeeper-Kafka-02 local]# touch /tmp/zookeeper/myid
[root@ZooKeeper-Kafka-02 local]# echo 2 > /tmp/zookeeper/myid
# ZooKeeper-Kafka-03 node:
[root@ZooKeeper-Kafka-03 local]# ln -s zookeeper-3.4.10 zookeeper
[root@ZooKeeper-Kafka-03 local]# mkdir /tmp/zookeeper
[root@ZooKeeper-Kafka-03 local]# touch /tmp/zookeeper/myid
[root@ZooKeeper-Kafka-03 local]# echo 3 > /tmp/zookeeper/myid

5. Start zookeeper on each node and test:

[root@ZooKeeper-Kafka-01 zookeeper]# ./bin/zkServer.sh start
[root@ZooKeeper-Kafka-02 zookeeper]# ./bin/zkServer.sh start
[root@ZooKeeper-Kafka-03 zookeeper]# ./bin/zkServer.sh start

6. Check the status:

[root@ZooKeeper-Kafka-01 zookeeper]# ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@ZooKeeper-Kafka-02 zookeeper]# ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@ZooKeeper-Kafka-03 zookeeper]# ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower

The zookeeper cluster is now up!

Kafka cluster installation and configuration

[root@ZooKeeper-Kafka-01 src]# tar -zxvf kafka_2.12-1.1.0.tgz -C /usr/local/
[root@ZooKeeper-Kafka-01 src]# cd ..
[root@ZooKeeper-Kafka-01 local]# ln -s kafka_2.12-1.1.0 kafka

Edit the server.properties file:

[root@ZooKeeper-Kafka-01 local]# cd kafka/config/
[root@ZooKeeper-Kafka-01 config]# vim server.properties
broker.id=0
listeners=PLAINTEXT://ZooKeeper-Kafka-01:9092
advertised.listeners=PLAINTEXT://ZooKeeper-Kafka-01:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=5
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=24
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=ZooKeeper-Kafka-01:2181,ZooKeeper-Kafka-02:2181,ZooKeeper-Kafka-03:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
[root@ZooKeeper-Kafka-01 config]# 

Copy the kafka_2.12-1.1.0 directory to the other two nodes:

[root@ZooKeeper-Kafka-01 local]# scp -r kafka_2.12-1.1.0/ 10.200.3.86:/usr/local/
[root@ZooKeeper-Kafka-01 local]# scp -r kafka_2.12-1.1.0/ 10.200.3.87:/usr/local/

Then adjust broker.id, listeners, and advertised.listeners in server.properties on each node accordingly.

server.properties on the ZooKeeper-Kafka-02 host:

[root@ZooKeeper-Kafka-02 config]# cat server.properties
broker.id=1
listeners=PLAINTEXT://ZooKeeper-Kafka-02:9092
advertised.listeners=PLAINTEXT://ZooKeeper-Kafka-02:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=5
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=24
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=ZooKeeper-Kafka-01:2181,ZooKeeper-Kafka-02:2181,ZooKeeper-Kafka-03:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true

server.properties on the ZooKeeper-Kafka-03 host:

[root@ZooKeeper-Kafka-03 config]# cat server.properties
broker.id=2
listeners=PLAINTEXT://ZooKeeper-Kafka-03:9092
advertised.listeners=PLAINTEXT://ZooKeeper-Kafka-03:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=5
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=24
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=ZooKeeper-Kafka-01:2181,ZooKeeper-Kafka-02:2181,ZooKeeper-Kafka-03:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true

Start the service:

#bin/kafka-server-start.sh config/server.properties &

Testing the ZooKeeper + Kafka cluster

Create a topic:

[root@ZooKeeper-Kafka-01 kafka]# bin/kafka-topics.sh --create --zookeeper ZooKeeper-Kafka-01:2181,ZooKeeper-Kafka-02:2181,ZooKeeper-Kafka-03:2181 --replication-factor 3 --partitions 3 --topic test

Describe the topic:

[root@ZooKeeper-Kafka-01 kafka]# bin/kafka-topics.sh --describe --zookeeper ZooKeeper-Kafka-01:2181,ZooKeeper-Kafka-02:2181,ZooKeeper-Kafka-03:2181 --topic test

List topics:

[root@ZooKeeper-Kafka-01 kafka]# bin/kafka-topics.sh --list --zookeeper ZooKeeper-Kafka-01:2181,ZooKeeper-Kafka-02:2181,ZooKeeper-Kafka-03:2181
test
[root@ZooKeeper-Kafka-01 kafka]#

Create a producer:

# Produce test messages on the master node

[root@ZooKeeper-Kafka-01 kafka]# bin/kafka-console-producer.sh --broker-list ZooKeeper-Kafka-01:9092 --topic test
>hello world
>[2018-04-03 12:18:25,545] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
this is example ...
>[2018-04-03 12:19:16,342] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-2. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
welcome to china
>[2018-04-03 12:20:53,141] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-1. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)

Create a consumer:

# Consume on the ZooKeeper-Kafka-02 node

[root@ZooKeeper-Kafka-02 kafka]# bin/kafka-console-consumer.sh --zookeeper ZooKeeper-Kafka-01:2181,ZooKeeper-Kafka-02:2181,ZooKeeper-Kafka-03:2181 --topic test --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
this is example ...
hello world
[2018-04-03 12:20:53,145] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-1. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
welcome to china

# Consume on the ZooKeeper-Kafka-03 node

[root@ZooKeeper-Kafka-03 kafka]# bin/kafka-console-consumer.sh --zookeeper ZooKeeper-Kafka-01:2181,ZooKeeper-Kafka-02:2181,ZooKeeper-Kafka-03:2181 --topic test --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
welcome to china
hello world
this is example ...

Messages typed into the producer show up in the consumers, which confirms successful consumption.

Delete the topic:

[root@ZooKeeper-Kafka-01 kafka]# bin/kafka-topics.sh --delete --zookeeper ZooKeeper-Kafka-01:2181,ZooKeeper-Kafka-02:2181,ZooKeeper-Kafka-03:2181 --topic test

Starting and stopping the service:

# Start:
bin/kafka-server-start.sh config/server.properties &
# Stop:
bin/kafka-server-stop.sh

The ZooKeeper + Kafka cluster configuration is complete.

3.4 Logstash installation and configuration

logstash-to-kafka side installation and configuration (logstash reads logs from filebeat and writes them to kafka):

Hosts: 10.200.3.85, 10.200.3.86

[root@ZooKeeper-Kafka-01 src]# tar -zxvf logstash-5.6.2.tar.gz -C /usr/local/
[root@ZooKeeper-Kafka-01 src]# cd /usr/local/
[root@ZooKeeper-Kafka-01 local]# ln -s logstash-5.6.2 logstash-to-kafka

[root@ZooKeeper-Kafka-01 config]# cat logstash.conf
input {
  beats {
    host => "0.0.0.0"
    port => 5044
  }
}

filter {
  if [log_topic] !~ "^nginx_" {
    drop {}
  }
  ruby {
    code => "
      require 'date'
      event.set('log_filename',event.get('source').gsub(/\/.*\//,'').downcase)
      #tmptime = event.get('message').split(']')[0].delete('[')
      #timeobj = DateTime.parse(tmptime)
      #event.set('log_timestamp',tmptime)
      #event.set('log_date',timeobj.strftime('%Y-%m-%d'))
      #event.set('log_year',timeobj.year)
      #event.set('log_time_arr',[timeobj.year,timeobj.month,timeobj.day,timeobj.hour,timeobj.minute])
        "
    }
    #date {
    #    match => [ "log_timestamp" , "ISO8601" ]
    #}
     mutate {
        remove_field => [ "source" ]
        remove_field => [ "host" ]
        #remove_field =>["message"]

    }

}


output {

  stdout {
    codec => rubydebug

  }
  kafka {
    bootstrap_servers => "10.200.3.85:9092,10.200.3.86:9092,10.200.3.87:9092"
    topic_id => "%{log_topic}"
    codec => json {}
  }

# elasticsearch {
#    hosts => ["10.200.3.90:9200","10.200.3.91:9200","10.200.3.92:9200"]
#    index => "logstash-%{log_filename}-%{+YYYY.MM.dd}"
#    template_overwrite => true
#
#  }

}
The configuration above filters and forwards the nginx logs.
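The ruby filter in the config above derives the `log_filename` field by stripping everything up to the last slash of the `source` path and lowercasing the remainder. A Python sketch of the same transformation (the sample path is invented):

```python
import re

def log_filename(source_path: str) -> str:
    r"""Mimics the ruby filter line:
    event.get('source').gsub(/\/.*\//, '').downcase
    The greedy /.*/ removes everything up to the last slash."""
    return re.sub(r"/.*/", "", source_path).lower()

print(log_filename("/opt/ytd_logs/nginx/Portal_access.log"))  # -> portal_access.log
```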
[root@ZooKeeper-Kafka-02 config]# cat logstash.conf
input {
  beats {
    host => "0.0.0.0"
    port => 5044
  }
}


filter {
  if [log_topic] !~ "^tomcat_"{
    drop {}
  }
  ruby {
    code => "
      require 'date'
      event.set('log_filename',event.get('source').gsub(/\/.*\//,'').downcase)
      #tmptime = event.get('message').split(']')[0].delete('[')
      #timeobj = DateTime.parse(tmptime)
      #event.set('log_timestamp',tmptime)
      #event.set('log_date',timeobj.strftime('%Y-%m-%d'))
      #event.set('log_year',timeobj.year)
      #event.set('log_time_arr',[timeobj.year,timeobj.month,timeobj.day,timeobj.hour,timeobj.minute])
        "
    }

    #date {
    #    match => [ "log_timestamp" , "ISO8601" ]
    #}
   mutate {
        remove_field => [ "host" ]
        remove_field =>["source"]

    }

}


output {

  stdout {
    codec => rubydebug

  }
  kafka {
    bootstrap_servers => "10.200.3.85:9092,10.200.3.86:9092,10.200.3.87:9092"
    topic_id => "%{log_topic}"
    codec => json {}
 }

}

[root@ZooKeeper-Kafka-02 config]# 
The configuration above collects and forwards the tomcat logs.

Forwarding-layer logstash installation; this instance reads logs from kafka and writes them to ES (10.200.3.88 and 10.200.3.89):

[root@logstash-01 src]# tar -zxvf logstash-5.6.2.tar.gz -C /usr/local/
[root@logstash-01 src]# cd /usr/local/
[root@logstash-01 local]# ln -s logstash-5.6.2 logstash-to-es
[root@logstash-01 config]# cat logstash.conf 
input {
  kafka {
    bootstrap_servers => "ZooKeeper-Kafka-01:9092,ZooKeeper-Kafka-02:9092,ZooKeeper-Kafka-03:9092"
    #bootstrap_servers => "10.200.3.85:9092,10.200.3.86:9092,10.200.3.87:9092"
    group_id => "nginx_logs"
    topics  => ["nginx_logs"]
    consumer_threads => 5 
    decorate_events => true 
    codec => json {}
  }
}

filter {
  if [log_filename] =~ "_access.log"  {
    grok {
      patterns_dir => "./patterns"
      match => { "message" => "%{NGINXACCESS}" }

        }
  } else {
    drop {}
  }

  mutate {
    remove_field => [ "log_time_arr" ]
  }

}



output {

  stdout {
    codec => rubydebug

  }

  elasticsearch {
    hosts => ["10.200.3.90:9200","10.200.3.91:9200","10.200.3.92:9200"]
    index => "logstash-%{log_filename}-%{+YYYY.MM.dd}"
    template_overwrite => true
    flush_size=>2000

  }

}

[root@logstash-01 config]# 
The config above reads nginx logs from kafka and stores them in ES.
[root@logstash-02 patterns]# cat nginx_access 
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS \[%{TIMESTAMP_ISO8601:log_timestamp1}\] %{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{NOTSPACE:http_x_forwarded_for}

#####################################
# Matching log_format in nginx.conf:
log_format  main  '[$time_iso8601] $remote_addr - $remote_user "$request" '
            '$status $body_bytes_sent "$http_referer" '
            '"$http_user_agent" "$http_x_forwarded_for"';
The nginx log format used on the collection side is shown above.
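As a sanity check, the pairing of this log_format with the NGINXACCESS grok pattern can be approximated with a plain regular expression. The sample line and simplified regex below are ours; the capture-group names mirror the grok field names:

```python
import re

# Sample line in the log_format above (invented values).
line = ('[2018-04-03T12:00:00+08:00] 10.20.9.31 - - '
        '[03/Apr/2018:12:00:00 +0800] "GET /index.html HTTP/1.1" '
        '200 612 "-" "curl/7.29.0" -')

# Simplified stand-in for the NGINXACCESS grok pattern.
nginx_access = re.compile(
    r'\[(?P<log_timestamp1>[^\]]+)\] (?P<clientip>\S+) - (?P<remote_user>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] "(?P<verb>\S+) (?P<request>\S+) HTTP/(?P<httpversion>\S+)" '
    r'(?P<response>\d+) (?P<bytes>\S+) "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)" '
    r'(?P<http_x_forwarded_for>\S+)')

m = nginx_access.match(line)
print(m.group("verb"), m.group("request"), m.group("response"))  # -> GET /index.html 200
```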
[root@logstash-02 config]# cat logstash.conf
input {
  kafka {
    bootstrap_servers => "ZooKeeper-Kafka-01:9092,ZooKeeper-Kafka-02:9092,ZooKeeper-Kafka-03:9092"
    #bootstrap_servers => "10.200.3.85:9092,10.200.3.86:9092,10.200.3.87:9092"
    group_id => "tomcat_logs"
    topics  => ["tomcat_logs"]
    consumer_threads => 5
    decorate_events => true
    codec => json {}
  }

}

filter {
  
  grok {
    patterns_dir => "./patterns"
    match => { "message" => "%{CATALINALOG}" }
        
    } 
        
  mutate {
    remove_field => [ "log_time_arr" ]
  }

}

output {

  stdout {
    codec => rubydebug

  }
  

  elasticsearch {
    hosts => ["10.200.3.90:9200","10.200.3.91:9200","10.200.3.92:9200"]
    index => "logstash-%{project_name}-%{log_filename}-%{+YYYY.MM.dd}"
    template_overwrite => true
    flush_size=>2000

  }

}

[root@logstash-02 config]# 
The config above reads tomcat logs from kafka and stores them in ES.
[root@logstash-02 logstash-to-es]# cat patterns/java_access 
JAVACLASS (?:[a-zA-Z$_][a-zA-Z$_0-9]*\.)*[a-zA-Z$_][a-zA-Z$_0-9]*
#Space is an allowed character to match special cases like 'Native Method' or 'Unknown Source'
JAVAFILE (?:[A-Za-z0-9_. -]+)
#Allow special <init> method
JAVAMETHOD (?:(<init>)|[a-zA-Z$_][a-zA-Z$_0-9]*)
#Line number is optional in special cases 'Native method' or 'Unknown source'
JAVASTACKTRACEPART %{SPACE}at %{JAVACLASS:class}\.%{JAVAMETHOD:method}\(%{JAVAFILE:file}(?::%{NUMBER:line})?\)
# Java Logs
JAVATHREAD (?:[A-Z]{2}-Processor[\d]+)
JAVACLASS (?:[a-zA-Z0-9-]+\.)+[A-Za-z0-9$]+
JAVAFILE (?:[A-Za-z0-9_.-]+)
JAVASTACKTRACEPART at %{JAVACLASS:class}\.%{WORD:method}\(%{JAVAFILE:file}:%{NUMBER:line}\)
JAVALOGMESSAGE (.*)
# MMM dd, yyyy HH:mm:ss eg: Jan 9, 2014 7:13:13 AM
CATALINA_DATESTAMP %{MONTH} %{MONTHDAY}, 20%{YEAR} %{HOUR}:?%{MINUTE}(?::?%{SECOND}) (?:AM|PM)
# yyyy-MM-dd HH:mm:ss,SSS ZZZ eg: 2014-01-09 17:32:25,527 -0800
TOMCAT_DATESTAMP 20%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:?%{MINUTE}(?::?%{SECOND}) %{ISO8601_TIMEZONE}
CATALINALOG %{CATALINA_DATESTAMP:timestamp} %{JAVACLASS:class} %{JAVALOGMESSAGE:logmessage}
# 2014-01-09 20:03:28,269 -0800 | ERROR | com.example.service.ExampleService - something completely unexpected happened...
TOMCATLOG \[%{TOMCAT_DATESTAMP:timestamp}\] \| %{LOGLEVEL:level} \| %{JAVACLASS:class} - %{JAVALOGMESSAGE:logmessage}


# 2016-04-10 07:19:16-|INFO|-Root WebApplicationContext: initialization started
MYTIMESTAMP 20%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND}
MYLOG %{MYTIMESTAMP:mytimestamp}-\|%{LOGLEVEL:level}\|-%{JAVALOGMESSAGE:logmsg}

ACCESSIP (?:[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})
ACCESSTIMESTAMP %{MONTHDAY}\/%{MONTH}\/20%{YEAR}:%{HOUR}:%{MINUTE}:%{SECOND} %{ISO8601_TIMEZONE}
HTTPMETHOD (GET|POST|PUT|DELETE)
PRJNAME ([^\s]*)
HTTPVERSION (https?\/[0-9]{1}\.[0-9]{1})
STATUSCODE ([0-9]{3})
# 192.168.1.101 - - [10/Apr/2016:08:31:34 +0800] "GET /spring-mvc-showcase HTTP/1.1" 302 -
ACCESSLOG %{ACCESSIP:accIP}\s-\s\-\s\[%{ACCESSTIMESTAMP:accstamp}\]\s"%{HTTPMETHOD:method}\s\/%{PRJNAME:prjName}\s%{JAVALOGMESSAGE:statusCode}

JAVA_OUT_COMMON \[%{TIMESTAMP_ISO8601:log_timestamp1}\] \| %{NOVERTICALBAR:loglevel} \| %{NOVERTICALBAR:codelocation} \| %{NOVERTICALBAR:threadid} \| %{NOVERTICALBAR:optype} \| %{NOVERTICALBAR:userid} \| %{NOVERTICALBAR:phone} \| %{NOVERTICALBAR:fd1} \| %{NOVERTICALBAR:fd2} \| %{NOVERTICALBAR:fd3} \| %{NOVERTICALBAR:fd4} \| %{NOVERTICALBAR:fd5} \| %{NOVERTICALBAR:fd6} \| %{NOVERTICALBAR:fd7} \| %{NOVERTICALBAR:fd8} \| %{NOVERTICALBAR:fd9} \| %{NOVERTICALBAR:fd10} \| %{NOVERTICALBAR:fd11} \| %{NOVERTICALBAR:fd12} \| %{NOVERTICALBAR:fd13} \| %{NOVERTICALBAR:fd14} \| %{NOVERTICALBAR:fd15} \| %{NOVERTICALBAR:fd16} \| %{NOVERTICALBAR:fd17} \| %{NOVERTICALBAR:fd18} \| %{GREEDYDATA:msg}
[root@logstash-02 logstash-to-es]# 
Tomcat grok patterns used above.
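The TOMCATLOG pattern above can be exercised with a Python approximation (the sample line follows the `[timestamp] | LEVEL | class - message` shape the pattern expects; the regex is simplified from the grok definitions):

```python
import re

# Sample line shaped like the TOMCATLOG pattern above.
line = ("[2014-01-09 20:03:28,269 -0800] | ERROR | "
        "com.example.service.ExampleService - something completely unexpected happened...")

tomcat_log = re.compile(
    r'\[(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3} [+-]\d{4})\] '
    r'\| (?P<level>[A-Z]+) \| (?P<cls>\S+) - (?P<logmessage>.*)')

m = tomcat_log.match(line)
print(m.group("level"), m.group("cls"))  # -> ERROR com.example.service.ExampleService
```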

Start the logstash service:

# ./bin/logstash -f logstash.conf

Logstash installation and configuration is complete.

3.5 Client-side log collection

filebeat installation and Nginx log collection (10.20.9.31):

Official docs: https://www.elastic.co/guide/en/beats/filebeat/5.6/filebeat-installation.html

Download and install:

#curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.6.8-x86_64.rpm
#rpm -vi filebeat-5.6.8-x86_64.rpm

Configuration:

[root@v05-app-nginx01 ~]# vim /etc/filebeat/filebeat.yml 
###################### Filebeat Configuration #########################

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

- input_type: log
  document_type: nginx_access

  paths:
    - /opt/ytd_logs/nginx/*_access.log

  fields_under_root: true


  fields:
    log_source: 10.20.9.31
    log_topic: nginx_logs
  tags: ["nginx","webservers"]
  multiline:
    pattern: ^\[[0-9]{4}-[0-9]{2}-[0-9]{2}T
    match: after
    negate: true

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.200.3.85:5044"]

Test the configuration file:

/usr/bin/filebeat.sh -configtest -e

Start the service:

# /etc/init.d/filebeat start

filebeat installation and tomcat log collection (10.20.9.52):

1. Installation is the same as above; the config file is as follows.

[root@v05-app-test01 filebeat]# vim filebeat.yml
###################### Filebeat Configuration #########################

#=========================== Filebeat prospectors =============================
#
filebeat.prospectors:

- input_type: log
  document_type: java_access

  paths:
    - /opt/logs/portal/catalina.2018-*.out

  fields_under_root: true


  fields:
    log_source: 10.20.9.52
    log_topic: tomcat_logs
    project_name: app_01
  tags: ["tomcat","javalogs"]
  multiline:
    # split the log into individual events on the leading timestamp
    #pattern: ^\[[0-9]{4}-[0-9]{2}-[0-9]{2}T
    pattern: ^\[[0-9]{4}-[0-9]{2}-[0-9]{2}[ ][0-2][0-9]:[0-9]{2}:[0-9]{2}
    #pattern: ^\[
    match: after
    negate: true


#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.200.3.86:5044"]
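The multiline settings above use `negate: true` with `match: after`: any line that does not start with a timestamp is appended to the preceding event, which keeps stack traces attached to the log line that produced them. A minimal Python sketch of that grouping rule (sample lines invented):

```python
import re

# Same event-start pattern as the filebeat config above:
# an event begins with "[YYYY-MM-DD HH:MM:SS".
start = re.compile(r'^\[[0-9]{4}-[0-9]{2}-[0-9]{2}[ ][0-2][0-9]:[0-9]{2}:[0-9]{2}')

def group_multiline(lines):
    """negate: true + match: after -- a line that does NOT match the
    pattern is appended to the preceding event."""
    events = []
    for line in lines:
        if start.match(line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

logs = [
    "[2018-04-03 12:00:00] ERROR boom",
    "java.lang.NullPointerException",           # stack trace -> same event
    "    at com.example.Foo.bar(Foo.java:42)",  # stack trace -> same event
    "[2018-04-03 12:00:01] INFO recovered",
]
print(len(group_multiline(logs)))  # -> 2
```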

The ELK + Filebeat + Kafka + ZooKeeper log collection platform is now fully deployed!

 

Lucene query syntax used by Kibana:

Reference: http://www.javashuo.com/article/p-ogjmtkhn-cs.html
