ELK + Kafka + Filebeat + Kibana

Architecture

app-server(filebeat) -> kafka -> logstash -> elasticsearch -> kibana

Server Roles

Base system environment
# cat /etc/redhat-release 
CentOS release 6.5 (Final)

# uname -r
2.6.32-431.el6.x86_64


192.168.162.51    logstash01
192.168.162.53    logstash02
192.168.162.55    logstash03
192.168.162.56    logstash04
192.168.162.57    logstash05

192.168.162.58    elasticsearch01
192.168.162.61    elasticsearch02
192.168.162.62    elasticsearch03
192.168.162.63    elasticsearch04
192.168.162.64    elasticsearch05

192.168.162.66    kibana
192.168.128.144   kafka01
192.168.128.145   kafka02
192.168.128.146   kafka03

192.168.138.75    filebeat,weblogic

Download the Required Packages

Elastic download site (the 6.0.0-beta2 packages)

Kafka download site

Java download site

elasticsearch-6.0.0-beta2.rpm
filebeat-6.0.0-beta2-x86_64.rpm
grafana-4.4.3-1.x86_64.rpm
heartbeat-6.0.0-beta2-x86_64.rpm
influxdb-1.3.5.x86_64.rpm
jdk-8u144-linux-x64.rpm
kafka_2.12-0.11.0.0.tgz
kibana-6.0.0-beta2-x86_64.rpm
logstash-6.0.0-beta2.rpm

Deploy and Install Filebeat

Install Filebeat

Install Filebeat on the application server

# yum localinstall filebeat-6.0.0-beta2-x86_64.rpm -y

After installation, the directory the RPM installs Filebeat into:

# ls /usr/share/filebeat/
bin  kibana  module  NOTICE  README.md  scripts

The configuration file is

/etc/filebeat/filebeat.yml

Configure Filebeat

#=========================== Filebeat prospectors =============================
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /data1/logs/apphfpay_8086_domain/apphfpay.yiguanjinrong.yg.*

  multiline.pattern: '^(19|20)\d\d-(0[1-9]|1[012])-(0[1-9]|[12][0-9]|3[01]) [012][0-9]:[0-6][0-9]:[0-6][0-9]'
  multiline.negate: true
  multiline.match: after

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

#----------------------------- Kafka output ---------------------------------
output.kafka:
  hosts: ['192.168.128.144:9092','192.168.128.145:9092','192.168.128.146:9092']
  topic: 'credit'
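The multiline settings above treat any line that begins with a timestamp as the start of a new event, so continuation lines such as Java stack traces get folded into the preceding entry. A quick sanity check of the pattern against two hypothetical sample lines (the log content is made up; `\d` is expanded to `[0-9]` for grep's ERE syntax):

```shell
# The multiline.pattern from filebeat.yml, with \d expanded for grep -E
pattern='^(19|20)[0-9][0-9]-(0[1-9]|1[012])-(0[1-9]|[12][0-9]|3[01]) [012][0-9]:[0-6][0-9]:[0-6][0-9]'

# A normal log line and a stack-trace continuation line; only the first
# starts with a timestamp, so grep counts exactly one match.
printf '%s\n' \
  '2017-09-01 12:30:45 INFO request handled' \
  '    at com.example.Handler.run(Handler.java:42)' \
  | grep -cE "$pattern"
# prints 1
```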

Start Filebeat and check its log for error messages

# /etc/init.d/filebeat start

Log file

/var/log/filebeat/filebeat

Install and Deploy Kafka and ZooKeeper

Set the hostname on each of the three Kafka servers

# host=kafka01  && hostname $host && echo "192.168.128.144" $host >>/etc/hosts
# host=kafka02  && hostname $host && echo "192.168.128.145" $host >>/etc/hosts
# host=kafka03  && hostname $host && echo "192.168.128.146" $host >>/etc/hosts

Install Java

# yum localinstall jdk-8u144-linux-x64.rpm -y

Extract the Kafka tarball and move it to /usr/local/kafka

# tar fzx kafka_2.12-0.11.0.0.tgz
# mv kafka_2.12-0.11.0.0 /usr/local/kafka

Configure Kafka and ZooKeeper

# pwd
/usr/local/kafka/config
# ls
connect-console-sink.properties    connect-log4j.properties       server.properties
connect-console-source.properties  connect-standalone.properties  tools-log4j.properties
connect-distributed.properties     consumer.properties            zookeeper.properties
connect-file-sink.properties       log4j.properties
connect-file-source.properties     producer.properties

Edit the configuration files

# grep -Ev "^$|^#" server.properties 

broker.id=1
delete.topic.enable=true
listeners=PLAINTEXT://192.168.128.144:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data1/kafka-logs
num.partitions=12
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=zk01.yiguanjinrong.yg:2181,zk02.yiguanjinrong.yg:2181,zk03.yiguanjinrong.yg:2181
zookeeper.connection.timeout.ms=6000
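This file is for kafka01; on the other two brokers only broker.id (unique per broker) and listeners (the broker's own address) need to change. A minimal sketch of patching those two keys with sed, demonstrated on a temporary stand-in file rather than the real /usr/local/kafka/config/server.properties:

```shell
# Stand-in fragment for server.properties (temporary file for illustration)
cat > /tmp/server.properties <<'EOF'
broker.id=1
listeners=PLAINTEXT://192.168.128.144:9092
EOF

# kafka02's values; adjust per node
id=2
ip=192.168.128.145

sed -i \
  -e "s/^broker\.id=.*/broker.id=${id}/" \
  -e "s|^listeners=.*|listeners=PLAINTEXT://${ip}:9092|" \
  /tmp/server.properties

grep -E '^(broker\.id|listeners)=' /tmp/server.properties
# broker.id=2
# listeners=PLAINTEXT://192.168.128.145:9092
```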


# grep -Ev "^$|^#" consumer.properties

zookeeper.connect=zk01.yiguanjinrong.yg:2181,zk02.yiguanjinrong.yg:2181,zk03.yiguanjinrong.yg:2181
zookeeper.connection.timeout.ms=6000
group.id=test-consumer-group

# grep -Ev "^$|^#" producer.properties 
bootstrap.servers=192.168.128.144:9092,192.168.128.145:9092,192.168.128.146:9092
compression.type=none

Start ZooKeeper and Kafka

First, check the configurations for problems.

Start ZooKeeper and check for errors

# /usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties

Start Kafka and check for errors

# /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties


If there are no errors, start ZooKeeper first and then Kafka in the background (you could also write an init script for this):

# nohup /usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties &

# nohup /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties &

Check that both services started; the default ports are 2181 (ZooKeeper) and 9092 (Kafka).

Create a topic

# bin/kafka-topics.sh --create --zookeeper zk01.yiguanjinrong.yg:2181 --replication-factor 1 --partitions 1 --topic test

Created topic "test".

List the created topics

# bin/kafka-topics.sh --list --zookeeper zk01.yiguanjinrong.yg:2181

test

Simulate a client sending messages

# bin/kafka-console-producer.sh --broker-list 192.168.128.144:9092 --topic test
After it starts, type some messages and press Enter to send them.

Simulate a client receiving messages (if the messages arrive, the Kafka deployment is working)

# bin/kafka-console-consumer.sh --bootstrap-server 192.168.128.144:9092 --topic test --from-beginning

Delete the topic

# bin/kafka-topics.sh --delete --zookeeper zk01.yiguanjinrong.yg:2181 --topic test

Install and Deploy Logstash

Install Logstash

# yum localinstall jdk-8u144-linux-x64.rpm -y
# yum localinstall logstash-6.0.0-beta2.rpm -y

The Logstash installation directory and configuration directory (no configuration file by default) are:

# /usr/share/logstash/   (the installer does not add the bin directory to PATH)

# /etc/logstash/conf.d/

Logstash configuration

# cat /etc/logstash/conf.d/logstash.conf 

input {
  kafka {
    bootstrap_servers => "192.168.128.144:9092,192.168.128.145:9092,192.168.128.146:9092"
    topics => ["credit"]
    group_id => "test-consumer-group"
    codec => "plain"
    consumer_threads => 1
    decorate_events => true

  }
}

output {
  elasticsearch {
    hosts => ["192.168.162.58:9200","192.168.162.61:9200","192.168.162.62:9200","192.168.162.63:9200","192.168.162.64:9200"]
    index => "logs-%{+YYYY.MM.dd}"
    workers => 1
  }
}
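The index setting uses Logstash's date sprintf notation, so events are written to one index per day. For reference, the name generated for the current date is equivalent to:

```shell
# Shell equivalent of the index pattern "logs-%{+YYYY.MM.dd}"
date +"logs-%Y.%m.%d"
```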

Check that the configuration file is valid

# /usr/share/logstash/bin/logstash -t --path.settings /etc/logstash/  --verbose
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK

Logstash does not ship with a startup script by default, but it provides a way to create one.

View the generator's usage help

# bin/system-install --help

Usage: system-install [OPTIONSFILE] [STARTUPTYPE] [VERSION]

NOTE: These arguments are ordered, and co-dependent

OPTIONSFILE: Full path to a startup.options file
OPTIONSFILE is required if STARTUPTYPE is specified, but otherwise looks first
in /usr/share/logstash/config/startup.options and then /etc/logstash/startup.options
Last match wins

STARTUPTYPE: e.g. sysv, upstart, systemd, etc.
OPTIONSFILE is required to specify a STARTUPTYPE.

VERSION: The specified version of STARTUPTYPE to use.  The default is usually
preferred here, so it can safely be omitted.
Both OPTIONSFILE & STARTUPTYPE are required to specify a VERSION.

# /usr/share/logstash/bin/system-install /etc/logstash/startup.options sysv

This creates /etc/init.d/logstash. Be sure to adjust the log directory; placing logs under /var/log/logstash is recommended.

# mkdir -p /var/log/logstash && chown logstash.logstash -R /var/log/logstash

The section that needs to be modified:

start() {

  # Ensure the log directory is setup correctly.
  if [ ! -d "/var/log/logstash" ]; then 
    mkdir "/var/log/logstash"
    chown "$user":"$group" -R "/var/log/logstash"
    chmod 755 "/var/log/logstash"
  fi


  # Setup any environmental stuff beforehand
  ulimit -n ${limit_open_files}

  # Run the program!
  nice -n "$nice" \
  chroot --userspec "$user":"$group" "$chroot" sh -c "
    ulimit -n ${limit_open_files}
    cd \"$chdir\"
    exec \"$program\" $args
  " >> /var/log/logstash/logstash-stdout.log 2>> /var/log/logstash/logstash-stderr.log &

  # Generate the pidfile from here. If we instead made the forked process
  # generate it there will be a race condition between the pidfile writing
  # and a process possibly asking for status.
  echo $! > $pidfile

  emit "$name started"
  return 0
}

Start Logstash and check the log for errors

# /etc/init.d/logstash start

Install and Deploy the Elasticsearch Cluster

Install Elasticsearch

# yum localinstall jdk-8u144-linux-x64.rpm -y
# yum localinstall elasticsearch-6.0.0-beta2.rpm -y

Configure Elasticsearch

Installation path

# /usr/share/elasticsearch/

Configuration file

# /etc/elasticsearch/elasticsearch.yml

Elasticsearch configuration

# cat elasticsearch.yml | grep -Ev "^$|^#"

cluster.name: elasticsearch
node.name: es01  # change the node name on each node
path.data: /data1/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.system_call_filter: false
network.host: 192.168.162.58 # change to each node's own address
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.162.58", "192.168.162.61", "192.168.162.62", "192.168.162.63", "192.168.162.64"]
discovery.zen.minimum_master_nodes: 3
node.master: true
node.data: true
transport.tcp.compress: true
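discovery.zen.minimum_master_nodes is 3 here because, with five master-eligible nodes, a majority quorum of (n / 2) + 1 is required to avoid split-brain:

```shell
# Majority quorum for 5 master-eligible nodes
nodes=5
echo $(( nodes / 2 + 1 ))
# prints 3
```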

Start Elasticsearch

# mkdir -p /var/log/elasticsearch && chown elasticsearch.elasticsearch -R /var/log/elasticsearch

# /etc/init.d/elasticsearch start

Install and Deploy Kibana

Install Kibana

# yum localinstall kibana-6.0.0-beta2-x86_64.rpm -y

Kibana configuration

# cat /etc/kibana/kibana.yml | grep -Ev "^$|^#"

server.port: 5601
server.host: "192.168.162.66"
elasticsearch.url: "http://192.168.162.58:9200"  # any one node of the Elasticsearch cluster will do
kibana.index: ".kibana"
pid.file: /var/run/kibana/kibana.pid

Start Kibana

First create the PID directory

# mkdir -p /var/run/kibana
# chown kibana.kibana -R /var/run/kibana

Then modify this part of the Kibana startup script

start() {

  # Ensure the log directory is setup correctly.
  [ ! -d "/var/log/kibana/" ] && mkdir "/var/log/kibana/"
  chown "$user":"$group"  "/var/log/kibana/"
  chmod 755 "/var/log/kibana/"


  # Setup any environmental stuff beforehand


  # Run the program!

  chroot --userspec "$user":"$group" "$chroot" sh -c "

    cd \"$chdir\"
    exec \"$program\" $args
  " >> /var/log/kibana/kibana.stdout 2>> /var/log/kibana/kibana.stderr &

  # Generate the pidfile from here. If we instead made the forked process
  # generate it there will be a race condition between the pidfile writing
  # and a process possibly asking for status.
  echo $! > $pidfile

  emit "$name started"
  return 0
}

Start the service

# /etc/init.d/kibana start

Kibana is now reachable at http://192.168.162.66:5601
