When I first took over this log service, the log volume was small and Kafka was not part of the pipeline. As the business grew, the log volume kept increasing and more and more product lines were onboarded onto the log service. During traffic peaks, write performance to ES would degrade, CPU would max out, and the cluster was at constant risk of going down. Introducing a message queue for peak shaving therefore became urgent. This article describes how to add Kafka on top of an existing EFK setup while remaining compatible with the previous Filebeat collection path.
Main reference: "ZooKeeper Installation Guide".
Since this is a production cluster, at least 3 nodes are needed to avoid a single point of failure (ZooKeeper relies on majority election).
Go to the directory of the version you want to download and pick the .tar.gz file.
Extract it with tar into the installation directory; version 3.4.5 is used as the example here.
Here it is extracted to /home/work/common; adjust this to your own installation directory (note that if you change it, the paths in the commands and configuration files below must be changed accordingly).
tar -zxf zookeeper-3.4.5.tar.gz -C /home/work/common
Under the ZooKeeper home directory, create data and logs directories for data and log storage:
cd /home/work/common/zookeeper-3.4.5
mkdir data
mkdir logs
Create a new zoo.cfg file in the conf directory with the following contents:
tickTime=2000
dataDir=/home/work/common/zookeeper1/data
dataLogDir=/home/work/common/zookeeper1/logs
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.220.128:2888:3888
server.2=192.168.222.128:2888:3888
server.3=192.168.223.128:2888:3888
On zookeeper1, set data/myid as follows:
echo '1' > data/myid
On zookeeper2, set data/myid as follows:
echo '2' > data/myid
On zookeeper3, set data/myid as follows:
echo '3' > data/myid
In the bin directory, run the following to start, stop, restart, and check the status of the current node (including its role in the cluster), respectively:
./zkServer.sh start
./zkServer.sh stop
./zkServer.sh restart
./zkServer.sh status
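If the cluster formed correctly, ./zkServer.sh status should report one leader and two followers across the three nodes. The output looks roughly like the following (the config path will vary with your installation):

JMX enabled by default
Using config: /home/work/common/zookeeper-3.4.5/bin/../conf/zoo.cfg
Mode: follower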
Once the ZooKeeper cluster is up, deploy Kafka according to your actual needs. Two brokers are used as the example here.
Download and extract the package:
curl -L -O http://mirrors.cnnic.cn/apache/kafka/0.9.0.0/kafka_2.10-0.9.0.0.tgz
tar zxvf kafka_2.10-0.9.0.0.tgz
Go to the Kafka installation root directory and edit config/server.properties:
# broker.id must be unique across brokers
broker.id=1
delete.topic.enable=true
inter.broker.protocol.version=0.10.0.1
log.message.format.version=0.10.0.1
listeners=PLAINTEXT://:9092,SSL://:9093
auto.create.topics.enable=false
ssl.key.password=test
ssl.keystore.location=/home/work/certificate/server-keystore.jks
ssl.keystore.password=test
ssl.truststore.location=/home/work/certificate/server-truststore.jks
ssl.truststore.password=test
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/home/work/data/kafka/log
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=72
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.220.128:2181,192.168.222.128:2181,192.168.223.128:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
Go to the Kafka home directory and start the broker:
nohup sh bin/kafka-server-start.sh config/server.properties > /dev/null 2>&1 &
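Before moving on, it is worth confirming that both brokers registered themselves in ZooKeeper. One way to do this, using the ZooKeeper CLI from the installation above, is to open a session and list the registered broker ids; with two brokers you should see both ids, e.g. [1, 2]:

./zkCli.sh -server 192.168.220.128:2181
ls /brokers/ids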
First, create a topic named topic_1:
sh bin/kafka-topics.sh --create --topic topic_1 --partitions 2 --replication-factor 2 --zookeeper 192.168.220.128:2181
You can check whether it was created successfully:
sh bin/kafka-topics.sh --list --zookeeper 192.168.220.128:2181
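To see how the partitions and replicas were assigned across the two brokers, the same tool's describe option can be used:

sh bin/kafka-topics.sh --describe --topic topic_1 --zookeeper 192.168.220.128:2181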
Open two terminals, one as the producer and one as the consumer.
Produce messages:
bin/kafka-console-producer.sh --topic topic_1 --broker-list 192.168.220.128:9092,192.168.223.128:9092
Consume messages:
sh bin/kafka-console-consumer.sh --bootstrap-server 192.168.220.128:9092,192.168.223.128:9092 --topic topic_1
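The console test above goes through the PLAINTEXT listener on 9092. Since Filebeat and Logstash will later connect over the SSL listener on 9093, it is also worth checking that the TLS handshake succeeds. A quick way is openssl s_client (the CA path is just an example; use the root certificate that signed the broker certificate):

openssl s_client -connect 192.168.220.128:9093 -CAfile /home/work/certificate/root-ca.pem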
Once the above works end to end, the first step of the long march is done.
The existing EFK setup was hardened with certificates, so the certificates for Kafka need to be prepared first. Make sure the certificates generated for Kafka and those generated for EFK share the same root certificate. I will cover certificate generation in a dedicated article. The main points are as follows:
Both the Kafka input side (Filebeat) and the output side (Logstash) need Kafka client certificates, while the Kafka brokers need server certificates.
Note that Filebeat is configured with PEM certificates, while Kafka and the Logstash kafka input plugin use JKS keystores. The certificate generation tool should therefore ideally be able to produce both formats.
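If your tool only produces PEM files, one common approach (a sketch only; the file names and the "test" passwords below are placeholders) is to bundle the PEM key and certificate into a PKCS#12 file with openssl and then import it into a JKS keystore with keytool; the root certificate goes into a truststore the same way:

openssl pkcs12 -export -in kafka.crt.pem -inkey kafka.key.pem -certfile root-ca.pem -name kafka -out kafka.p12 -password pass:test
keytool -importkeystore -srckeystore kafka.p12 -srcstoretype PKCS12 -srcstorepass test -destkeystore kafka-keystore.jks -deststorepass test
keytool -import -alias root-ca -file root-ca.pem -keystore truststore.jks -storepass test -noprompt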
Add a log_topic field under fields to specify the topic to write to:
fields:
  module: sonofelice
  type: debug
  log_topic: topic_1
  language: java
output.kafka:
  hosts: ["192.168.220.128:9093","192.168.223.128:9093"]
  topic: '%{[fields.log_topic]}'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
  ssl.certificate_authorities: ["/home/work/filebeat/keys/root-ca.pem"]
  ssl.certificate: "/home/work/filebeat/keys/kafka.crt.pem"
  ssl.key: "/home/work/filebeat/keys/kafka.key.pem"
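Depending on your Filebeat version, the configuration and the connection to the Kafka output can be checked before restarting, for example with the test subcommands available in newer releases (the path to filebeat.yml is whatever your deployment uses):

filebeat test config -c filebeat.yml
filebeat test output -c filebeat.yml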
input { kafka { bootstrap_servers => "10.100.27.199:9093,10.64.56.75:9093" group_id => "consumer-group-01" topics => ["topic_1"] consumer_threads => 5 decorate_events => false auto_offset_reset => "earliest" security_protocol => "SSL" ssl_keystore_password => "test" ssl_keystore_location => "/home/work/certificate/kafka-keystore.jks" ssl_keystore_password => "test" ssl_truststore_password => "test" ssl_truststore_location => "/home/work/cvca/certificate/truststore.jks" codec => json { charset => "UTF-8" } } }
To stay compatible with the existing Filebeat-based log collection, we keep the beats input alongside the kafka input. The final configuration is:
input { kafka { bootstrap_servers => "192.168.220.128:9093,192.168.223.128:9093" group_id => "consumer-group-01" topics => ["topic_1"] consumer_threads => 5 decorate_events => false auto_offset_reset => "earliest" security_protocol => "SSL" ssl_keystore_password => "test" ssl_keystore_location => "/home/work/certificate/kafka-keystore.jks" ssl_keystore_password => "test" ssl_truststore_password => "test" ssl_truststore_location => "/home/work/cvca/certificate/truststore.jks" codec => json { charset => "UTF-8" } } beats { port => 5044 client_inactivity_timeout => 600 ssl => true ssl_certificate_authorities => ["/home/work/certificate/chain-ca.pem"] ssl_certificate => "/home/work/certificate/server.crt.pem" ssl_key => "/home/work/certificate/server.key.pem" ssl_verify_mode => "force_peer" } }
One thing to watch out for: the codec of the kafka input does not default to json, so fields that were previously parsed into ES fine via beats would fail to parse. Be sure to add the codec setting.
At this point there should be no major pitfalls left in the upgrade; it remains compatible with the previous setup, and each producing side can switch over at its own pace.