1. Start Kafka, using the default configuration
① Start ZooKeeper: bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
② Start Kafka: bin/kafka-server-start.sh -daemon config/server.properties
③ Create the topic: bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic spider
④ Start a console consumer: bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic spider --from-beginning
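Optionally, before involving Flume, you can sanity-check the broker and topic by listing topics and sending a test message with the console producer (the ZooKeeper-based flags match the older Kafka tooling used above):

bin/kafka-topics.sh --list --zookeeper localhost:2181
bin/kafka-console-producer.sh --broker-list 192.168.57.133:9092 --topic spider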
2. Configure and start Flume
For the detailed Flume configuration, see my earlier post covering Flume configuration in depth.
Change the sink of Flume (1.6.0; the custom sink will be covered in a later post) to the following:
# Kafka sink: takes events from channel ch1 and publishes them to the spider topic
agent1.sinks.log-sink1.channel = ch1
agent1.sinks.log-sink1.type = org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.log-sink1.topic = spider
# Kafka broker address (host:port)
agent1.sinks.log-sink1.brokerList = 192.168.57.133:9092
# Wait for the leader's acknowledgement before treating a send as successful
agent1.sinks.log-sink1.requiredAcks = 1
# Events per send; 1 means each event is forwarded immediately
agent1.sinks.log-sink1.batchSize = 1
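For reference, a minimal sketch of the rest of the agent definition is shown below. The Avro source (matching the log4j appender setup) with port 41414, and the memory channel settings, are assumptions rather than the exact values from the earlier post:

agent1.sources = avro-source1
agent1.channels = ch1
agent1.sinks = log-sink1

# Avro source receiving events from the log4j Flume appender (port is an assumption)
agent1.sources.avro-source1.type = avro
agent1.sources.avro-source1.bind = 0.0.0.0
agent1.sources.avro-source1.port = 41414
agent1.sources.avro-source1.channels = ch1

# In-memory channel buffering events between the source and the Kafka sink
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 1000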
Start Flume.
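Assuming the agent definition is saved as conf/flume-kafka.conf (the file name here is an assumption), the agent can be started with the standard flume-ng launcher:

bin/flume-ng agent --conf conf --conf-file conf/flume-kafka.conf --name agent1 -Dflume.root.logger=INFO,console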
3. Test
The test class here is still the one from the previous log4j-Flume integration example. Run the test class; if you see the output below, the integration succeeded.
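For readers without the earlier post at hand, here is a minimal sketch of such a test class. It assumes the flume-ng-log4jappender dependency is on the classpath and that the Flume Avro source listens on 192.168.57.133:41414; the hostname, port, and class name are assumptions matching the setup above.

log4j.properties:

log4j.rootLogger = INFO, flume
log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname = 192.168.57.133
log4j.appender.flume.Port = 41414

Test class:

import org.apache.log4j.Logger;

public class FlumeKafkaTest {
    private static final Logger logger = Logger.getLogger(FlumeKafkaTest.class);

    public static void main(String[] args) throws InterruptedException {
        // Each log line goes to the Flume Avro source via the log4j appender,
        // then through channel ch1 and the Kafka sink to the spider topic
        for (int i = 0; i < 10; i++) {
            logger.info("flume-kafka test message " + i);
            Thread.sleep(500);
        }
    }
}

If the console consumer started in step ④ prints these messages, the whole log4j → Flume → Kafka path is working.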