Because Kafka depends heavily on ZooKeeper, you need to set up a ZooKeeper cluster first. ZooKeeper is written in Java and runs on the JVM, so a Java environment is the first prerequisite.
(ps: this tutorial assumes your CentOS box has network access; configuring IPs and the like is not covered)
(ps2: if you don't have wget, install it first: yum install wget)
(ps3: people need order. Put one thing here, another there, and before long you can't remember where you installed anything. Everything in this tutorial is downloaded under /usr/local)
(ps4: Kafka may ship with a built-in ZooKeeper, so you could perhaps skip the ZooKeeper section, but it is configured here anyway. I haven't tried it.)
This article was first published on the WeChat official account: Java架構師聯盟
Because Oracle doesn't allow the JDK package on its official site to be downloaded directly via wget, running wget on the address below only gets you a 5 KB web page, not the JDK package you need. (A monopoly can afford to be capricious.)
(Run java -version to check whether a JDK is already installed; my system didn't have one.)
Here is the official download address for JDK 8:
https://www.oracle.com/technetwork/java/javase/downloads/java-archive-javase8u211-later-5573849.html
Upload the package to the designated location on the server, /usr/local, via Xftp.
Extract the archive:
tar -zxvf jdk-8u221-linux-x64.tar.gz
Rename the extracted directory:
mv jdk1.8.0_221 jdk1.8
vim /etc/profile

Append the Java environment settings:

#java environment
export JAVA_HOME=/usr/local/jdk1.8
export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar
export PATH=$PATH:${JAVA_HOME}/bin
After the edit, the end of the file should contain the lines above.
Run the following command to make the environment take effect:
source /etc/profile
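With the profile sourced, a quick sanity check that the JDK is on the PATH (the version string printed should match the 8u221 build installed above):

java -version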
Create a zookeeper directory and download into it:
mkdir /usr/local/zookeeper
If the connection is refused at this step, just retry a few times; mine only succeeded on the second attempt.
wget http://archive.apache.org/dist/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
Once the download finishes, extract it:
tar -zxvf zookeeper-3.4.6.tar.gz
Rename it to zookeeper1:

mv zookeeper-3.4.6 zookeeper1
Under the zookeeper1 directory, create data and logs directories (the config below points at both), and inside data create a myid file whose content is 1.
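A minimal sketch of those two steps, assuming the /usr/local/zookeeper layout used throughout:

mkdir -p /usr/local/zookeeper/zookeeper1/data /usr/local/zookeeper/zookeeper1/logs
echo 1 > /usr/local/zookeeper/zookeeper1/data/myid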
cd /usr/local/zookeeper/zookeeper1/conf/
cp zoo_sample.cfg zoo.cfg
After the two steps above you have a zoo.cfg file; now change its contents to:
dataDir=/usr/local/zookeeper/zookeeper1/data
dataLogDir=/usr/local/zookeeper/zookeeper1/logs
server.1=192.168.233.11:2888:3888
server.2=192.168.233.11:2889:3889
server.3=192.168.233.11:2890:3890
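Since all three instances share one host, each instance's zoo.cfg also needs its own clientPort; the login script later in this tutorial assumes 2181, 2182 and 2183. The default that zoo_sample.cfg ships with is already the right one for zookeeper1:

clientPort=2181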
Next, set up the second instance. First, copy and rename:
cd /usr/local/zookeeper/
cp -r zookeeper1 zookeeper2
Then adjust the instance-specific settings:
vim zookeeper2/conf/zoo.cfg
Change the 1 to a 2 in three places: the dataDir path, the dataLogDir path, and the clientPort (2181 becomes 2182).
vim zookeeper2/data/myid
Also change the value in myid to 2.
Same again for the third instance: copy and rename:
cp -r zookeeper1 zookeeper3
vim zookeeper3/conf/zoo.cfg
Change the same three places to 3 (the clientPort becomes 2183).
vim zookeeper3/data/myid
Set the myid value to 3.
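If you'd rather script those edits than make them by hand in vim, here's a sketch (assuming, as described above, that the three places are the two directory paths and the clientPort):

cd /usr/local/zookeeper
sed -i 's/zookeeper1/zookeeper2/g; s/clientPort=2181/clientPort=2182/' zookeeper2/conf/zoo.cfg
echo 2 > zookeeper2/data/myid
sed -i 's/zookeeper1/zookeeper3/g; s/clientPort=2181/clientPort=2183/' zookeeper3/conf/zoo.cfg
echo 3 > zookeeper3/data/myid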
cd /usr/local/zookeeper/zookeeper1/bin/
Starting all three instances takes a fair number of commands, so here's a simple startup script:
vim start
The contents of start are:
cd /usr/local/zookeeper/zookeeper1/bin/
./zkServer.sh start ../conf/zoo.cfg
cd /usr/local/zookeeper/zookeeper2/bin/
./zkServer.sh start ../conf/zoo.cfg
cd /usr/local/zookeeper/zookeeper3/bin/
./zkServer.sh start ../conf/zoo.cfg
And here's a connection script:
vim login
The contents of login:
./zkCli.sh -server 192.168.233.11:2181,192.168.233.11:2182,192.168.233.11:2183
With both scripts written, start everything up:
sh start
sh login
The cluster starts successfully.
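To double-check that the ensemble really formed, you can ask each instance for its status (one should report Mode: leader, the other two Mode: follower); a quick sketch:

cd /usr/local/zookeeper/zookeeper1/bin && ./zkServer.sh status ../conf/zoo.cfg
cd /usr/local/zookeeper/zookeeper2/bin && ./zkServer.sh status ../conf/zoo.cfg
cd /usr/local/zookeeper/zookeeper3/bin && ./zkServer.sh status ../conf/zoo.cfg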
That wraps up ZooKeeper. Since zkCli occupies the current input window, you can right-click the tab in Xshell, open a new SSH channel, and continue with Kafka in the new window!
First create the kafka directory:
mkdir /usr/local/kafka
Then download into that directory:
cd /usr/local/kafka/
wget https://archive.apache.org/dist/kafka/1.1.0/kafka_2.11-1.1.0.tgz
Once the download succeeds, extract it:
tar -zxvf kafka_2.11-1.1.0.tgz
First go into the config directory:
cd /usr/local/kafka/kafka_2.11-1.1.0/config
Edit server.properties; the changes are:
broker.id=0
log.dirs=/tmp/kafka-logs
listeners=PLAINTEXT://192.168.233.11:9092
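One setting the snippet doesn't show: the file's default zookeeper.connect=localhost:2181 happens to line up with zookeeper1 above, so the tutorial works as-is. If your ensemble lives elsewhere, point the brokers at all three instances, e.g. for this setup:

zookeeper.connect=192.168.233.11:2181,192.168.233.11:2182,192.168.233.11:2183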
Make two copies of server.properties:
cp server.properties server2.properties
cp server.properties server3.properties
Edit server2.properties:
vim server2.properties
The key changes:
broker.id=1
log.dirs=/tmp/kafka-logs1
listeners=PLAINTEXT://192.168.233.11:9093
Edit server3.properties in the same way, changing:
broker.id=2
log.dirs=/tmp/kafka-logs2
listeners=PLAINTEXT://192.168.233.11:9094
Here, too, write a startup script in the bin directory:
cd ../bin/
vim start
The script's contents:
./kafka-server-start.sh ../config/server.properties &
./kafka-server-start.sh ../config/server2.properties &
./kafka-server-start.sh ../config/server3.properties &
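Backgrounding with & keeps each broker's log output attached to your shell. kafka-server-start.sh also accepts a -daemon flag if you'd rather detach the brokers completely, e.g.:

./kafka-server-start.sh -daemon ../config/server.properties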
The jps command confirms that three Kafka processes have started.
Now create a topic with one partition and three replicas:

cd /usr/local/kafka/kafka_2.11-1.1.0
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
Kafka prints a few lines of log output confirming the creation.
In the zookeeper session started earlier (via the login script), you can see the new topic with:
ls /brokers/topics
Check the topic's state:
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
You can see there are three nodes: 1, 2, 0.
The Leader is 1.
The Partition is 0, because there is only one partition.
Replicas, the replica list, is 1,2,0.
ISR (in-sync replicas), the replicas currently alive, is also 1,2,0.
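For reference, the describe output those notes walk through looks roughly like this (reconstructed from the values above; your broker ids may differ):

Topic:my-replicated-topic  PartitionCount:1  ReplicationFactor:3  Configs:
    Topic: my-replicated-topic  Partition: 0  Leader: 1  Replicas: 1,2,0  Isr: 1,2,0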
Start a console producer:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
Since the console producer doesn't let you press delete or use the arrow keys to edit, my messages look a bit messy. em…
In another window, start a console consumer:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
As you can see, the consumer starts consuming automatically as soon as it's up.
I produced one more message on the producer side.
The consumer picked it up automatically!
First, the Kafka compatibility matrix:
If your versions don't satisfy it, Spring Boot will throw exceptions on startup!!! ps: I've taken every one of those wrong turns myself o(╥﹏╥)o
(My kafka-clients is 1.1.0 and spring-kafka is 2.2.2; ignore the middle column for now.)
Back to the topic at hand: after two hours of fiddling it finally works. I could cry…
The problems I ran into were basically all jar version mismatches.
I've updated the steps above accordingly, so hopefully you can follow this tutorial straight through!!!
<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.1.1.RELEASE</version> <relativePath/> <!-- lookup parent from repository --> </parent> <groupId>com.gzky</groupId> <artifactId>study</artifactId> <version>0.0.1-SNAPSHOT</version> <name>study</name> <description>Demo project for Spring Boot</description> <properties> <java.version>1.8</java.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> <exclusions> <exclusion> <groupId>org.junit.vintage</groupId> <artifactId>junit-vintage-engine</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-redis</artifactId> <version>1.3.8.RELEASE</version> </dependency> <dependency> <groupId>redis.clients</groupId> <artifactId>jedis</artifactId> </dependency> <!-- https://mvnrepository.com/artifact/org.springframework.kafka/spring-kafka --> <dependency> <groupId>org.springframework.kafka</groupId> <artifactId>spring-kafka</artifactId> <version>2.2.0.RELEASE</version> </dependency> <!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients --> <dependency> <groupId>org.apache.kafka</groupId> <artifactId>kafka-clients</artifactId> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </project>
In the pom, the two versions below are the critical ones.
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.1.1.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.2.0.RELEASE</version>
</dependency>
spring:
  redis:
    cluster:
      # Key time-to-live; keys are removed automatically when they expire
      expire-seconds: 120
      # Command execution timeout; exceeding it raises an error
      command-timeout: 5000
      # Redis cluster node list (a hostname here is resolved via DNS to obtain the address)
      nodes: 192.168.233.11:9001,192.168.233.11:9002,192.168.233.11:9003,192.168.233.11:9004,192.168.233.11:9005,192.168.233.11:9006
  kafka:
    # Kafka broker addresses; multiple entries allowed
    bootstrap-servers: 192.168.233.11:9092,192.168.233.11:9093,192.168.233.11:9094
    producer:
      retries: 0
      # Batch size for each batched send
      batch-size: 16384
      buffer-memory: 33554432
      # Serializers for the message key and body
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      # Default consumer group id
      group-id: test-group
      auto-offset-reset: earliest
      enable-auto-commit: true
      auto-commit-interval: 100
      # Deserializers for the message key and body
      # (the consumer side takes deserializers, not serializers)
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

server:
  port: 8085
  servlet:
    #context-path: /redis
    context-path: /kafka
If you aren't using Redis, you can delete the Redis part, i.e. the spring.redis block above.
If you want to learn how to set up a Redis cluster, see 《Redis集羣redis-cluster的搭建及集成springboot》.
package com.gzky.study.utils;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

/**
 * Kafka producer utility
 *
 * @author biws
 * @date 2019/12/17
 **/
@Component
public class KfkaProducer {

    private static Logger logger = LoggerFactory.getLogger(KfkaProducer.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    /**
     * Produce a record
     * @param str the message payload
     */
    public void send(String str) {
        logger.info("Producing: " + str);
        kafkaTemplate.send("testTopic", str);
    }
}
package com.gzky.study.utils;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

/**
 * Kafka consumer, listening for messages
 *
 * @author biws
 * @date 2019/12/17
 **/
@Component
public class KafkaConsumerListener {

    private static Logger logger = LoggerFactory.getLogger(KafkaConsumerListener.class);

    @KafkaListener(topics = "testTopic")
    public void onMessage(String str) {
        //insert(str); // a database insert would go here
        logger.info("Received: " + str);
        System.out.println("Received: " + str);
    }
}
package com.gzky.study.controller;

import com.gzky.study.utils.KfkaProducer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

/**
 * External Kafka endpoints
 *
 * @author biws
 * @date 2019/12/17
 **/
@RestController
public class KafkaController {

    @Autowired
    KfkaProducer kfkaProducer;

    /**
     * Produce a message
     * @param str the message payload
     * @return true once the send has been handed off
     */
    @RequestMapping(value = "/sendKafkaWithTestTopic", method = RequestMethod.GET)
    @ResponseBody
    public boolean sendTopic(@RequestParam String str) {
        kfkaProducer.send(str);
        return true;
    }
}
Now start a listener on the server side (from the Kafka root directory). The consumer command below must use the server's concrete IP rather than localhost; that's a pit I stepped in myself.
Before that, I recommend restarting the cluster here.
Command to stop Kafka (note that kafka-server-stop.sh stops every broker process on the machine; the config-file arguments are effectively ignored):
cd /usr/local/kafka/kafka_2.11-1.1.0/bin
./kafka-server-stop.sh ../config/server.properties &
./kafka-server-stop.sh ../config/server2.properties &
./kafka-server-stop.sh ../config/server3.properties &
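Shutdown can take a moment; check with jps and kill any broker that won't stop. A quick sketch (<pid> is a placeholder for the process id jps reports):

jps               # wait until no Kafka entries remain
kill -9 <pid>     # only for a broker process that refuses to exit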
Once jps shows that all the Kafka processes are gone, start them again:
./kafka-server-start.sh ../config/server.properties &
./kafka-server-start.sh ../config/server2.properties &
./kafka-server-start.sh ../config/server3.properties &
Once Kafka is back up, start a consumer listening on the topic:
cd /usr/local/kafka/kafka_2.11-1.1.0
bin/kafka-console-consumer.sh --bootstrap-server 192.168.233.11:9092 --from-beginning --topic testTopic
All the test messages I had typed in earlier were picked up by the listener!
Start the Spring Boot application.
Then produce a message with Postman:
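If you don't have Postman handy, an equivalent curl call works too (assuming the app runs locally on port 8085 with context path /kafka, per the YAML above):

curl "http://localhost:8085/kafka/sendKafkaWithTestTopic?str=hello-kafka"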
Then enjoy the results: the message is received by the server-side console consumer.
And the listener inside the project receives it too!