I mainly referred to two projects, https://stackoverflow.com/questions/44651219/kafka-deployment-on-minikube and https://github.com/ramhiser/kafka-kubernetes. Both of them run a single-node Kafka, so here I try to extend the single-node setup into a multi-node Kafka cluster.
Part 1: Single-Node Kafka
To build a Kafka cluster, we still start from a single node.
1. Create the ZooKeeper files zookeeper-svc.yaml and zookeeper-deployment.yaml, and create the resources with kubectl create -f:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-service
  name: zookeeper-service
spec:
  ports:
  - name: zookeeper-port
    port: 2181
    targetPort: 2181
  selector:
    app: zookeeper
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: zookeeper
  name: zookeeper
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - image: wurstmeister/zookeeper
        imagePullPolicy: IfNotPresent
        name: zookeeper
        ports:
        - containerPort: 2181
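With the two files named as above, the resources can be created and the pod and endpoint checked roughly like this (a sketch; the label and service names match the YAML above):

kubectl create -f zookeeper-svc.yaml
kubectl create -f zookeeper-deployment.yaml
kubectl get pods -l app=zookeeper
kubectl get endpoints zookeeper-service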
2. Once the pod is running and the service's endpoint has been populated, you can go on to create kafka-svc.yaml and kafka-deployment.yaml for Kafka:
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    app: kafka
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-port
    targetPort: 9092
    nodePort: 30092
    protocol: TCP
  selector:
    app: kafka
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka
  template:
    metadata:
      labels:
        name: kafka
        app: kafka
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: "[clusterIP of the Kafka service]"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: [clusterIP of the ZooKeeper service]:2181
        - name: KAFKA_BROKER_ID
          value: "1"
The clusterIPs can be looked up with kubectl get svc. The value of KAFKA_ZOOKEEPER_CONNECT can also be changed to zookeeper-service:2181.
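For example, both clusterIPs can be read from the CLUSTER-IP column of:

kubectl get svc zookeeper-service kafka-service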
3. After creation, the services need to be tested. I followed the method from http://www.javashuo.com/article/p-pyycgahe-b.html.
Before that, since this Kafka runs inside a container, first enter the container with the following command:
kubectl exec -it [Kafka pod name] -- /bin/bash
Inside the container, the Kafka command-line tools are stored in the /opt/kafka/bin directory; cd into it:
cd /opt/kafka/bin
The remaining steps are similar to the blog post above. For a single-node Kafka, the same node has to act as both producer and consumer. Run the following command:
kafka-console-producer.sh --broker-list [clusterIP of the Kafka service]:9092 --topic test
If everything is working, a > prompt appears below, waiting for messages. This terminal has now become the producer.
Open another Linux terminal and run the same commands to enter the container; this time the terminal acts as the consumer. Note that the way the blog post above creates a consumer has changed in newer Kafka versions; run the following command instead:
kafka-console-consumer.sh --bootstrap-server [clusterIP of the Kafka service]:9092 --topic test --from-beginning
Then type messages into the producer and check whether the consumer receives them. If it does, everything is running correctly.
Finally, you can also run the following command to list all topics:
kafka-topics.sh --list --zookeeper [clusterIP of the ZooKeeper service]:2181
Note that some commands take the Kafka port while others take the ZooKeeper port; be careful to distinguish the two.
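As an illustration of the distinction: in this Kafka version, topic administration goes through ZooKeeper on 2181 while producing and consuming go through the brokers on 9092. Describing the test topic therefore looks like this (same placeholder as above):

kafka-topics.sh --describe --zookeeper [clusterIP of the ZooKeeper service]:2181 --topic test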
Part 2: Multi-Node Kafka Cluster
Once the single-node service runs successfully, you can try adding Kafka nodes to form a cluster. My Kubernetes cluster has 3 nodes, so the Kafka cluster I build also has 3 nodes, one running on each machine.
I use 3 Deployments each to run Kafka and ZooKeeper here; a more elegant way would be a StatefulSet. The official Kubernetes documentation has an example of building a ZooKeeper cluster with a StatefulSet.
However, when building ZooKeeper and Kafka with a StatefulSet, ZooKeeper's myid and Kafka's broker ID cannot be preset per instance; the logic to derive them has to be added when the image is built, and the vast majority of images on Docker Hub do not include it. Deployments are less elegant, but every node can be configured in advance and the setup is simpler to run, so each approach has its strengths.
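To make this concrete, the usual StatefulSet trick is a startup script baked into the image that derives the ID from the pod's ordinal. A minimal sketch (the data path and the 1-based offset are assumptions that vary by image):

#!/bin/bash
# StatefulSet pods are named <statefulset>-<ordinal>, e.g. zk-0, zk-1, zk-2.
# Strip everything up to the last '-' to get the ordinal, then write a 1-based myid.
ORDINAL=${HOSTNAME##*-}
echo $((ORDINAL + 1)) > /var/lib/zookeeper/data/myid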
1. Building the ZooKeeper cluster
Create the ZooKeeper files zookeeper-svc2.yaml and zookeeper-deployment2.yaml as follows:
apiVersion: v1
kind: Service
metadata:
  name: zoo1
  labels:
    app: zookeeper-1
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-1
---
apiVersion: v1
kind: Service
metadata:
  name: zoo2
  labels:
    app: zookeeper-2
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-2
---
apiVersion: v1
kind: Service
metadata:
  name: zoo3
  labels:
    app: zookeeper-3
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-3
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: zookeeper-deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-1
      name: zookeeper-1
  template:
    metadata:
      labels:
        app: zookeeper-1
        name: zookeeper-1
    spec:
      containers:
      - name: zoo1
        image: digitalwonderland/zookeeper
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zoo1
        - name: ZOOKEEPER_SERVER_2
          value: zoo2
        - name: ZOOKEEPER_SERVER_3
          value: zoo3
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: zookeeper-deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-2
      name: zookeeper-2
  template:
    metadata:
      labels:
        app: zookeeper-2
        name: zookeeper-2
    spec:
      containers:
      - name: zoo2
        image: digitalwonderland/zookeeper
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "2"
        - name: ZOOKEEPER_SERVER_1
          value: zoo1
        - name: ZOOKEEPER_SERVER_2
          value: zoo2
        - name: ZOOKEEPER_SERVER_3
          value: zoo3
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: zookeeper-deployment-3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-3
      name: zookeeper-3
  template:
    metadata:
      labels:
        app: zookeeper-3
        name: zookeeper-3
    spec:
      containers:
      - name: zoo3
        image: digitalwonderland/zookeeper
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "3"
        - name: ZOOKEEPER_SERVER_1
          value: zoo1
        - name: ZOOKEEPER_SERVER_2
          value: zoo2
        - name: ZOOKEEPER_SERVER_3
          value: zoo3
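With the files named as above, the whole ZooKeeper cluster can be created and its pods listed in one go (a sketch using a set-based label selector):

kubectl create -f zookeeper-svc2.yaml
kubectl create -f zookeeper-deployment2.yaml
kubectl get pods -l 'app in (zookeeper-1, zookeeper-2, zookeeper-3)'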
This creates 3 Deployments and 3 Services in one-to-one correspondence, so that all three instances can serve clients.
After creation, check the logs of the three ZooKeeper pods with kubectl logs and make sure no errors occurred. If the logs of the 3 nodes contain lines like the following, the ZooKeeper cluster has been set up successfully:
2016-10-06 14:04:05,904 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Leader@358] - LEADING -
LEADER ELECTION TOOK - 2613
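The quorum can also be double-checked with ZooKeeper's four-letter commands from inside a pod, assuming nc is available in the image (an assumption; use telnet or install it otherwise). One node should report Mode: leader and the other two Mode: follower:

kubectl exec [ZooKeeper pod name] -- sh -c 'echo stat | nc localhost 2181'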
2. Building the Kafka cluster
Likewise, create 3 Deployments and 3 Services, writing kafka-svc2.yaml and kafka-deployment2.yaml as follows:
apiVersion: v1
kind: Service
metadata:
  name: kafka-service-1
  labels:
    app: kafka-service-1
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-service-1
    targetPort: 9092
    nodePort: 30901
    protocol: TCP
  selector:
    app: kafka-service-1
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service-2
  labels:
    app: kafka-service-2
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-service-2
    targetPort: 9092
    nodePort: 30902
    protocol: TCP
  selector:
    app: kafka-service-2
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service-3
  labels:
    app: kafka-service-3
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-service-3
    targetPort: 9092
    nodePort: 30903
    protocol: TCP
  selector:
    app: kafka-service-3
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-service-1
  template:
    metadata:
      labels:
        name: kafka-service-1
        app: kafka-service-1
    spec:
      containers:
      - name: kafka-1
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: [clusterIP of kafka-service-1]
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181,zoo2:2181,zoo3:2181
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_CREATE_TOPICS
          value: mytopic:2:1
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-service-2
  template:
    metadata:
      labels:
        name: kafka-service-2
        app: kafka-service-2
    spec:
      containers:
      - name: kafka-2
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: [clusterIP of kafka-service-2]
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181,zoo2:2181,zoo3:2181
        - name: KAFKA_BROKER_ID
          value: "2"
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-deployment-3
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-service-3
  template:
    metadata:
      labels:
        name: kafka-service-3
        app: kafka-service-3
    spec:
      containers:
      - name: kafka-3
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: [clusterIP of kafka-service-3]
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181,zoo2:2181,zoo3:2181
        - name: KAFKA_BROKER_ID
          value: "3"
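These can be created the same way; afterwards kubectl get svc should show the three NodePorts 30901-30903 (a sketch using a set-based label selector):

kubectl create -f kafka-svc2.yaml
kubectl create -f kafka-deployment2.yaml
kubectl get svc -l 'app in (kafka-service-1, kafka-service-2, kafka-service-3)'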
Note that deployment 1 also creates a new topic, mytopic, with 2 partitions and a replication factor of 1 via the KAFKA_CREATE_TOPICS environment variable.
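Once all three brokers have registered, the partition placement of mytopic can be inspected from inside any Kafka pod (zoo1 resolves through the Service created above). With 2 partitions and replication factor 1 spread over 3 brokers, the two partitions should land on different brokers:

kafka-topics.sh --describe --zookeeper zoo1:2181 --topic mytopic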
3. Testing
The test procedure is basically the same as in the single-node case, so I won't repeat it here. The difference is that this time different nodes can act as the producer and the consumer.
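For example, after exec-ing into the pods and cd-ing to /opt/kafka/bin as before, the producer can attach to broker 1 while the consumer bootstraps from broker 2 (same clusterIP placeholders as in the YAML):

(inside the kafka-deployment-1 pod)
kafka-console-producer.sh --broker-list [clusterIP of kafka-service-1]:9092 --topic mytopic

(inside the kafka-deployment-2 pod)
kafka-console-consumer.sh --bootstrap-server [clusterIP of kafka-service-2]:9092 --topic mytopic --from-beginning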
And with that, the Kafka cluster on Kubernetes is complete!