Installing and Using a Kafka Cluster on CentOS 7 (Multi-Node, Distributed Environment)

Uninstall the JDK

CentOS 7 usually ships with its own OpenJDK. Since we generally use the Oracle JDK instead, the bundled Java should be removed first.

The preinstalled JDK can be removed with a single command: # rpm -e --nodeps `rpm -qa | grep java`

Run java -version to confirm it has been removed.
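
To double-check from the package side, the following can be used (a minimal sketch, assuming an RPM-based system):

    # should print nothing once every bundled Java package has been removed
    rpm -qa | grep -i java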

Install the JDK on CentOS (not via online installation, so the JDK can be placed under /opt)

  • Download JDK 8 on Windows

  • Upload it to CentOS over SSH; from the Windows command prompt: pscp D:\jdk-8u201-linux-x64.tar.gz root@192.168.75.129:/home/heibao

  • Extract it and move it to /opt/java

    sudo tar -vxzf jdk-8u201-linux-x64.tar.gz

    sudo mv jdk1.8.0_201 /opt/java

  • Configure the Java environment variables

    vim /etc/profile

# Append the following at the end of /etc/profile

    export JAVA_HOME=/opt/java

    export PATH=$JAVA_HOME/bin:$PATH

# Make the environment variables take effect immediately

    source /etc/profile

# After adding them, run java -version to check whether the JDK was installed successfully

java -version
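
To confirm that the environment variables point at the new installation, a couple of quick checks can be run (a minimal sketch; the paths assume the layout above):

    # should print /opt/java
    echo $JAVA_HOME
    # should print /opt/java/bin/java
    which java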

Install a Multi-Node ZooKeeper Cluster

  • Download ZooKeeper with wget

    sudo wget https://mirrors.cnnic.cn/apache/zookeeper/zookeeper-3.4.13/zookeeper-3.4.13.tar.gz

  • Extract it and move it to /opt/zookeeper

    # Extract ZooKeeper

    sudo tar -zxvf zookeeper-3.4.13.tar.gz

    # Move ZooKeeper to /opt/zookeeper

    sudo mv zookeeper-3.4.13 /opt/zookeeper

  • Edit the ZooKeeper configuration (3 nodes in this example)

    # Create the data directories (one per node)

        sudo mkdir -p /opt/zookeeper/data_logs/zookeeper1

    # Copy zoo_sample.cfg and rename the copy to zoo1.cfg

        sudo cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo1.cfg

    # Edit zoo1.cfg

        tickTime=2000

        dataDir=/opt/zookeeper/data_logs/zookeeper1

        clientPort=2181

        initLimit=5

        syncLimit=2

        server.1=192.168.75.129:2888:3888

        server.2=192.168.75.129:2889:3889

        server.3=192.168.75.129:2890:3890

  •     Create the configuration files for the other two nodes, zoo2.cfg and zoo3.cfg (use as many nodes as you like); a shell shortcut is sketched after the listings below

          Their contents are as follows (zoo2.cfg first, then zoo3.cfg); the only differences are dataDir and clientPort:

            tickTime=2000

            dataDir=/opt/zookeeper/data_logs/zookeeper2

            clientPort=2182

            initLimit=5

            syncLimit=2

            server.1=192.168.75.129:2888:3888

            server.2=192.168.75.129:2889:3889

            server.3=192.168.75.129:2890:3890

            # zoo3.cfg:

            tickTime=2000

            dataDir=/opt/zookeeper/data_logs/zookeeper3

            clientPort=2183

            initLimit=5

            syncLimit=2

            server.1=192.168.75.129:2888:3888

            server.2=192.168.75.129:2889:3889

            server.3=192.168.75.129:2890:3890
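
Instead of editing each file by hand, zoo2.cfg and zoo3.cfg can also be derived from zoo1.cfg with a small loop (a convenience sketch, assuming the paths and ports used above):

    cd /opt/zookeeper/conf
    for i in 2 3; do
        # copy zoo1.cfg and patch dataDir and clientPort for node $i
        sudo cp zoo1.cfg zoo${i}.cfg
        sudo sed -i -e "s|zookeeper1|zookeeper${i}|" -e "s|clientPort=2181|clientPort=218${i}|" zoo${i}.cfg
        # make sure the matching data directory exists
        sudo mkdir -p /opt/zookeeper/data_logs/zookeeper${i}
    done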


Explanation of the settings above:

    tickTime: the basic heartbeat and timeout unit, in milliseconds

    dataDir: where the in-memory snapshot is persisted to disk; in production, keep an eye on this directory's disk usage

    clientPort: the port ZooKeeper listens on for client connections (use different ports on the same machine; on different machines the ports may be the same or different); the default is 2181

    initLimit: the maximum number of ticks a follower may take to connect to the leader on startup (5 * tickTime = 10 seconds here) before it is considered timed out

    syncLimit: the maximum time (in ticks) a follower may take to synchronize with the leader

    server.X=host:port1:port2: X must be a globally unique number that matches the number in that node's myid file; host can be a domain name, hostname, or IP; port1 is used by followers to connect to the leader, and port2 is used for leader election


  • Create a myid file for each node. The myid file lives in the dataDir configured in that node's config file (e.g. zoo1.cfg) and contains only the single number X, as shown in the sketch below

        sudo vim /opt/zookeeper/data_logs/zookeeper1/myid
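
For the three nodes in this example the myid files can be written directly (a minimal sketch; each number must match the corresponding server.X entry above):

    echo 1 | sudo tee /opt/zookeeper/data_logs/zookeeper1/myid
    echo 2 | sudo tee /opt/zookeeper/data_logs/zookeeper2/myid
    echo 3 | sudo tee /opt/zookeeper/data_logs/zookeeper3/myid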


  • Configure the ZooKeeper environment variables

  sudo vim /etc/profile

  # Add the following

    export ZOOKEEPER_HOME=/opt/zookeeper

    export PATH=$ZOOKEEPER_HOME/bin:$PATH

  # Make the environment variables take effect immediately

    source /etc/profile

  • Start ZooKeeper (a loop for starting all three local nodes is sketched below)

   java -cp zookeeper-3.4.13.jar:lib/slf4j-api-1.7.25.jar:lib/slf4j-log4j12-1.7.25.jar:lib/log4j-1.2.17.jar:conf org.apache.zookeeper.server.quorum.QuorumPeerMain conf/zoo1.cfg

   # On separate machines you can start the service with the simpler command: bin/zkServer.sh start conf/zoo1.cfg
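
Because all three nodes run on the same machine in this example, each one has to be started with its own configuration file. A small loop does this (a sketch, run from /opt/zookeeper, reusing the java -cp command above):

    cd /opt/zookeeper
    for i in 1 2 3; do
        # start each quorum member in the background with its own config
        nohup java -cp zookeeper-3.4.13.jar:lib/slf4j-api-1.7.25.jar:lib/slf4j-log4j12-1.7.25.jar:lib/log4j-1.2.17.jar:conf \
            org.apache.zookeeper.server.quorum.QuorumPeerMain conf/zoo${i}.cfg > zoo${i}.out 2>&1 &
    done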

  • Check the cluster status

bin/zkServer.sh status conf/zoo1.cfg

bin/zkServer.sh status conf/zoo2.cfg

bin/zkServer.sh status conf/zoo3.cfg

If this succeeds, the output shows whether each node is the leader or a follower.
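
Connectivity can also be verified with the bundled CLI client (a quick check, assuming the addresses above):

    # connect to the first node; inside the shell, `ls /` should list the root znodes
    bin/zkCli.sh -server 192.168.75.129:2181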

Install the Kafka Cluster

  • Download Kafka with wget

sudo wget http://mirrors.tuna.tsinghua.edu.cn/apache/kafka/2.1.1/kafka_2.12-2.1.1.tgz



  • Extract it and move it to /opt/kafka

# Extract

sudo tar -zxvf kafka_2.12-2.1.1.tgz

# Move

sudo mv kafka_2.12-2.1.1 /opt/kafka

  • Edit the Kafka cluster configuration (a generation shortcut is sketched after the listings below)

# Create the log directories (three nodes)

cd /opt/kafka

mkdir -p data_logs/kafka1

mkdir -p data_logs/kafka2

mkdir -p data_logs/kafka3

# Make three copies of the configuration file

sudo cp config/server.properties config/server1.properties

sudo cp config/server.properties config/server2.properties

sudo cp config/server.properties config/server3.properties

# The three configuration files (server1.properties, server2.properties, server3.properties) contain:

broker.id=0

delete.topic.enable=true

listeners=PLAINTEXT://192.168.75.129:9092

log.dirs=/opt/kafka/data_logs/kafka1

zookeeper.connect=192.168.75.129:2181,192.168.75.129:2182,192.168.75.129:2183

unclean.leader.election.enable=false

zookeeper.connection.timeout.ms=6000


broker.id=1

delete.topic.enable=true

listeners=PLAINTEXT://192.168.75.129:9093

log.dirs=/opt/kafka/data_logs/kafka2

zookeeper.connect=192.168.75.129:2181,192.168.75.129:2182,192.168.75.129:2183

unclean.leader.election.enable=false

zookeeper.connection.timeout.ms=6000


broker.id=2

delete.topic.enable=true

listeners=PLAINTEXT://192.168.75.129:9094

log.dirs=/opt/kafka/data_logs/kafka3

zookeeper.connect=192.168.75.129:2181,192.168.75.129:2182,192.168.75.129:2183

unclean.leader.election.enable=false

zookeeper.connection.timeout.ms=6000
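
As with ZooKeeper, the three property files can be generated from a single template instead of being edited by hand. The sketch below appends the per-broker settings to each copy, relying on the fact that when a key appears twice in a properties file the last value wins (run from /opt/kafka; values match the listings above):

    cd /opt/kafka
    for i in 1 2 3; do
        id=$((i - 1))          # broker.id 0, 1, 2
        port=$((9091 + i))     # listener ports 9092, 9093, 9094
        sudo cp config/server.properties config/server${i}.properties
        # append the per-broker overrides
        sudo tee -a config/server${i}.properties > /dev/null <<EOF
broker.id=${id}
delete.topic.enable=true
listeners=PLAINTEXT://192.168.75.129:${port}
log.dirs=/opt/kafka/data_logs/kafka${i}
zookeeper.connect=192.168.75.129:2181,192.168.75.129:2182,192.168.75.129:2183
unclean.leader.election.enable=false
zookeeper.connection.timeout.ms=6000
EOF
    done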


  • Configure the Kafka environment variables

sudo vim /etc/profile

# Add the following:

export KAFKA_HOME=/opt/kafka

export PATH=$KAFKA_HOME/bin:$PATH


# Make the environment variables take effect immediately

source /etc/profile

  • In /etc/hosts, comment out the 127.0.0.1 entry; otherwise ZooKeeper may refuse connections (the exact cause is unclear)

vim /etc/hosts

# Comment out the first 127.0.0.1 line:

#127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

  • Start Kafka. After startup you can check the output in server.log under Kafka's logs directory (path: logs/server.log); a quick verification is sketched after the start commands below

bin/kafka-server-start.sh -daemon config/server1.properties

bin/kafka-server-start.sh -daemon config/server2.properties

bin/kafka-server-start.sh -daemon config/server3.properties
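
A quick way to confirm that all three brokers came up (a minimal sketch; jps ships with the JDK installed earlier):

    # three Kafka processes should be listed
    jps | grep Kafka
    # startup details and any errors are written here
    tail -n 50 logs/server.log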

Verifying the Kafka Cluster Deployment on CentOS 7

  • Test topic creation and deletion

bin/kafka-topics.sh --create --zookeeper 192.168.75.129:2181,192.168.75.129:2182,192.168.75.129:2183 --topic test-topic --partitions 3 --replication-factor 3

     Verify the created topic

         bin/kafka-topics.sh --zookeeper 192.168.75.129:2181,192.168.75.129:2182,192.168.75.129:2183 --list

         bin/kafka-topics.sh --zookeeper 192.168.75.129:2181,192.168.75.129:2182,192.168.75.129:2183 --describe --topic test-topic

     Delete the created topic (a quick check that the deletion went through is sketched below)

         bin/kafka-topics.sh --delete --zookeeper 192.168.75.129:2181,192.168.75.129:2182,192.168.75.129:2183 --topic test-topic
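
Because delete.topic.enable=true is set on every broker, the topic is actually removed rather than only marked for deletion; listing the topics again should confirm it is gone:

    # test-topic should no longer appear in the output
    bin/kafka-topics.sh --zookeeper 192.168.75.129:2181,192.168.75.129:2182,192.168.75.129:2183 --list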


  • Test message production and consumption (using the bundled console scripts; a non-interactive variant is sketched after the two commands below)

        (open one terminal as the producer) bin/kafka-console-producer.sh --broker-list 192.168.75.129:9092,192.168.75.129:9093,192.168.75.129:9094 --topic test-topic

       (open another terminal as the consumer) bin/kafka-console-consumer.sh --bootstrap-server 192.168.75.129:9092,192.168.75.129:9093,192.168.75.129:9094 --topic test-topic --from-beginning
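
The console scripts can also be used non-interactively, which makes for a quick smoke test (a sketch, assuming the same topic and broker addresses):

    # pipe a single test message into the topic
    echo "hello kafka" | bin/kafka-console-producer.sh --broker-list 192.168.75.129:9092,192.168.75.129:9093,192.168.75.129:9094 --topic test-topic
    # read it back and stop after one message
    bin/kafka-console-consumer.sh --bootstrap-server 192.168.75.129:9092,192.168.75.129:9093,192.168.75.129:9094 --topic test-topic --from-beginning --max-messages 1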


  • Producer throughput test

        bin/kafka-producer-perf-test.sh --topic test-topic --num-records 5000 --record-size 200 --throughput -1 --producer-props bootstrap.servers=192.168.75.129:9092,192.168.75.129:9093,192.168.75.129:9094 acks=-1


  • Consumer throughput test

         bin/kafka-consumer-perf-test.sh --broker-list 192.168.75.129:9092,192.168.75.129:9093,192.168.75.129:9094 --messages 5000 --topic test-topic
