Kafka: ZK+Kafka+Spark Streaming Cluster Setup (Part 9): Installing kafka_2.11-1.1.0

For how to set up and configure the CentOS virtual machines, see "Kafka: ZK+Kafka+Spark Streaming Cluster Setup (Part 1): Installing four CentOS VMs in VMware, with host-to-VM connectivity and internet access inside the VMs".

For how to install hadoop2.9.0, see "Kafka: ZK+Kafka+Spark Streaming Cluster Setup (Part 2): Installing hadoop2.9.0".

For how to configure hadoop2.9.0 HA, see "Kafka: ZK+Kafka+Spark Streaming Cluster Setup (Part 10): Installing hadoop2.9.0 and setting up HA".

For how to install spark2.2.1, see "Kafka: ZK+Kafka+Spark Streaming Cluster Setup (Part 3): Installing spark2.2.1".

For how to install zookeeper-3.4.12, see "Kafka: ZK+Kafka+Spark Streaming Cluster Setup (Part 8): Installing zookeeper-3.4.12".

A bit of background: Kafka's storage and installation do not depend on HDFS or Spark, as the installation steps below make clear.

The servers in the environment:

192.168.0.120 master
192.168.0.121 slave1
192.168.0.122 slave2
192.168.0.123 slave3

Note: Kafka is installed only on the three nodes slave1, slave2, and slave3, not on master (likewise, in the earlier Hadoop setup master does not act as a DataNode, and in the Spark setup master does not act as a Worker).

Download and extract

Download Kafka from the official site and upload it to the /opt directory on slave1 (192.168.0.121). The download used here is kafka_2.11-1.1.0.tgz.

Extract kafka_2.11-1.1.0.tgz on slave1:

[root@slave1 opt]# tar -zxvf kafka_2.11-1.1.0.tgz

Configure Kafka

1) Configuration file location

Path: /opt/kafka_2.11-1.1.0/config/server.properties

[root@slave1 config]# ls
connect-console-sink.properties    connect-distributed.properties  connect-file-source.properties  connect-standalone.properties  log4j.properties     server.properties       zookeeper.properties
connect-console-source.properties  connect-file-sink.properties    connect-log4j.properties        consumer.properties            producer.properties  tools-log4j.properties
[root@slave1 config]#

2) Default server.properties contents

[root@slave1 config]# cd /opt/kafka_2.11-1.1.0/config
[root@slave1 config]# more server.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended for to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

3) server.properties on slave1 after modification

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9092
port=9092
host.name=192.168.0.121
advertised.host.name=192.168.0.121
advertised.port=9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
#log.dirs=/tmp/kafka-logs
log.dirs=/opt/kafka_2.11-1.1.0/logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended for to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=192.168.0.120:2181,192.168.0.121:2181,192.168.0.122:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
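For quick reference, these are the only lines in the file above that differ from the defaults (the values shown are slave1's; broker.id and the IP must be adjusted per node, as described below):

broker.id=0
listeners=PLAINTEXT://:9092
port=9092
host.name=192.168.0.121
advertised.host.name=192.168.0.121
advertised.port=9092
log.dirs=/opt/kafka_2.11-1.1.0/logs
zookeeper.connect=192.168.0.120:2181,192.168.0.121:2181,192.168.0.122:2181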

For a detailed explanation of each setting, see the official documentation: http://kafka.apache.org/documentation.html#brokerconfigs

Note: according to the official documentation, advertised.host.name and advertised.port define the host and port that the cluster advertises to producers and consumers; if they are not set, the values of host.name and port are used by default. In practice, however, if advertised.host.name is not set, a Java client connecting to the cluster from a remote machine hits a connection timeout and throws: org.apache.kafka.common.errors.TimeoutException: Batch Expired

Debugging showed that the initial connection to the cluster succeeds, but the cluster metadata fetched after connecting is wrong: the hostname of each node in the metadata's Cluster info is a string rather than the actual IP address. That string is in fact the remote host's hostname, which means that without advertised.host.name configured, Kafka does not fall back to advertising the configured host.name as the documentation claims, but instead advertises the machine's own hostname. Since the remote client has no matching hosts entry, it naturally cannot connect to that hostname. The fix is to set both host.name and advertised.host.name to the explicit IP address.
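As an aside, host.name and advertised.host.name are deprecated in Kafka 1.1.0 in favor of the listener settings; the same fix can be sketched with those instead (not what is used above, and the IP must be each broker's own):

listeners=PLAINTEXT://192.168.0.121:9092
advertised.listeners=PLAINTEXT://192.168.0.121:9092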

Copy the configured Kafka directory to slave2 and slave3, and modify their server.properties

1) Copy the configured Kafka directory to slave2 and slave3

Run the following commands on slave1 to copy the Kafka directory to the slave2 and slave3 nodes.

Before copying, create the /opt/kafka_2.11-1.1.0 directory on slave2 and slave3. Taking slave3 as an example:

[spark@slave3 ~]$ su root
Password:
[root@slave3 spark]# mkdir /opt/kafka_2.11-1.1.0
[root@slave3 spark]# cd /opt/
[root@slave3 opt]# ls
hadoop-2.9.0  jdk1.8.0_171  jdk-8u171-linux-x64.tar.gz  kafka_2.11-1.1.0  scala-2.11.0  scala-2.11.0.tgz  spark-2.2.1-bin-hadoop2.7
[root@slave3 opt]# chmod 777 /opt/kafka_2.11-1.1.0
[root@slave3 opt]#

Run the copy from slave1:

scp -r /opt/kafka_2.11-1.1.0 spark@slave2:/opt/
scp -r /opt/kafka_2.11-1.1.0 spark@slave3:/opt/

2) Modify the server.properties file

Change 1: in /opt/kafka_2.11-1.1.0/config/server.properties on slave2 and slave3.

The key settings to change:

host.name=192.168.0.121
advertised.host.name=192.168.0.121

確保ip修改成本身的ip。

Change 2: the broker.id setting in /opt/kafka_2.11-1.1.0/config/server.properties on slave2 and slave3. Set broker.id=1 on slave2 and broker.id=2 on slave3; otherwise a duplicate broker.id raises the exception shown below and Kafka fails to start.
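For example, both changes can be applied on slave2 with sed; this is a minimal sketch assuming the paths above (the patterns anchor to the start of the line, so the IPs inside zookeeper.connect are left untouched):

sed -i 's/^broker.id=.*/broker.id=1/' /opt/kafka_2.11-1.1.0/config/server.properties
sed -i 's/^host.name=.*/host.name=192.168.0.122/' /opt/kafka_2.11-1.1.0/config/server.properties
sed -i 's/^advertised.host.name=.*/advertised.host.name=192.168.0.122/' /opt/kafka_2.11-1.1.0/config/server.properties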

Start the Kafka service on slave1, slave2, and slave3

cd /opt/kafka_2.11-1.1.0/
bin/kafka-server-start.sh -daemon config/server.properties

The startup command given in the official docs is:

bin/kafka-server-start.sh config/server.properties &
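The & form keeps the broker attached to the current terminal session, so when starting over SSH it is usually wrapped with nohup; the -daemon flag above achieves the same detachment, and this is just an alternative sketch:

nohup bin/kafka-server-start.sh config/server.properties > /dev/null 2>&1 &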

1) Startup failure: after starting on slave2 and slave3, the Kafka process killed itself a short while later. The log at /opt/kafka_2.11-1.1.0/logs/server.log contained this exception:

[2018-07-01 07:10:45,198] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2018-07-01 07:10:45,204] ERROR Error while creating ephemeral at /brokers/ids/0, node already exists and owner '144115199316656129' does not match current session '72057669184061443' (kafka.zk.KafkaZkClient$CheckedEphemeral)
[2018-07-01 07:10:45,204] INFO Result of znode creation at /brokers/ids/0 is: NODEEXISTS (kafka.zk.KafkaZkClient)
[2018-07-01 07:10:45,208] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:119)
        at kafka.zk.KafkaZkClient.checkedEphemeralCreate(KafkaZkClient.scala:1476)
        at kafka.zk.KafkaZkClient.registerBrokerInZk(KafkaZkClient.scala:84)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:254)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
        at kafka.Kafka$.main(Kafka.scala:92)
        at kafka.Kafka.main(Kafka.scala)
[2018-07-01 07:10:45,209] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)

Cause: the broker.id value in server.properties was duplicated within the cluster. In a Kafka cluster, broker.id must be unique across brokers, even when the brokers run on different machines.

Fix: edit /opt/kafka_2.11-1.1.0/config/server.properties on slave2 and slave3 so that slave2 has broker.id=1 and slave3 has broker.id=2.

Taking the startup on slave1 as an example:

[root@slave1 kafka_2.11-1.1.0]# cd /opt/kafka_2.11-1.1.0/
[root@slave1 kafka_2.11-1.1.0]# bin/kafka-server-start.sh -daemon config/server.properties
[root@slave1 kafka_2.11-1.1.0]# jps
1347 QuorumPeerMain
2493 Jps
2431 Kafka
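Here QuorumPeerMain is the ZooKeeper process and Kafka is the broker just started. Once all three brokers are up, a quick sanity check is to list the broker ids registered in ZooKeeper; this is a sketch assuming the zookeeper-3.4.12 install path from the earlier part of this series:

/opt/zookeeper-3.4.12/bin/zkCli.sh -server 192.168.0.121:2181 ls /brokers/ids

With slave1, slave2, and slave3 all running, this should print [0, 1, 2].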

Common commands

Create a topic with partitions

1) On slave1 (192.168.0.121), create a topic named my-topic with two partitions and two replicas:

cd /opt/kafka_2.11-1.1.0/
bin/kafka-topics.sh --create --zookeeper 192.168.0.120:2181,192.168.0.121:2181,192.168.0.122:2181 --replication-factor 2 --partitions 2 --topic my-topic

Output:

[root@slave1 kafka_2.11-1.1.0]# cd /opt/kafka_2.11-1.1.0/
[root@slave1 kafka_2.11-1.1.0]# bin/kafka-topics.sh --create --zookeeper 192.168.0.120:2181,192.168.0.121:2181,192.168.0.122:2181 --replication-factor 2 --partitions 2 --topic my-topic
Created topic "my-topic".
[root@slave1 kafka_2.11-1.1.0]#

2) Verify that a topic with the same name cannot be created twice within one Kafka cluster.

On slave1 (192.168.0.121), try again to create a topic named my-topic with two partitions and two replicas:

[root@slave1 kafka_2.11-1.1.0]# cd /opt/kafka_2.11-1.1.0/
[root@slave1 kafka_2.11-1.1.0]# bin/kafka-topics.sh --create --zookeeper 192.168.0.120:2181,192.168.0.121:2181,192.168.0.122:2181 --replication-factor 2 --partitions 2 --topic my-topic
Error while executing topic command : Topic 'my-topic' already exists.
[2018-07-01 07:31:23,274] ERROR org.apache.kafka.common.errors.TopicExistsException: Topic 'my-topic' already exists. (kafka.admin.TopicCommand$)
[root@slave1 kafka_2.11-1.1.0]#

On slave2 (192.168.0.122), try to create the same my-topic topic with two partitions and two replicas:

[root@slave2 kafka_2.11-1.1.0]# cd /opt/kafka_2.11-1.1.0/
[root@slave2 kafka_2.11-1.1.0]# bin/kafka-topics.sh --create --zookeeper 192.168.0.120:2181,192.168.0.121:2181,192.168.0.122:2181 --replication-factor 2 --partitions 2 --topic my-topic
Error while executing topic command : Topic 'my-topic' already exists.
[2018-07-01 07:32:08,099] ERROR org.apache.kafka.common.errors.TopicExistsException: Topic 'my-topic' already exists. (kafka.admin.TopicCommand$)
[root@slave2 kafka_2.11-1.1.0]#

3) Check the topic's status

[root@slave2 kafka_2.11-1.1.0]# cd /opt/kafka_2.11-1.1.0/
[root@slave2 kafka_2.11-1.1.0]# bin/kafka-topics.sh --describe --zookeeper 192.168.0.120:2181,192.168.0.121:2181,192.168.0.122:2181 --topic my-topic
Topic:my-topic  PartitionCount:2        ReplicationFactor:2     Configs:
        Topic: my-topic Partition: 0    Leader: 2       Replicas: 2,0   Isr: 2,0
        Topic: my-topic Partition: 1    Leader: 0       Replicas: 0,2   Isr: 0,2
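In this output, Leader is the broker id currently serving reads and writes for the partition, Replicas lists the broker ids assigned to hold copies of the partition, and Isr is the subset of those replicas currently in sync with the leader.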

4) List the topics currently in the Kafka cluster

[spark@slave1 kafka_2.11-1.1.0]$ cd /opt/kafka_2.11-1.1.0/
[spark@slave1 kafka_2.11-1.1.0]$ bin/kafka-topics.sh --list --zookeeper 192.168.0.120:2181,192.168.0.121:2181,192.168.0.122:2181
my-topic
t-my
t-order

5) Delete a topic

[spark@slave1 kafka_2.11-1.1.0]$ cd /opt/kafka_2.11-1.1.0/
[spark@slave1 kafka_2.11-1.1.0]$ bin/kafka-topics.sh --delete --zookeeper 192.168.0.120:2181,192.168.0.121:2181,192.168.0.122:2181 --topic my-topic
Topic my-topic is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
[spark@slave1 kafka_2.11-1.1.0]$ bin/kafka-topics.sh --list --zookeeper 192.168.0.120:2181,192.168.0.121:2181,192.168.0.122:2181
t-my
t-order
[spark@slave1 kafka_2.11-1.1.0]$
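About the note in the output: delete.topic.enable defaults to true in Kafka 1.1.0 (it defaulted to false in older releases), which is why the deletion takes effect here. If your brokers run with it disabled, add the following line to server.properties on every broker and restart:

delete.topic.enable=true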

At this point, the Kafka cluster has been set up successfully!
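As a final end-to-end smoke test, the console producer and consumer commands from the list below can be paired: run the producer in one terminal and the consumer in another, type a few lines into the producer, and they should appear in the consumer (this assumes the t-my topic listed earlier exists):

bin/kafka-console-producer.sh --broker-list 192.168.0.121:9092,192.168.0.122:9092,192.168.0.123:9092 --topic t-my
bin/kafka-console-consumer.sh --zookeeper 192.168.0.120:2181,192.168.0.121:2181,192.168.0.122:2181 --topic t-my --from-beginning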

Other common commands:

Describe a specified topic

bin/kafka-topics.sh --zookeeper 192.168.0.120:2181,192.168.0.121:2181,192.168.0.122:2181 --describe --topic t-my

Produce data to a topic from the console

bin/kafka-console-producer.sh --broker-list 192.168.0.121:9092,192.168.0.122:9092,192.168.0.123:9092 --topic t-my

Consume a topic's data from the console (from the beginning)

bin/kafka-console-consumer.sh --zookeeper 192.168.0.120:2181,192.168.0.121:2181,192.168.0.122:2181 --topic t-my --from-beginning

Consume a topic's data from the console (from the beginning, capped at a maximum number of messages)

bin/kafka-console-consumer.sh --zookeeper 192.168.0.120:2181,192.168.0.121:2181,192.168.0.122:2181 --topic t-my --from-beginning --max-messages 3

Check the maximum (or minimum) offset of a topic partition

bin/kafka-run-class.sh kafka.tools.GetOffsetShell --topic t-my --time -1 --broker-list 192.168.0.121:9092,192.168.0.122:9092,192.168.0.123:9092 --partitions 0

Note: time = -1 returns the maximum offset; time = -2 returns the minimum.
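The output format is topic:partition:offset, one line per queried partition; a hypothetical example of what the command above might print:

t-my:0:42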

Print keys and values while consuming a topic, applying deserializers to both:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic streams-wordcount-output \
    --from-beginning \
    --formatter kafka.tools.DefaultMessageFormatter \
    --property print.key=true \
    --property print.value=true \
    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer

Note: without the "--from-beginning" flag, the consumer starts from the latest messages; with it, consumption starts from the earliest.

Check the offset positions of a topic on Kafka (its consumption progress):

kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 192.168.0.120:9092,192.168.0.121:9092,192.168.0.122:9092 --topic s1mmetest --time -1

-1 queries each partition's current maximum message offset (note that this is not the consumer-side offset, but the position of messages within each partition).

If you want the maximum number of messages ever produced to the topic, run the command above and add up the per-partition results. But if you need the number of messages currently held in the topic on the cluster, you also have to run the following command:

kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 192.168.0.120:9092,192.168.0.121:9092,192.168.0.122:9092 --topic s1mmetest --time -2

-2 fetches each partition's current minimum offset. Subtracting the sum of these minimums from the sum obtained with the first command gives the current total number of messages in the topic on the cluster.
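A hypothetical illustration: if the -1 query returns s1mmetest:0:1000 and s1mmetest:1:800 (sum 1800), and the -2 query returns s1mmetest:0:200 and s1mmetest:1:100 (sum 300), then 1800 messages have been produced in total, of which 1800 - 300 = 1500 are currently on the cluster.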

 

Reference: https://www.cnblogs.com/RUReady/p/6479464.html
