Kafka (Part 2): Setting Up a kafka_2.11-1.1.0 Cluster on CentOS 7.5 and Running Simple Tests

1. Download

Download page:

http://kafka.apache.org/downloads.html    I downloaded kafka_2.11-1.1.0.tgz, the build for Scala 2.11.

2. Kafka Installation

Cluster plan

IP              Node name  Kafka  Zookeeper  JDK  Scala
192.168.100.21  node21     Kafka  Zookeeper  Jdk  Scala
192.168.100.22  node22     Kafka  Zookeeper  Jdk  Scala
192.168.100.23  node23     Kafka  Zookeeper  Jdk  Scala

For Zookeeper cluster installation, see the earlier article: Setting Up a Zookeeper 3.4.12 Cluster on CentOS 7.5 with Command-Line Operations.

2.1 Upload and Extract

[admin@node21 software]$ tar zxvf kafka_2.11-1.1.0.tgz -C /opt/module/

2.2 Create the Log Directory

[admin@node21 software]$ cd /opt/module/kafka_2.11-1.1.0
[admin@node21 kafka_2.11-1.1.0]$ mkdir logs

2.3 Edit the Configuration File

Enter Kafka's configuration directory:

[admin@node21 kafka_2.11-1.1.0]$ cd config/
[admin@node21 config]$ vi server.properties

The only file you need to focus on is server.properties. You will notice the config directory also contains Zookeeper configuration files; Kafka can be started against its bundled Zookeeper, but a standalone Zookeeper cluster is recommended.

In server.properties, broker.id and listeners must be different on every node.

# Whether topics may be deleted; defaults to false (no manual deletion)
delete.topic.enable=true
# Unique ID of this machine within the cluster, same idea as Zookeeper's myid
broker.id=0
# Address and port this Kafka server listens on; the default port is 9092
listeners=PLAINTEXT://192.168.100.21:9092
# Number of threads the broker uses for network processing
num.network.threads=3
# Number of threads the broker uses for I/O processing
num.io.threads=8
# Send buffer size; data is buffered until it reaches a certain size before being sent, which improves performance
socket.send.buffer.bytes=102400
# Receive buffer size; received data is written out once the buffer reaches a certain size
socket.receive.buffer.bytes=102400
# Maximum size of a request sent to or from Kafka; must not exceed the Java heap size
socket.request.max.bytes=104857600
# Directory where message logs are stored
log.dirs=/opt/module/kafka_2.11-1.1.0/logs
# Default number of partitions; a topic gets one partition by default
num.partitions=1
# Number of threads per data directory used for log recovery
num.recovery.threads.per.data.dir=1
# Default maximum retention time for messages: 168 hours, i.e. 7 days
log.retention.hours=168
# Kafka appends messages to segment files; when a segment exceeds this size, a new file is rolled
log.segment.bytes=1073741824
# Check the retention settings above every 300000 ms
log.retention.check.interval.ms=300000
# Whether to enable the log cleaner (needed for log compaction); usually left disabled
log.cleaner.enable=false
# Zookeeper connection string
zookeeper.connect=node21:2181,node22:2181,node23:2181
# Zookeeper connection timeout
zookeeper.connection.timeout.ms=6000

2.4 Distribute the Installation to the Other Nodes

[admin@node21 module]$ scp -r kafka_2.11-1.1.0 admin@node22:/opt/module/
[admin@node21 module]$ scp -r kafka_2.11-1.1.0 admin@node23:/opt/module/

On node22 and node23, edit config/server.properties and change the values of broker.id and listeners, as sketched below.
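For reference, the per-node differences are only these two lines; the values below follow the cluster plan above (any broker.id works as long as each node's is unique):

# node22 (config/server.properties)
broker.id=1
listeners=PLAINTEXT://192.168.100.22:9092

# node23 (config/server.properties)
broker.id=2
listeners=PLAINTEXT://192.168.100.23:9092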

2.5 Add Environment Variables

[admin@node21 module]$ vi /etc/profile
#KAFKA_HOME
export KAFKA_HOME=/opt/module/kafka_2.11-1.1.0
export PATH=$PATH:$KAFKA_HOME/bin

Save, then make it take effect immediately:

[admin@node21 module]$  source /etc/profile

3. Starting the Kafka Cluster

3.1 Start the Zookeeper Cluster First

Run this on every Zookeeper node:

[admin@node21 ~]$ zkServer.sh start
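Before starting Kafka, it is worth confirming the quorum is healthy; zkServer.sh status should report Mode: leader on exactly one node and Mode: follower on the other two:

[admin@node21 ~]$ zkServer.sh status
[admin@node22 ~]$ zkServer.sh status
[admin@node23 ~]$ zkServer.sh status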

3.2 Start the Kafka Service in the Background

Run this on every Kafka node:

[admin@node21 kafka_2.11-1.1.0]$ bin/kafka-server-start.sh config/server.properties &
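Alternatively, if passwordless ssh is configured for the admin user (an assumption, not something set up in this article), a short sketch can start all three brokers from node21 and then verify each one with jps:

# Start the broker on every node; assumes passwordless ssh for admin
for host in node21 node22 node23; do
  ssh admin@$host "cd /opt/module/kafka_2.11-1.1.0 && nohup bin/kafka-server-start.sh config/server.properties > logs/kafka.log 2>&1 &"
done

# On each node, jps should now list a Kafka process
jps | grep Kafka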

4. Kafka Command-Line Operations

Broker list used below: node21:9092,node22:9092,node23:9092

Zookeeper connect string used below: node21:2181,node22:2181,node23:2181

4.1 Create a New Topic

[admin@node22 kafka_2.11-1.1.0]$ bin/kafka-topics.sh
Create, delete, describe, or change a topic.
Option                                   Description                            
------                                   -----------                            
--alter                                  Alter the number of partitions, replica assignment, and/or configuration for the topic.
--config <String: name=value>            A topic configuration override for the topic being created or altered. The following is a list of valid configurations: cleanup.policy, compression.type, delete.retention.ms, file.delete.delay.ms, flush.messages, flush.ms, follower.replication.throttled.replicas, index.interval.bytes, leader.replication.throttled.replicas, max.message.bytes, message.format.version, message.timestamp.difference.max.ms, message.timestamp.type, min.cleanable.dirty.ratio, min.compaction.lag.ms, min.insync.replicas, preallocate, retention.bytes, retention.ms, segment.bytes, segment.index.bytes, segment.jitter.ms, segment.ms, unclean.leader.election.enable. See the Kafka documentation for full details on the topic configs.
--create                                 Create a new topic.
--delete                                 Delete a topic.
--delete-config <String: name>           A topic configuration override to be removed for an existing topic (see the list of configurations under the --config option).
--describe                               List details for the given topics.
--disable-rack-aware                     Disable rack aware replica assignment.
--force                                  Suppress console prompts.
--help                                   Print usage information.
--if-exists                              If set when altering or deleting topics, the action will only execute if the topic exists.
--if-not-exists                          If set when creating topics, the action will only execute if the topic does not already exist.
--list                                   List all available topics.
--partitions <Integer: # of partitions>  The number of partitions for the topic being created or altered. (WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected.)
--replica-assignment <String>            A list of manual partition-to-broker assignments for the topic being created or altered, in the form broker_id_for_part1_replica1:broker_id_for_part1_replica2, broker_id_for_part2_replica1:broker_id_for_part2_replica2, ...
--replication-factor <Integer>           The replication factor for each partition in the topic being created.
--topic <String: topic>                  The topic to be created, altered or described. Can also accept a regular expression, except with the --create option.
--topics-with-overrides                  If set when describing topics, only show topics that have overridden configs.
--unavailable-partitions                 If set when describing topics, only show partitions whose leader is not available.
--under-replicated-partitions            If set when describing topics, only show under-replicated partitions.
--zookeeper <String: hosts>              REQUIRED: The connection string for the zookeeper connection in the form host:port. Multiple hosts can be given to allow fail-over.

Create a new topic on node21:

[admin@node21 kafka_2.11-1.1.0]$ bin/kafka-topics.sh --create --zookeeper node21:2181,node22:2181,node23:2181 --replication-factor 3 --partitions 3 --topic TestTopic

Option notes:

--topic defines the topic name

--replication-factor defines the number of replicas

--partitions defines the number of partitions
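As the help output above shows, --if-not-exists makes creation idempotent; a sketch with the same settings that is safe to re-run:

# Creates TestTopic only if it does not already exist
bin/kafka-topics.sh --create --if-not-exists --zookeeper node21:2181,node22:2181,node23:2181 --replication-factor 3 --partitions 3 --topic TestTopic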

4.2 Describe the Topic's Replica Information

[admin@node21 kafka_2.11-1.1.0]$ bin/kafka-topics.sh --describe --zookeeper node21:2181,node22:2181,node23:2181 --topic TestTopic

4.3 List the Topics Already Created

[admin@node21 kafka_2.11-1.1.0]$ kafka-topics.sh --list --zookeeper node21:2181,node22:2181,node23:2181

4.4 Test Producing Messages

[admin@node22 kafka_2.11-1.1.0]$ bin/kafka-console-producer.sh 
Read data from standard input and publish it to Kafka.
Option                                   Description
------                                   -----------
--batch-size <Integer: size>             Number of messages to send in a single batch if they are not being sent synchronously. (default: 200)
--broker-list <String: broker-list>      REQUIRED: The broker list string in the form HOST1:PORT1,HOST2:PORT2.
--compression-codec [String]             The compression codec: either 'none', 'gzip', 'snappy', or 'lz4'. If specified without value, then it defaults to 'gzip'.
--key-serializer <String: encoder_class> The class name of the message encoder implementation to use for serializing keys. (default: kafka.serializer.DefaultEncoder)
--line-reader <String: reader_class>     The class name of the class to use for reading lines from standard in. By default each line is read as a separate message. (default: kafka.tools.ConsoleProducer$LineMessageReader)
--max-block-ms <Long>                    The max time that the producer will block for during a send request. (default: 60000)
--max-memory-bytes <Long>                The total memory used by the producer to buffer records waiting to be sent to the server. (default: 33554432)
--max-partition-memory-bytes <Long>      The buffer size allocated for a partition. When records are received which are smaller than this size the producer will attempt to optimistically group them together until this size is reached. (default: 16384)
--message-send-max-retries <Integer>     Brokers can fail receiving the message for multiple reasons, and being unavailable transiently is just one of them. This property specifies the number of retries before the producer gives up and drops this message. (default: 3)
--metadata-expiry-ms <Long>              The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any leadership changes. (default: 300000)
--old-producer                           Use the old producer implementation.
--producer-property <String>             A mechanism to pass user-defined properties in the form key=value to the producer.
--producer.config <String: config file>  Producer config properties file. Note that [producer-property] takes precedence over this config.
--property <String: prop>                A mechanism to pass user-defined properties in the form key=value to the message reader. This allows custom configuration for a user-defined message reader.
--queue-enqueuetimeout-ms <Integer>      Timeout for event enqueue. (default: 2147483647)
--queue-size <Integer: queue_size>       If set and the producer is running in asynchronous mode, this gives the maximum amount of messages that will queue awaiting sufficient batch size. (default: 10000)
--request-required-acks <String>         The required acks of the producer requests. (default: 1)
--request-timeout-ms <Integer>           The ack timeout of the producer requests. Value must be non-negative and non-zero. (default: 1500)
--retry-backoff-ms <Integer>             Before each retry, the producer refreshes the metadata of relevant topics. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata. (default: 100)
--socket-buffer-size <Integer: size>     The size of the tcp RECV buffer. (default: 102400)
--sync                                   If set, message send requests to the brokers are sent synchronously, one at a time as they arrive.
--timeout <Integer: timeout_ms>          If set and the producer is running in asynchronous mode, this gives the maximum amount of time a message will queue awaiting sufficient batch size. The value is given in ms. (default: 1000)
--topic <String: topic>                  REQUIRED: The topic id to produce messages to.
--value-serializer <String>              The class name of the message encoder implementation to use for serializing values. (default: kafka.serializer.DefaultEncoder)

Produce messages on node21:

[admin@node21 kafka_2.11-1.1.0]$ bin/kafka-console-producer.sh --broker-list node21:9092,node22:9092,node23:9092 --topic TestTopic

4.5 Test Consuming Messages

[admin@node22 kafka_2.11-1.1.0]$ bin/kafka-console-consumer.sh
The console consumer is a tool that reads data from Kafka and outputs it to standard output.
Option                                   Description
------                                   -----------
--blacklist <String: blacklist>          Blacklist of topics to exclude from consumption.
--bootstrap-server <String: server>      REQUIRED (unless old consumer is used): The server to connect to.
--consumer-property <String>             A mechanism to pass user-defined properties in the form key=value to the consumer.
--consumer.config <String: config file>  Consumer config properties file. Note that [consumer-property] takes precedence over this config.
--csv-reporter-enabled                   If set, the CSV metrics reporter will be enabled.
--delete-consumer-offsets                If specified, the consumer path in zookeeper is deleted when starting up.
--enable-systest-events                  Log lifecycle events of the consumer in addition to logging consumed messages. (This is specific for system tests.)
--formatter <String: class>              The name of a class to use for formatting kafka messages for display. (default: kafka.tools.DefaultMessageFormatter)
--from-beginning                         If the consumer does not already have an established offset to consume from, start with the earliest message present in the log rather than the latest message.
--group <String: consumer group id>      The consumer group id of the consumer.
--isolation-level <String>               Set to read_committed in order to filter out transactional messages which are not committed. Set to read_uncommitted to read all messages. (default: read_uncommitted)
--key-deserializer <String>              Deserializer for keys.
--max-messages <Integer: num_messages>   The maximum number of messages to consume before exiting. If not set, consumption is continual.
--metrics-dir <String>                   If csv-reporter-enable is set, and this parameter is set, the csv metrics will be output here.
--new-consumer                           Use the new consumer implementation. This is the default, so this option is deprecated and will be removed in a future release.
--offset <String: consume offset>        The offset id to consume from (a non-negative number), or 'earliest' which means from beginning, or 'latest' which means from end. (default: latest)
--partition <Integer: partition>         The partition to consume from. Consumption starts from the end of the partition unless '--offset' is specified.
--property <String: prop>                The properties to initialize the message formatter.
--skip-message-on-error                  If there is an error when processing a message, skip it instead of halting.
--timeout-ms <Integer: timeout_ms>       If specified, exit if no message is available for consumption for the specified interval.
--topic <String: topic>                  The topic id to consume on.
--value-deserializer <String>            Deserializer for values.
--whitelist <String: whitelist>          Whitelist of topics to include for consumption.
--zookeeper <String: urls>               REQUIRED (only when using old consumer): The connection string for the zookeeper connection in the form host:port. Multiple URLS can be given to allow fail-over.

Consume messages on node22 (old consumer command):

[admin@node22 kafka_2.11-1.1.0]$ kafka-console-consumer.sh --zookeeper node21:2181,node22:2181,node23:2181  --from-beginning --topic TestTopic

--from-beginning reads all existing data in the TestTopic topic from the start; include it or not depending on your use case.

New consumer command:

[admin@node22 kafka_2.11-1.1.0]$ kafka-console-consumer.sh --bootstrap-server node21:9092,node22:9092,node23:9092  --from-beginning --topic TestTopic
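A quick non-interactive smoke test, as a sketch combining the two commands above (--max-messages is listed in the consumer help output):

# Produce three test messages by piping stdin into the console producer
printf 'msg1\nmsg2\nmsg3\n' | bin/kafka-console-producer.sh --broker-list node21:9092,node22:9092,node23:9092 --topic TestTopic

# Read them back, exiting after three messages instead of blocking forever
bin/kafka-console-consumer.sh --bootstrap-server node21:9092,node22:9092,node23:9092 --from-beginning --max-messages 3 --topic TestTopic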

4.6 Delete a Topic

[admin@node22 kafka_2.11-1.1.0]$ bin/kafka-topics.sh --zookeeper node21:2181,node22:2181,node23:2181  --delete --topic TestTopic

This requires delete.topic.enable=true in server.properties; otherwise the topic is only marked for deletion (or the brokers must be restarted for the deletion to complete).
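If a topic lingers in the marked-for-deletion state, the pending deletions can be inspected in Zookeeper; a sketch using the zkCli.sh shell bundled with Zookeeper, assuming /admin/delete_topics is where this Kafka version records them:

[admin@node21 ~]$ zkCli.sh -server node21:2181
ls /admin/delete_topics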

4.7 Stop the Kafka Service

[admin@node21 kafka_2.11-1.1.0]$ bin/kafka-server-stop.sh

4.8 Write a Kafka Startup Script

[admin@node21 kafka_2.11-1.1.0]$ cd bin
[admin@node21 bin]$ vi start-kafka.sh
#!/bin/bash
nohup /opt/module/kafka_2.11-1.1.0/bin/kafka-server-start.sh /opt/module/kafka_2.11-1.1.0/config/server.properties >/opt/module/kafka_2.11-1.1.0/logs/kafka.log 2>&1 &

Make the script executable: chmod +x start-kafka.sh
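A matching stop script can simply delegate to the bundled shutdown script; a sketch, with the name stop-kafka.sh being my own choice:

[admin@node21 bin]$ vi stop-kafka.sh
#!/bin/bash
/opt/module/kafka_2.11-1.1.0/bin/kafka-server-stop.sh

Remember to chmod +x stop-kafka.sh as well.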

5. Kafka Configuration Reference

5.1 Broker Configuration

Each entry below is listed as: property (default value): description.

broker.id (no default): Required; the broker's unique identifier.

log.dirs (default: /tmp/kafka-logs): Directories where Kafka data is stored. Multiple comma-separated directories may be given; a new partition is placed in the directory currently holding the fewest partitions.

port (default: 9092): Port on which the broker accepts client connections.

zookeeper.connect (default: null): Zookeeper connection string, in the form hostname1:port1,hostname2:port2,hostname3:port3. One or more hosts may be listed; listing all of them improves reliability. Note that this setting also allows a Zookeeper chroot path to isolate this Kafka cluster's data from other applications, which is recommended, e.g. hostname1:port1,hostname2:port2,hostname3:port3/chroot/path. Consumers must use the same connection string, including the chroot.

message.max.bytes (default: 1000000): The largest message the server will accept. Keep it consistent with the consumer's fetch.message.max.bytes, or producers may write messages too large for consumers to fetch.

num.io.threads (default: 8): Number of I/O threads the server uses to execute read/write requests; it should be at least the number of disks on the server.

queued.max.requests (default: 500): Size of the queue of requests awaiting the I/O threads; when the number of outstanding requests exceeds this, the network threads stop accepting new requests.

socket.send.buffer.bytes (default: 100 * 1024): The SO_SNDBUF buffer the server prefers for socket connections.

socket.receive.buffer.bytes (default: 100 * 1024): The SO_RCVBUF buffer the server prefers for socket connections.

socket.request.max.bytes (default: 100 * 1024 * 1024): Maximum request size the server allows, to guard against out-of-memory; it should be smaller than the Java heap size.

num.partitions (default: 1): Default partition count for a topic created without an explicit partition count; 5 is a reasonable starting point.

log.segment.bytes (default: 1024 * 1024 * 1024): Segment file size; a new segment is rolled once this is exceeded. Can be overridden per topic.

log.roll.{ms,hours} (default: 24 * 7 hours): Time after which a new segment file is rolled. Can be overridden per topic.

log.retention.{ms,minutes,hours} (default: 7 days): Retention period for Kafka segment logs; logs older than this are deleted. Can be overridden per topic. With high data volume, consider lowering it.

log.retention.bytes (default: -1): Maximum size per partition; data beyond it is deleted. Note this controls each partition, not the topic. Can be overridden per topic.

log.retention.check.interval.ms (default: 5 minutes): How often the deletion policies are checked.

auto.create.topics.enable (default: true): Whether topics are created automatically. Setting it to false is recommended so topic management stays strict and producers cannot write to misspelled topics.

default.replication.factor (default: 1): Default replica count; 2 is recommended.

replica.lag.time.max.ms (default: 10000): If the leader receives no fetch request from a follower within this window, the leader removes it from the ISR (in-sync replicas).

replica.lag.max.messages (default: 4000): If a replica falls behind the leader by more than this many messages, the leader removes it from the ISR.

replica.socket.timeout.ms (default: 30 * 1000): Timeout for requests a replica sends to the leader.

replica.socket.receive.buffer.bytes (default: 64 * 1024): The socket receive buffer for network requests to the leader for replicating data.

replica.fetch.max.bytes (default: 1024 * 1024): The number of bytes of messages to attempt to fetch for each partition in the fetch requests the replicas send to the leader.

replica.fetch.wait.max.ms (default: 500): The maximum amount of time to wait for data to arrive on the leader in the fetch requests sent by the replicas to the leader.

num.replica.fetchers (default: 1): Number of threads used to replicate messages from leaders. Increasing this value can increase the degree of I/O parallelism in the follower broker.

fetch.purgatory.purge.interval.requests (default: 1000): The purge interval (in number of requests) of the fetch request purgatory.

zookeeper.session.timeout.ms (default: 6000): ZooKeeper session timeout. If the broker sends no heartbeat to Zookeeper within this time, Zookeeper considers it dead. Too low, and nodes are marked dead too easily; too high, and real failures are detected too late.

zookeeper.connection.timeout.ms (default: 6000): Timeout for a client establishing a connection to Zookeeper.

zookeeper.sync.time.ms (default: 2000): How far a ZK follower may lag behind the ZK leader.

controlled.shutdown.enable (default: true): Enables controlled broker shutdown. When enabled, a broker moves all of its leaders to other brokers before shutting itself down; recommended for cluster stability.

auto.leader.rebalance.enable (default: true): If this is enabled the controller will automatically try to balance leadership for partitions among the brokers by periodically returning leadership to the "preferred" replica for each partition if it is available.

leader.imbalance.per.broker.percentage (default: 10): The percentage of leader imbalance allowed per broker. The controller will rebalance leadership if this ratio goes above the configured value per broker.

leader.imbalance.check.interval.seconds (default: 300): The frequency with which to check for leader imbalance.

offset.metadata.max.bytes (default: 4096): The maximum amount of metadata to allow clients to save with their offsets.

connections.max.idle.ms (default: 600000): Idle connections timeout: the server socket processor threads close connections that idle longer than this.

num.recovery.threads.per.data.dir (default: 1): The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.

unclean.leader.election.enable (default: true): Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss.

delete.topic.enable (default: false): Enables topic deletion; setting it to true is recommended.

offsets.topic.num.partitions (default: 50): The number of partitions for the offset commit topic. Since changing this after deployment is currently unsupported, we recommend using a higher setting for production (e.g., 100-200).

offsets.topic.retention.minutes (default: 1440): Offsets older than this age will be marked for deletion. The actual purge occurs when the log cleaner compacts the offsets topic.

offsets.retention.check.interval.ms (default: 600000): The frequency at which the offset manager checks for stale offsets.

offsets.topic.replication.factor (default: 3): The replication factor for the offset commit topic. A higher setting (e.g., three or four) is recommended to ensure higher availability. If the offsets topic is created when fewer brokers than the replication factor are available, it will be created with fewer replicas.

offsets.topic.segment.bytes (default: 104857600): Segment size for the offsets topic. Since it uses a compacted topic, this should be kept relatively low to facilitate faster log compaction and loads.

offsets.load.buffer.size (default: 5242880): An offset load occurs when a broker becomes the offset manager for a set of consumer groups (i.e., when it becomes a leader for an offsets topic partition). This setting is the batch size (in bytes) used when reading from the offsets segments while loading offsets into the offset manager's cache.

offsets.commit.required.acks (default: -1): The number of acknowledgements required before an offset commit can be accepted. Similar to the producer's acknowledgement setting. In general, the default should not be overridden.

offsets.commit.timeout.ms (default: 5000): The offset commit will be delayed until this timeout expires or the required number of replicas have received the offset commit. Similar to the producer request timeout.
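Pulling together the recommendations above, a sketch of the overrides one might add to server.properties for this cluster; the values follow the suggestions in the table and are not required settings:

# Suggested broker overrides, per the table above
delete.topic.enable=true
auto.create.topics.enable=false
default.replication.factor=2
num.partitions=5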

5.2 Producer Configuration

metadata.broker.list (no default): The list of brokers the producer queries at startup; it may be a subset of the cluster's brokers. Note this parameter is only used to fetch topic metadata; the producer then picks suitable brokers from the metadata and opens socket connections to them. Format: host1:port1,host2:port2.

request.required.acks (default: 0): See the description in section 3.2 of the Kafka documentation.

request.timeout.ms (default: 10000): How long the broker waits for acks; if exceeded, an error is returned to the client.

producer.type (default: sync): Synchronous or asynchronous mode; async means asynchronous, sync means synchronous. Asynchronous mode lets the producer push data in batches, which greatly improves broker throughput, so async is recommended.

serializer.class (default: kafka.serializer.DefaultEncoder): Serializer class; by default values are serialized to byte[].

key.serializer.class (no default): Serializer class for keys; defaults to the same as serializer.class.

partitioner.class (default: kafka.producer.DefaultPartitioner): Partitioner class; by default the key is hashed.

compression.codec (default: none): Compression format for producer messages; valid values are "none", "gzip" and "snappy". See section 4.1 of the Kafka documentation for details on compression.

compressed.topics (default: null): Topics for which compression is enabled. If a compression format is selected above, compression applies only to the topics named here; if this parameter is empty, it applies to all topics.

message.send.max.retries (default: 3): Number of retries when a producer send fails. Network problems may cause continual retrying.

retry.backoff.ms (default: 100): Before each retry, the producer refreshes the metadata of relevant topics to see if a new leader has been elected. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata.

topic.metadata.refresh.interval.ms (default: 600 * 1000): The producer generally refreshes topic metadata from brokers when there is a failure (partition missing, leader not available, ...). It also polls regularly (default: every 10 min, i.e. 600000 ms). If set to a negative value, metadata is only refreshed on failure. If set to zero, metadata is refreshed after each message sent (not recommended). Important note: the refresh happens only AFTER a message is sent, so if the producer never sends a message the metadata is never refreshed.

queue.buffering.max.ms (default: 5000): In asynchronous mode, how long the producer buffers messages. For example, a value of 1000 buffers one second of data before sending it in one go, which greatly increases broker throughput at the cost of latency.

queue.buffering.max.messages (default: 10000): In asynchronous mode, the maximum number of messages buffered in the producer queue; beyond this, the producer either blocks or drops messages.

queue.enqueue.timeout.ms (default: -1): How long the producer blocks once the limit above is reached. At 0 the producer does not block when the buffer queue is full and messages are dropped immediately; at -1 the producer blocks and never drops messages.

batch.num.messages (default: 200): In asynchronous mode, the number of messages buffered per batch; the producer sends only once this count is reached.

send.buffer.bytes (default: 100 * 1024): Socket write buffer size.

client.id (default: ""): The client id is a user-specified string sent in each request to help trace calls. It should logically identify the application making the request.
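With the console producer from section 4.4, user-defined producer settings can be passed via --producer-property (a flag listed in its help output). The console producer in 1.1.0 uses the new producer by default, whose equivalents of request.required.acks and compression.codec are acks and compression.type; the values below are purely illustrative:

bin/kafka-console-producer.sh --broker-list node21:9092,node22:9092,node23:9092 --topic TestTopic --producer-property acks=all --producer-property compression.type=snappy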

5.3 Consumer Configuration

group.id (no default): The consumer's group ID; consumers with the same group.id belong to the same group.

zookeeper.connect (no default): The consumer's Zookeeper connection string; it must match the broker's configuration.

consumer.id (default: null): Generated automatically if not set.

socket.timeout.ms (default: 30 * 1000): Socket timeout for network requests. The actual timeout is max.fetch.wait + socket.timeout.ms.

socket.receive.buffer.bytes (default: 64 * 1024): The socket receive buffer for network requests.

fetch.message.max.bytes (default: 1024 * 1024): Maximum message size allowed when fetching a topic partition. The consumer buffers this many bytes in memory per partition, so this parameter controls the consumer's memory usage. It should be at least as large as the server's maximum message size, so producers cannot send messages the consumer is unable to fetch.

num.consumer.fetchers (default: 1): The number of fetcher threads used to fetch data.

auto.commit.enable (default: true): If true, the consumer periodically saves its current offset to Zookeeper; after a failure and restart, consumption resumes from that offset.

auto.commit.interval.ms (default: 60 * 1000): How often the consumer commits its offsets to Zookeeper.

queued.max.message.chunks (default: 2): Number of message chunks buffered for consumption; each chunk can hold up to fetch.message.max.bytes of data.


fetch.min.bytes (default: 1): The minimum amount of data the server should return for a fetch request. If insufficient data is available, the request waits for that much data to accumulate before answering.

fetch.wait.max.ms (default: 100): The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy fetch.min.bytes.

rebalance.backoff.ms (default: 2000): Backoff time between retries during rebalance.

refresh.leader.backoff.ms (default: 200): Backoff time to wait before trying to determine the leader of a partition that has just lost its leader.

auto.offset.reset (default: largest): What to do when there is no initial offset in ZooKeeper or an offset is out of range. smallest: automatically reset the offset to the smallest offset; largest: automatically reset the offset to the largest offset; anything else: throw an exception to the consumer.

consumer.timeout.ms (default: -1): The consumer throws an exception if no message is available for consumption within the given time.

exclude.internal.topics (default: true): Whether messages from internal topics (such as offsets) should be exposed to the consumer.

zookeeper.session.timeout.ms (default: 6000): ZooKeeper session timeout. If the consumer fails to heartbeat to ZooKeeper for this period of time, it is considered dead and a rebalance occurs.

zookeeper.connection.timeout.ms (default: 6000): The maximum time the client waits while establishing a connection to Zookeeper.

zookeeper.sync.time.ms (default: 2000): How far a ZK follower can be behind a ZK leader.
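These old-consumer settings can be handed to the console consumer through a properties file via --consumer.config (a flag listed in the help output in section 4.5). A minimal sketch, with the file name consumer.properties and all values being my own illustrative choices:

# consumer.properties (illustrative values)
group.id=test-group
auto.offset.reset=smallest
consumer.timeout.ms=10000

kafka-console-consumer.sh --zookeeper node21:2181,node22:2181,node23:2181 --consumer.config consumer.properties --topic TestTopic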
