Kafka Java client connection exception (org.apache.kafka.common.errors.TimeoutException: Failed to update metadata)

1. The Kafka client connection fails with the following exception

Exception in thread "main" java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
    at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1057)
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:764)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:701)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:609)
    at com.KafkaProducerExample.main(KafkaProducerExample.java:53)
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

Root cause:

1. The Java client library version does not match the Kafka server version.

2. /kafka_2.11-0.9.0.0/config/server.properties

listeners must be configured with an IP address, not a hostname. Because the local hosts file has no entry mapping the hostname to an IP, the producer cannot resolve it and fails to connect.

The official documentation suggests that setting host.name / advertised.host.name to the IP should allow clients to connect by IP, but in testing this configuration failed; the connection only succeeded after changing listeners.
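A minimal sketch of the relevant server.properties lines, assuming the broker address 192.168.2.176:9092 used in the client examples below (advertised.listeners is optional here and falls back to listeners when unset):

# Bind and advertise the broker by IP address instead of hostname
listeners=PLAINTEXT://192.168.2.176:9092
# Optional: the address registered in ZooKeeper and returned to clients;
# defaults to the listeners value when not set
advertised.listeners=PLAINTEXT://192.168.2.176:9092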

3. Check the configuration stored in ZooKeeper

Inspect the broker registration with get /brokers/ids/0; when the host recorded there is an IP address, the client can connect normally.
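For example, using the ZooKeeper CLI (bin/zkCli.sh); the JSON below is only illustrative of a broker registered by IP, and the exact fields may differ by version:

get /brokers/ids/0
{"jmx_port":-1,"timestamp":"...","endpoints":["PLAINTEXT://192.168.2.176:9092"],"host":"192.168.2.176","version":2,"port":9092}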

4. Java client connection example (producer)

Properties props = new Properties();
props.put("bootstrap.servers", "192.168.2.176:9092");
props.put("acks", "all");
props.put("retries", 0);
props.put("batch.size", 16384);
props.put("linger.ms", 10);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

Producer<String, String> producer = new KafkaProducer<>(props);
for (int i = 0; i < 100; i++) {
    System.out.println(i);
    // Block on the returned future so a metadata timeout surfaces immediately
    Future<RecordMetadata> future = producer.send(
            new ProducerRecord<String, String>("kafka1", Integer.toString(i), Integer.toString(i)));
    System.out.println(future.get());
}

producer.close();

Consumer example. With the new consumer API the client consumes directly from the brokers, so no ZooKeeper connection needs to be configured.

public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "192.168.2.176:9092");
    props.put("group.id", "test");
    props.put("enable.auto.commit", "true");
    props.put("auto.commit.interval.ms", "1000");
    props.put("session.timeout.ms", "30000");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
    consumer.subscribe(Arrays.asList("kafka1"));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records)
            System.out.printf("offset = %d, key = %s, value = %s\n", record.offset(), record.key(), record.value());
    }
}

Dependency version

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.9.0.0</version>
</dependency>