Containerized Kafka 0.10 cluster: restarted one broker, producing works but consumption stopped; back to normal after restarting the application

Beyond, posted: 2020-08-07, last updated: 2020-08-11
  1. Kafka 0.10 running in containers as a 3-broker cluster. After restarting one broker, producing still worked, but consumption stopped and messages piled up. Consumption returned to normal only after restarting the business application.

  2. Below is the topic information. LITTLEC_MSGGW_TASK is the business topic:

    root@littlec-kafka-0:/usr/bin# ./kafka-topics --describe --zookeeper 10.42.0.215:2181 --topic LITTLEC_MSGGW_TASK
    Topic: LITTLEC_MSGGW_TASK       PartitionCount: 12      ReplicationFactor: 2    Configs: retention.ms=86400000
         Topic: LITTLEC_MSGGW_TASK       Partition: 0    Leader: 1       Replicas: 1,2   Isr: 2,1
         Topic: LITTLEC_MSGGW_TASK       Partition: 1    Leader: 2       Replicas: 2,0   Isr: 2,0
         Topic: LITTLEC_MSGGW_TASK       Partition: 2    Leader: 0       Replicas: 0,1   Isr: 0,1
         Topic: LITTLEC_MSGGW_TASK       Partition: 3    Leader: 1       Replicas: 1,2   Isr: 2,1
         Topic: LITTLEC_MSGGW_TASK       Partition: 4    Leader: 2       Replicas: 2,0   Isr: 2,0
         Topic: LITTLEC_MSGGW_TASK       Partition: 5    Leader: 0       Replicas: 0,1   Isr: 0,1
         Topic: LITTLEC_MSGGW_TASK       Partition: 6    Leader: 1       Replicas: 1,0   Isr: 0,1
         Topic: LITTLEC_MSGGW_TASK       Partition: 7    Leader: 2       Replicas: 2,1   Isr: 2,1
         Topic: LITTLEC_MSGGW_TASK       Partition: 8    Leader: 0       Replicas: 0,2   Isr: 2,0
         Topic: LITTLEC_MSGGW_TASK       Partition: 9    Leader: 1       Replicas: 1,2   Isr: 2,1
         Topic: LITTLEC_MSGGW_TASK       Partition: 10   Leader: 2       Replicas: 2,0   Isr: 2,0
         Topic: LITTLEC_MSGGW_TASK       Partition: 11   Leader: 0       Replicas: 0,1   Isr: 0,1
    root@littlec-kafka-0:/usr/bin# ./kafka-topics --describe --zookeeper 10.42.0.215:2181 --topic __consumer_offsets
    Topic: __consumer_offsets       PartitionCount: 50      ReplicationFactor: 3    Configs: compression.type=producer,cleanup.policy=compact,segment.bytes=104857600
         Topic: __consumer_offsets       Partition: 0    Leader: 2       Replicas: 2,1,0 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 1    Leader: 0       Replicas: 0,2,1 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 2    Leader: 1       Replicas: 1,0,2 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 3    Leader: 2       Replicas: 2,0,1 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 4    Leader: 0       Replicas: 0,1,2 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 5    Leader: 1       Replicas: 1,2,0 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 6    Leader: 2       Replicas: 2,1,0 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 7    Leader: 0       Replicas: 0,2,1 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 8    Leader: 1       Replicas: 1,0,2 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 9    Leader: 2       Replicas: 2,0,1 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 10   Leader: 0       Replicas: 0,1,2 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 11   Leader: 1       Replicas: 1,2,0 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 12   Leader: 2       Replicas: 2,1,0 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 13   Leader: 0       Replicas: 0,2,1 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 14   Leader: 1       Replicas: 1,0,2 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 15   Leader: 2       Replicas: 2,0,1 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 16   Leader: 0       Replicas: 0,1,2 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 17   Leader: 1       Replicas: 1,2,0 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 18   Leader: 2       Replicas: 2,1,0 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 19   Leader: 0       Replicas: 0,2,1 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 20   Leader: 1       Replicas: 1,0,2 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 21   Leader: 2       Replicas: 2,0,1 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 22   Leader: 0       Replicas: 0,1,2 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 23   Leader: 1       Replicas: 1,2,0 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 24   Leader: 2       Replicas: 2,1,0 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 25   Leader: 0       Replicas: 0,2,1 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 26   Leader: 1       Replicas: 1,0,2 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 27   Leader: 2       Replicas: 2,0,1 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 28   Leader: 0       Replicas: 0,1,2 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 29   Leader: 1       Replicas: 1,2,0 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 30   Leader: 2       Replicas: 2,1,0 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 31   Leader: 0       Replicas: 0,2,1 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 32   Leader: 1       Replicas: 1,0,2 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 33   Leader: 2       Replicas: 2,0,1 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 34   Leader: 0       Replicas: 0,1,2 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 35   Leader: 1       Replicas: 1,2,0 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 36   Leader: 2       Replicas: 2,1,0 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 37   Leader: 0       Replicas: 0,2,1 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 38   Leader: 1       Replicas: 1,0,2 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 39   Leader: 2       Replicas: 2,0,1 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 40   Leader: 0       Replicas: 0,1,2 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 41   Leader: 1       Replicas: 1,2,0 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 42   Leader: 2       Replicas: 2,1,0 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 43   Leader: 0       Replicas: 0,2,1 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 44   Leader: 1       Replicas: 1,0,2 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 45   Leader: 2       Replicas: 2,0,1 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 46   Leader: 0       Replicas: 0,1,2 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 47   Leader: 1       Replicas: 1,2,0 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 48   Leader: 2       Replicas: 2,1,0 Isr: 2,0,1
         Topic: __consumer_offsets       Partition: 49   Leader: 0       Replicas: 0,2,1 Isr: 2,0,1
    



  • Check the consumer-group status for the affected partitions.

    ## Show consumption details for a consumer group (0.9 up to, but not including, 0.10.1.0)
    bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092 --describe --group test-consumer-group
    
    ## Show consumption details for a consumer group (0.10.1.0 and later)
    bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group
    

    That said, now that the consumer application has been restarted, this can no longer be observed.
    The point of the check was to see whether some consumer in the group was still holding the partitions without actually consuming from them. If that was the case, the likely cause is that the client application did not rebalance its consumers.
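If the `--describe` output was captured before the restart, the lag can also be totaled offline from that text. Below is a minimal stdlib-only sketch; the whitespace-separated column layout (LAG as the 5th column) is an assumption about the 0.10.x new-consumer output, so adjust the column index if your client version prints, e.g., a leading GROUP column:

```java
import java.util.Arrays;

public class GroupLag {
    /**
     * Sum the LAG column of captured `kafka-consumer-groups --describe` output.
     * Assumed column layout (verify against your client version):
     * TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID ...
     */
    static long totalLag(String describeOutput) {
        return Arrays.stream(describeOutput.split("\n"))
                .map(String::trim)
                // skip blank lines and the header row
                .filter(line -> !line.isEmpty()
                        && !line.startsWith("TOPIC") && !line.startsWith("GROUP"))
                .mapToLong(line -> {
                    String[] cols = line.split("\\s+");
                    // "-" in the LAG column means no committed offset yet
                    return cols[4].equals("-") ? 0L : Long.parseLong(cols[4]);
                })
                .sum();
    }

    public static void main(String[] args) {
        String sample =
            "TOPIC               PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID\n" +
            "LITTLEC_MSGGW_TASK  0          100             130             30   consumer-1\n" +
            "LITTLEC_MSGGW_TASK  1          200             200             0    consumer-1\n";
        System.out.println(totalLag(sample)); // prints 30
    }
}
```

A growing total with a live CONSUMER-ID on every row is exactly the "member holds the partitions but does not consume" pattern described above.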

    • The client application did not rebalance its consumers? Does that require passing some configuration when creating the consumer? My current consumer client configuration is:

      properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, littlecConfig.getConfig(KAFKA_HOST));
      properties.put(ConsumerConfig.GROUP_ID_CONFIG, KAFKA_GROUP);
      properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
      properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
      properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
      

      If the consumers were not rebalanced, what could be the cause?
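For context on why a group might never rebalance: the group coordinator evicts a member (and reassigns its partitions) only when no heartbeat arrives within `session.timeout.ms`, and in the 0.10.0 Java client heartbeats are sent from `poll()` itself. So a consumer that keeps calling `poll()` but fails to process or commit anything still looks alive and keeps its partitions. The following is a simplified stdlib-only model of that liveness check, purely for illustration (the config name is real; the class and method are hypothetical, not broker code):

```java
public class GroupLiveness {
    /**
     * Simplified model of the coordinator's liveness check: a member is
     * evicted (triggering a rebalance of its partitions) only when its last
     * heartbeat is older than session.timeout.ms. With the 0.10.0 client,
     * heartbeats come from poll(), so a "zombie" that still polls but never
     * makes progress is never evicted, and its partitions are never freed.
     */
    static boolean triggersRebalance(long nowMs, long lastHeartbeatMs, long sessionTimeoutMs) {
        return nowMs - lastHeartbeatMs > sessionTimeoutMs;
    }

    public static void main(String[] args) {
        long sessionTimeoutMs = 30_000; // pre-0.10.1 default for session.timeout.ms
        // A consumer that stopped calling poll() 40s ago is evicted:
        System.out.println(triggersRebalance(100_000, 60_000, sessionTimeoutMs)); // true
        // A zombie that polled (heartbeated) 1s ago is kept, so no rebalance:
        System.out.println(triggersRebalance(100_000, 99_000, sessionTimeoutMs)); // false
    }
}
```

This is why the earlier reply suggested checking whether a member was occupying the partitions without consuming: in that state nothing on the broker side forces a rebalance, and only restarting the client (a fresh group join) frees the partitions.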