Kafka endlessly resetting the offset in a loop

Posted by vastsky: 2020-06-28   Last updated: 2020-06-28

This is the log shown by the spark-submit program:

20/06/28 15:39:40 INFO Executor: Running task 0.0 in stage 41.0 (TID 41) [Executor task launch worker for task 41]
20/06/28 15:39:40 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 1 blocks [Executor task launch worker for task 41]
20/06/28 15:39:40 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms [Executor task launch worker for task 41]
20/06/28 15:39:50 INFO Fetcher: [Consumer clientId=consumer-1, groupId=headlinesRank] Resetting offset for partition nginx01-0 to offset 75. [JobGenerator]
20/06/28 15:39:50 INFO JobScheduler: Added jobs for time 1593329990000 ms [JobGenerator]
20/06/28 15:40:00 INFO Fetcher: [Consumer clientId=consumer-1, groupId=headlinesRank] Resetting offset for partition nginx01-0 to offset 75. [JobGenerator]
20/06/28 15:40:00 INFO JobScheduler: Added jobs for time 1593330000000 ms [JobGenerator]
20/06/28 15:40:10 INFO Fetcher: [Consumer clientId=consumer-1, groupId=headlinesRank] Resetting offset for partition nginx01-0 to offset 75. [JobGenerator]
20/06/28 15:40:10 INFO JobScheduler: Added jobs for time 1593330010000 ms [JobGenerator]
20/06/28 15:40:20 INFO Fetcher: [Consumer clientId=consumer-1, groupId=headlinesRank] Resetting offset for partition nginx01-0 to offset 75. [JobGenerator]
20/06/28 15:40:20 INFO JobScheduler: Added jobs for time 1593330020000 ms [JobGenerator]

Log reported by Kafka:

[2020-06-28 14:54:46,288] INFO [GroupCoordinator 0]: Preparing to rebalance group test2 with old generation 1 (__consumer_offsets-38) (kafka.coordinator.group.GroupCoordinator)
[2020-06-28 14:54:46,290] INFO [GroupCoordinator 0]: Group test2 with generation 2 is now empty (__consumer_offsets-38) (kafka.coordinator.group.GroupCoordinator)
[2020-06-28 14:57:17,527] INFO [GroupMetadataManager brokerId=0] Group test2 transitioned to Dead in generation 2 (kafka.coordinator.group.GroupMetadataManager)

To avoid a rebalance I run a single consumer against a single partition with no replication, but the endless loop still occurs. After Kafka prints the log above it produces no further output, leaving the Spark program stuck resetting the offset over and over, unable to consume anything.

auto.offset.reset is latest, enable.auto.commit is false, group.id is headlinesRank, and bootstrap.servers is host:port (connectivity has been verified).
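For reference, the settings described above correspond to these standard Kafka consumer configuration properties (values exactly as stated in the post; host:port is the poster's placeholder):

```properties
bootstrap.servers=host:port
group.id=headlinesRank
auto.offset.reset=latest
enable.auto.commit=false
```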

Symptom
The spark-submit job consuming from Kafka resets the offset endlessly, and describing the consumer group shows no information.
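This behavior is consistent with how offset resolution works: with enable.auto.commit=false and no manual commit, the group never stores an offset, so every lookup falls through to the auto.offset.reset policy, and latest seeks to the log-end offset each time. A toy sketch of that decision (hypothetical function, not Kafka's actual code):

```python
# Toy model of consumer offset resolution: if the group has no committed
# offset for a partition, fall back to the auto.offset.reset policy.
def resolve_offset(committed, log_end_offset, auto_offset_reset="latest"):
    """committed: the group's stored offset for the partition, or None."""
    if committed is not None:
        return committed
    if auto_offset_reset == "latest":
        return log_end_offset      # seek to the end of the log
    if auto_offset_reset == "earliest":
        return 0                   # seek to the start of the log (toy value)
    raise ValueError("no committed offset and auto.offset.reset=none")

# With nothing ever committed, every batch resolves to the same log-end
# offset (75 in the log above) as long as no new records arrive:
for _ in range(3):
    print(resolve_offset(None, 75))    # 75 each time
```

This is why the Spark log keeps printing "Resetting offset for partition nginx01-0 to offset 75" once per batch: nothing is ever committed, so the lookup repeats from scratch.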








  • [Unauthorized System] root@node-abd-003:/home/abd/cluster/kafka/bin# ./kafka-consumer-groups.sh --bootstrap-server 192.168.40.19:9092 --list
    console-consumer-89559
    console-consumer-78441
    headlinesRank
    [Note] System unauthorized, Please contact the system supplier.
    [Unauthorized System] root@node-abd-003:/home/abd/cluster/kafka/bin# ./kafka-consumer-groups.sh --bootstrap-server 192.168.40.19:9092 --group headlinesRank --describe
    

This is the consumer-group detail viewed via the interactive command

• Check the lag

## Show the consumption details of a consumer group (version 0.10.1.0+)
      bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group
      

enable.auto.commit is false?
Are you committing the offsets manually?
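Following up on the "check the lag" suggestion: per-partition lag is simply LOG-END-OFFSET minus CURRENT-OFFSET in the --describe output. A minimal parser sketch for one data row (the sample row below is hypothetical; column order is TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID):

```python
# Compute lag from one data row of `kafka-consumer-groups.sh --describe`
# output by subtracting CURRENT-OFFSET from LOG-END-OFFSET.
def parse_lag(row: str) -> int:
    fields = row.split()
    current, log_end = int(fields[2]), int(fields[3])
    return log_end - current

# Hypothetical sample row for the group in this thread:
sample = "nginx01 0 70 75 5 consumer-1-xyz /192.168.40.19 consumer-1"
print(parse_lag(sample))  # 5: five records not yet consumed
```

If --describe prints nothing at all (as in the transcript above), the group has no committed offsets to report, which again points at offsets never being committed.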