Kafka in production suddenly threw TimeoutException: Expiring 1 record(s) for TOPICNAME-1:963170 ms has passed since batch creation

Posted: 2020-11-18   Last updated: 2020-11-18

1. Everything had been running normally, then this error suddenly appeared and things recovered on their own a short while later. The message took 13 minutes from the start of the send until the send failure was reported. What could cause this error? Also, where is that 963170 ms configured, and can it be shortened, say to one minute?

2. Kafka version: kafka_2.12-2.3.0, ZooKeeper version: zookeeper-3.4.10, with three Kafka brokers.

3. The code uses Spring Boot's asynchronous send:

//The send started at 09:03
kafkaProducer.sendMsg(topicData).addCallback(new ListenableFutureCallback<SendResult<String, Object>>() {
    @Override
    public void onFailure(Throwable ex) {
        //The failure was not reported until 09:19
        log.error("Message send failed!", ex);
    }

    @Override
    public void onSuccess(SendResult<String, Object> result) {
        log.info("Message sent successfully");
    }
});
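
For reference, sendMsg here is presumably a thin wrapper around spring-kafka's KafkaTemplate. A minimal sketch of the equivalent direct call (the kafkaTemplate field, the send method, and the literal topic name are illustrative placeholders, not from the original code):

@Autowired
private KafkaTemplate<String, Object> kafkaTemplate;  // assumed to back kafkaProducer.sendMsg

public void send(Object topicData) {
    // send() returns immediately; the callback fires once the broker acks
    // the record, or once the producer gives up and expires the batch.
    kafkaTemplate.send("TOPICNAME", topicData).addCallback(
            result -> log.info("Message sent, partition={}, offset={}",
                    result.getRecordMetadata().partition(),
                    result.getRecordMetadata().offset()),
            ex -> log.error("Message send failed", ex));
}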

4. Kafka broker configuration:

broker.id=0

auto.create.topics.enable=false
delete.topic.enable=true
listeners=SASL_PLAINTEXT://ip:9092
advertised.listeners=SASL_PLAINTEXT://ip:9092
request.timeout.ms=50000

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
allow.everyone.if.no.acl.found=true

default.replication.factor=2
num.network.threads=8
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/data/kafka_2.12-2.3.0/mykafkalogs

num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=ip:2181,ip2:2181,ip3:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0

5. Kafka configuration in the application code:

spring:
  kafka:
    bootstrap-servers: ip:9092,ip2:9092,ip3:9092

    producer:
      acks: -1
      retries: 1
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      properties:
        max.block.ms: 40000
        linger.ms: 1000
      batch-size: 162840
      buffer-memory: 33554432

    consumer:
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      auto-offset-reset: earliest
    listener:
      missing-topics-fatal: false
    security:
      protocol: SASL_PLAINTEXT
    properties:
      sasl.mechanism: PLAIN



  • 1. request.timeout.ms (on kafka-clients 2.1+ the overall expiry is capped by delivery.timeout.ms; see the sketch after these replies).
    2. This kind of error is usually caused by a network outage or network jitter (note: jitter in production rarely lasts that long; also, I see you configured one retry, and with acks=-1 every replica must acknowledge, so if you have many replicas the cluster is under heavy network and I/O pressure).

    • One more question: is there a fixed interval for leader-to-follower synchronization, rather than replication happening immediately? For example, syncing once every 2 minutes, or once a certain volume of messages has accumulated?
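
Following up on point 1 above: a minimal sketch (not from the original thread) of the producer-side timeouts that bound how long a record may sit in a batch before the client expires it. It assumes a plain kafka-clients 2.1+ producer; with the Spring Boot setup above, the same keys would go under spring.kafka.producer.properties. The values are examples only.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerTimeoutSketch {
    public static KafkaProducer<String, String> build() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "ip:9092,ip2:9092,ip3:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "-1");      // same as the Spring config above
        props.put(ProducerConfig.RETRIES_CONFIG, 1);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 1000);
        // Timeout for a single broker round trip.
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);
        // Total time a record may spend in the producer after send() returns
        // (batching + retries), e.g. one minute as asked in the question;
        // must be >= linger.ms + request.timeout.ms.
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 60000);
        return new KafkaProducer<>(props);
    }
}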