kafka socket.request.max.bytes=104857600, so why does the error say "larger than 524288"?

Icuiacd   Posted: 2019-05-22   Last updated: 2019-05-22

Question

I have socket.request.max.bytes=104857600 configured, but the broker reports org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 218762506 larger than 524288). Where does the 524288 come from?

Error message

org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 218762506 larger than 524288)
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:132)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:257)
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:81)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:486)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:424)
    at kafka.network.Processor.poll(SocketServer.scala:628)
    at kafka.network.Processor.run(SocketServer.scala:545)
    at java.lang.Thread.run(Thread.java:748)
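
The key detail in the stack trace is SaslServerAuthenticator.authenticate(): the connection was still in the SASL handshake when the oversized receive arrived. During that phase the broker does not apply socket.request.max.bytes; in the Kafka versions of this era, SaslServerAuthenticator builds its NetworkReceive with a hard-coded 524288-byte limit. A declared size in the hundreds of megabytes at that point usually means the client is not actually speaking the SASL handshake (for example a client configured with the wrong security.protocol, or non-Kafka traffic hitting the port), so whatever bytes arrive first get interpreted as a huge length prefix. The sketch below is a simplified illustration of that length-prefix check, not the actual Kafka source; the class and method names are invented.

    // Minimal sketch, not the actual Kafka source: invented class/method names.
    public class ReceiveSizeCheckSketch {

        // Limit on the normal request path, from server.properties.
        static final int SOCKET_REQUEST_MAX_BYTES = 104857600;

        // Limit while a connection is still authenticating via SASL: a hard-coded
        // constant (SaslServerAuthenticator.MAX_RECEIVE_SIZE) in Kafka versions of
        // this era, independent of socket.request.max.bytes.
        static final int SASL_AUTH_MAX_RECEIVE_SIZE = 524288;

        // Every Kafka request starts with a 4-byte size; the broker rejects the
        // connection if that declared size exceeds the limit of the current phase.
        static void checkDeclaredSize(int declaredSize, int maxSize) {
            if (declaredSize < 0 || declaredSize > maxSize) {
                // Same message shape as the InvalidReceiveException in the trace above.
                throw new IllegalStateException(
                        "Invalid receive (size = " + declaredSize + " larger than " + maxSize + ")");
            }
        }

        public static void main(String[] args) {
            // The failing connection was still inside SaslServerAuthenticator.authenticate(),
            // so the 524288 limit applied rather than socket.request.max.bytes:
            checkDeclaredSize(218762506, SASL_AUTH_MAX_RECEIVE_SIZE);
            // -> Invalid receive (size = 218762506 larger than 524288)
        }
    }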



  • WARN Unexpected error from /10.144.100.176; closing connection (org.apache.kafka.common.network.Selector)
    org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1195725856 larger than 104857600)
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:91)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:343)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:291)
    at kafka.network.Processor.poll(SocketServer.scala:476)
    at kafka.network.Processor.run(SocketServer.scala:416)
    at java.lang.Thread.run(Thread.java:748)
    It works when deployed on the company host, but fails after switching to a different machine.
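
    The reported size 1195725856 is itself a clue: written as four bytes it is 0x47455420, the ASCII for "GET ", so the first thing this connection sent looks like an HTTP request (a health check, a browser, or a client speaking the wrong protocol to the broker port) rather than a Kafka length prefix. The same trick works for the 218762506 in the original question, which is 0x0D0A0D0A, i.e. "\r\n\r\n". A small JDK-only check (invented class name):

      import java.nio.ByteBuffer;
      import java.nio.charset.StandardCharsets;

      // Decodes the "size" values reported in this thread back into the raw bytes on the wire.
      public class DecodeReceiveSize {

          static String firstFourBytes(int reportedSize) {
              byte[] bytes = ByteBuffer.allocate(4).putInt(reportedSize).array();
              return new String(bytes, StandardCharsets.US_ASCII);
          }

          public static void main(String[] args) {
              System.out.println(firstFourBytes(1195725856)); // "GET " - start of an HTTP request
              System.out.println(firstFourBytes(218762506));  // "\r\n\r\n" - a blank HTTP-style line
          }
      }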

    • The Kafka startup script kafka-server-start.sh specifies the minimum memory Kafka needs at startup; the default is 1G:
      export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"

      Change the minimum startup memory in the kafka-server-start.sh script.
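
      If you edit the heap, a quick JDK-only way to confirm what -Xmx resolved to is to run a small class with the same flags (e.g. java -Xmx1G PrintMaxHeap); note this only reports the JVM heap and has no effect on the receive-size limits discussed above (invented class name):

        // Prints the maximum heap this JVM was started with, to verify an -Xmx setting.
        public class PrintMaxHeap {
            public static void main(String[] args) {
                long maxBytes = Runtime.getRuntime().maxMemory();
                System.out.printf("Max heap: %d bytes (~%d MB)%n", maxBytes, maxBytes / (1024 * 1024));
            }
        }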

        What about message.max.bytes?

        • The startup log shows message.max.bytes = 1000012. In my own load test I also hit org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 104957440 larger than 104857600), which suggests the 104857600 limit itself is taking effect. In the startup log, the only settings whose values look related to 524288 are these three:
          log.cleaner.io.buffer.size = 524288
          offsets.load.buffer.size = 5242880
          transaction.state.log.load.buffer.size = 5242880

            • The 524288 error is coming from other producers in the cluster. In my own test I set buffer.memory, batch.size and max.request.size all to 104957800, which is larger than 104857600, and sent 1000-byte records to the same partition 99999999 times (roughly as sketched below).
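
              A rough reconstruction of that producer test (a sketch only: bootstrap servers, topic name and partition are placeholders, and the SASL/JAAS client settings this cluster requires are omitted):

                import java.util.Properties;
                import org.apache.kafka.clients.producer.KafkaProducer;
                import org.apache.kafka.clients.producer.ProducerConfig;
                import org.apache.kafka.clients.producer.ProducerRecord;
                import org.apache.kafka.common.serialization.ByteArraySerializer;

                // Load test as described above: oversized buffers, 1000-byte records, one partition.
                public class OversizedBatchTest {
                    public static void main(String[] args) {
                        Properties props = new Properties();
                        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
                        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
                        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);

                        // All three set to 104957800, just above the broker's socket.request.max.bytes
                        // (104857600), so a full batch can exceed what the broker will accept.
                        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 104957800L);
                        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 104957800);
                        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 104957800);
                        props.put(ProducerConfig.LINGER_MS_CONFIG, 100);

                        byte[] value = new byte[1000]; // 1000-byte payload per record
                        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
                            for (int i = 0; i < 99999999; i++) {
                                // Explicit partition 0 so every record goes to the same partition.
                                producer.send(new ProducerRecord<>("test-topic", 0, null, value)); // placeholder topic
                            }
                        }
                    }
                }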

                • (Same WARN / InvalidReceiveException stack trace as above: Invalid receive (size = 1195725856 larger than 104857600).)
                  It works when deployed on the company host, but fails on the production machines and keeps throwing this error. I tried six production machines and they all show the same problem. Could someone please take a look?

                    broker.id=1
                    listeners=SASL_PLAINTEXT://。。。。。。。。。。。。。。
                    security.inter.broker.protocol=SASL_PLAINTEXT
                    sasl.mechanism.inter.broker.protocol=PLAIN
                    sasl.enabled.mechanisms=PLAIN
                    super.users=。。。。。。。。。。。。。。
                    authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
                    num.network.threads=3
                    num.io.threads=8
                    auto.create.topics.enable=false
                    socket.send.buffer.bytes=102400
                    socket.receive.buffer.bytes=102400
                    socket.request.max.bytes=104857600
                    max.connections.per.ip=500
                    log.dirs=。。。。。。。。。。。。。。
                    num.partitions=3
                    num.recovery.threads.per.data.dir=1
                    default.replication.factor=3
                    log.retention.hours=768
                    log.retention.bytes=1099511627776
                    log.segment.bytes=1073741824
                    log.retention.check.interval.ms=300000
                    log.cleaner.enable=false
                    zookeeper.connect=。。。。。。。。。。。。。。
                    zookeeper.connection.timeout.ms=6000
                    password.encoder.secret=null
                    Some of the information has been redacted.

                    It has always been configured at this size, and the broker has been restarted many times in between.

                    Did you restart it?..