Whenever a broker stops or crashes, leadership for that broker's partitions transfers to other replicas. This means that by default, when the broker is restarted, it will only be a follower for all its partitions, meaning it will not be used for client reads and writes.
To avoid this imbalance, Kafka has a notion of preferred replicas. If the list of replicas for a partition is 1,5,9, then node 1 is preferred as the leader over nodes 5 and 9 because it appears earlier in the replica list. You can have the Kafka cluster try to restore leadership to the restored replicas by running the command:
# Kafka <= 2.4 (ZooKeeper-based)
> bin/kafka-preferred-replica-election.sh --zookeeper zk_host:port/chroot
# Newer Kafka versions (kafka-preferred-replica-election.sh was deprecated in 2.4 and removed in 3.0)
> bin/kafka-leader-election.sh --bootstrap-server broker_host:port --election-type preferred --all-topic-partitions
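The same election can also be triggered programmatically through the AdminClient's electLeaders API, which was added alongside the new CLI tool in Kafka 2.4. The sketch below is a minimal example, not official tooling; the bootstrap address and the topic/partition ("my-topic", partition 0) are placeholders:

import java.util.Collections;
import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.ElectionType;
import org.apache.kafka.common.TopicPartition;

public class PreferredLeaderElection {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder address; point this at any broker in the cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker_host:9092");

        try (Admin admin = Admin.create(props)) {
            // Ask the controller to move leadership for this partition back
            // to its preferred replica ("my-topic" / 0 is a placeholder).
            Set<TopicPartition> partitions =
                    Collections.singleton(new TopicPartition("my-topic", 0));
            admin.electLeaders(ElectionType.PREFERRED, partitions)
                 .partitions()
                 .get() // block until the election completes
                 .forEach((tp, error) -> System.out.println(
                         tp + " -> " + (error.isPresent() ? error.get() : "preferred leader restored")));
        }
    }
}

Passing null instead of an explicit partition set asks the controller to run the election for every partition.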
Since running this command can be tedious, you can also configure Kafka to do this automatically by setting the following configuration:
auto.leader.rebalance.enable=true
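With this enabled, the controller periodically checks leader placement and triggers a preferred-replica election for a broker once the share of partitions it leads drifts too far from the preferred assignment. The related broker settings, shown here with their default values in server.properties form:

auto.leader.rebalance.enable=true
# How often the controller checks for leader imbalance (seconds)
leader.imbalance.check.interval.seconds=300
# Allowed percentage of non-preferred leaders per broker before a rebalance is triggered
leader.imbalance.per.broker.percentage=10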