半兽人 · Posted: 2015-03-10 · Last updated: 2022-10-18 11:48:33

Now that we understand a little about how producers and consumers work, let's discuss the semantic guarantees Kafka provides between producer and consumer. Clearly there are multiple possible message delivery guarantees that could be provided:

  • At most once—Messages may be lost but are never redelivered.
  • At least once—Messages are never lost but may be redelivered.
  • Exactly once—this is what people actually want, each message is delivered once and only once.

It's worth noting that this breaks down into two problems: the durability guarantees for publishing a message and the guarantees when consuming a message.

Many systems claim to provide "exactly once" delivery semantics, but it is important to read the fine print: most of these claims are misleading (i.e. they don't translate to the case where consumers or producers can fail, or cases where there are multiple consumer processes, or cases where data written to disk can be lost).

Kafka's semantics are straight-forward. When publishing a message we have a notion of the message being "committed" to the log. Once a published message is committed it will not be lost as long as one broker that replicates the partition to which this message was written remains "alive". The definition of alive as well as a description of which types of failures we attempt to handle will be described in more detail in the next section. For now let's assume a perfect, lossless broker and try to understand the guarantees to the producer and consumer. If a producer attempts to publish a message and experiences a network error it cannot be sure if this error happened before or after the message was committed. This is similar to the semantics of inserting into a database table with an autogenerated key.

Prior to 0.11.0.0, if a producer failed to receive a response indicating that a message was committed, it had little choice but to resend the message. This provides at-least-once delivery semantics since the message may be written to the log again during resending if the original request had in fact succeeded. Since 0.11.0.0, the Kafka producer also supports an idempotent delivery option which guarantees that resending will not result in duplicate entries in the log. To achieve this, the broker assigns each producer an ID and deduplicates messages using a sequence number that is sent by the producer along with every message. Also beginning with 0.11.0.0, the producer supports the ability to send messages to multiple topic partitions using transaction-like semantics: i.e. either all messages are successfully written or none of them are. The main use case for this is exactly-once processing between Kafka topics (described below).
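
As a rough illustration of the idempotent option, here is a minimal sketch using the Java producer client; the broker address and topic name are placeholders. Setting enable.idempotence=true is what enables the producer-ID / sequence-number deduplication described above.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class IdempotentProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder broker
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringSerializer");
            // The broker deduplicates retried sends using the producer ID and
            // per-partition sequence numbers, so a resend cannot create a duplicate.
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("my-topic", "key", "value"));   // placeholder topic
                producer.flush();
            }
        }
    }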

Not all use cases require such strong guarantees. For uses which are latency sensitive we allow the producer to specify the durability level it desires. If the producer specifies that it wants to wait on the message being committed this can take on the order of 10 ms. However the producer can also specify that it wants to perform the send completely asynchronously or that it wants to wait only until the leader (but not necessarily the followers) have the message.
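
The durability level is chosen through the producer's acks setting. Here is a small sketch of the three levels described above, again using the Java client with a placeholder broker address:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class ProducerDurabilitySketch {
        // Builds producer properties for a chosen durability level:
        //   "all" - wait until the message is committed (leader and in-sync followers)
        //   "1"   - wait only until the leader has written the message
        //   "0"   - completely asynchronous, do not wait for any acknowledgement
        static Properties withAcks(String acks) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ProducerConfig.ACKS_CONFIG, acks);
            return props;
        }

        public static void main(String[] args) {
            Properties waitForCommit = withAcks("all"); // strongest guarantee, highest latency
            Properties leaderOnly    = withAcks("1");
            Properties fireAndForget = withAcks("0");
        }
    }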

Now let's describe the semantics from the point-of-view of the consumer. All replicas have the exact same log with the same offsets. The consumer controls its position in this log. If the consumer never crashed it could just store this position in memory, but if the consumer fails and we want this topic partition to be taken over by another process the new process will need to choose an appropriate position from which to start processing. Let's say the consumer reads some messages -- it has several options for processing the messages and updating its position.

  1. It can read the messages, then save its position in the log, and finally process the messages. In this case there is a possibility that the consumer process crashes after saving its position but before saving the output of its message processing. In this case the process that took over processing would start at the saved position even though a few messages prior to that position had not been processed. This corresponds to "at-most-once" semantics as in the case of a consumer failure messages may not be processed.

  2. It can read the messages, process the messages, and finally save its position. In this case there is a possibility that the consumer process crashes after processing messages but before saving its position. In this case when the new process takes over the first few messages it receives will already have been processed. This corresponds to the "at-least-once" semantics in the case of consumer failure. In many cases messages have a primary key and so the updates are idempotent (receiving the same message twice just overwrites a record with another copy of itself).
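
The difference between these two orderings is simply when the consumer commits its position relative to processing. Here is a minimal sketch with the Java consumer and manual offset commits; the group id and topic name are placeholders, and process() stands in for whatever the application does with a message.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ConsumerOrderingSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");               // placeholder
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");          // commit positions manually
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));         // placeholder topic
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));

                // Option 1 ("at-most-once"): save the position first, then process.
                // A crash after commitSync() but before processing loses those messages.
                consumer.commitSync();
                for (ConsumerRecord<String, String> record : records) {
                    process(record);
                }

                // Option 2 ("at-least-once"): process first, then save the position.
                // A crash after processing but before commitSync() re-delivers those messages.
                // for (ConsumerRecord<String, String> record : records) { process(record); }
                // consumer.commitSync();
            }
        }

        static void process(ConsumerRecord<String, String> record) {
            System.out.printf("processing %s-%d@%d%n", record.topic(), record.partition(), record.offset());
        }
    }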

So what about exactly once semantics (i.e. the thing you actually want)? When consuming from a Kafka topic and producing to another topic (as in a Kafka Streams application), we can leverage the new transactional producer capabilities in 0.11.0.0 that were mentioned above. The consumer's position is stored as a message in a topic, so we can write the offset to Kafka in the same transaction as the output topics receiving the processed data. If the transaction is aborted, the consumer's position will revert to its old value and the produced data on the output topics will not be visible to other consumers, depending on their "isolation level." In the default "read_uncommitted" isolation level, all messages are visible to consumers even if they were part of an aborted transaction, but in "read_committed," the consumer will only return messages from transactions which were committed (and any messages which were not part of a transaction).
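
A minimal consume-transform-produce sketch of this pattern with the Java clients; the topic names, group id and transactional.id are placeholders, and the poll loop and rebalance handling that a real application needs are omitted.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.TopicPartition;

    public class ExactlyOnceCopySketch {
        public static void main(String[] args) {
            Properties c = new Properties();
            c.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");        // placeholder
            c.put(ConsumerConfig.GROUP_ID_CONFIG, "copy-group");                     // placeholder
            c.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
            c.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");          // hide aborted transactions
            c.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
            c.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");

            Properties p = new Properties();
            p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");        // placeholder
            p.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "copy-app-1");             // stable per application instance
            p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
            p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c);
                 KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
                consumer.subscribe(Collections.singletonList("input-topic"));        // placeholder
                producer.initTransactions();

                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                producer.beginTransaction();
                try {
                    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                    for (ConsumerRecord<String, String> record : records) {
                        producer.send(new ProducerRecord<>("output-topic", record.key(), record.value()));
                        offsets.put(new TopicPartition(record.topic(), record.partition()),
                                    new OffsetAndMetadata(record.offset() + 1));
                    }
                    // The consumer's position and the output records commit together.
                    producer.sendOffsetsToTransaction(offsets, "copy-group");
                    producer.commitTransaction();
                } catch (Exception e) {
                    // Position reverts; the output stays invisible to read_committed consumers.
                    producer.abortTransaction();
                }
            }
        }
    }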

When writing to an external system, the limitation is in the need to coordinate the consumer's position with what is actually stored as output. The classic way of achieving this would be to introduce a two-phase commit between the storage of the consumer position and the storage of the consumers output. But this can be handled more simply and generally by letting the consumer store its offset in the same place as its output. This is better because many of the output systems a consumer might want to write to will not support a two-phase commit. As an example of this, consider a Kafka Connect connector which populates data in HDFS along with the offsets of the data it reads so that it is guaranteed that either data and offsets are both updated or neither is. We follow similar patterns for many other data systems which require these stronger semantics and for which the messages do not have a primary key to allow for deduplication.
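
As a sketch of the "store the offset in the same place as the output" idea, suppose the output system is a relational database rather than HDFS; the table names here are hypothetical. The processed record and the consumer's position are written in one database transaction, so either both become visible or neither does; on restart, the application reads the stored offset back and seeks the consumer to it.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import org.apache.kafka.clients.consumer.ConsumerRecord;

    public class ExternalOffsetSketch {
        // Writes the processed output and the consumer's offset in ONE database
        // transaction, so they are updated together or not at all.
        static void writeOutputAndOffset(Connection db, ConsumerRecord<String, String> record)
                throws SQLException {
            db.setAutoCommit(false);
            try (PreparedStatement out = db.prepareStatement(
                     "INSERT INTO processed_events(event_key, event_value) VALUES (?, ?)");               // hypothetical table
                 PreparedStatement off = db.prepareStatement(
                     "UPDATE kafka_offsets SET next_offset = ? WHERE topic = ? AND partition_id = ?")) {  // hypothetical table
                out.setString(1, record.key());
                out.setString(2, record.value());
                out.executeUpdate();

                off.setLong(1, record.offset() + 1);   // position to resume from after a restart
                off.setString(2, record.topic());
                off.setInt(3, record.partition());
                off.executeUpdate();

                db.commit();   // output and position become visible atomically
            } catch (SQLException e) {
                db.rollback();
                throw e;
            }
        }
    }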

So effectively Kafka guarantees at-least-once delivery by default and allows the user to implement at most once delivery by disabling retries on the producer and committing its offset prior to processing a batch of messages. Exactly-once delivery requires co-operation with the destination storage system but Kafka provides the offset which makes implementing this straight-forward.

Updated on 2022-10-18

today · 6 years ago

For committed (承诺), wouldn't translating it as "已提交" be easier to understand? That would also match the usual rendering of SQL's "read committed" as "读已提交".

一宿梵唱 · 6 years ago

A small question: the retention period for a partition's segment files is set in the configuration. If the messages inside one segment file are spread far apart in time, and the first message has already reached its retention limit, does Kafka delete just that one message? Do the offsets of the subsequent messages keep increasing? Are the storage positions of the later messages shifted forward? And would that affect the contents of the following segment files?


Also, if the number of partitions and the number of consumers (in the same consumer group) are equal, then each consumer is connected to exactly one partition. Doesn't that mean the consumer can never consume messages that land on other partitions? What if a message that a particular consumer needs happens to be on a different partition? This part feels a bit strange and I don't quite understand it.

半兽人 -> 一宿梵唱 · 6 years ago

That's a bit of a mouthful. The number of consumers in a group should be <= the number of partitions. For example, with 5 consumers and 10 partitions, each consumer is assigned 2 partitions. With 11 consumers and 10 partitions, one consumer will never be assigned a partition (and therefore never receives any messages).

默默 · 6 years ago

To achieve this, the broker assigns each producer an ID and deduplicates messages using a sequence number that is sent by the producer along with every message.


游牧民族 · 7 years ago

"This feature is not trivial for a replicated system" — i.e., for a replicated system this is not unimportant.

周亮 · 7 years ago

"冥等" should be "幂等" (idempotent).

半兽人 -> 周亮 · 7 years ago