Tags: java, apache-kafka, kafka-consumer-api, spring-kafka

How is the Kafka offset reset managed for parallel consumption?


I would like to better understand the Kafka message retry process. I have heard that failed processing of consumed messages can be addressed using two options:

  1. SeekToCurrentErrorHandler (offset reset)
  2. publishing the message to a Dead Letter Queue (DLQ)

The 2nd option is pretty clear: if a message fails to be processed, it is simply pushed to an error queue. I am more curious about the first option.

AFAIK, the 1st option is the most widely used one, but how does it work when multiple consumers consume messages from the same topic concurrently? If a particular message fails, is the offset for that consumer reset to the failed message's offset? And what happens to the messages that were processed successfully at the same time as, or after, the failed one? Will they be re-processed?

How would you advise me to deal with message retries?


Solution

  • Each partition can only be consumed by one consumer within a consumer group.

    When you have multiple consumers, you must have at least that number of partitions.

    The offset is maintained for each partition; the error handler will (can) only perform seeks on the partitions that are assigned to this consumer (see the configuration sketch below).
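
As a rough sketch of how this fits together in spring-kafka (assuming a 2.x version where SeekToCurrentErrorHandler is still available; newer versions supersede it with DefaultErrorHandler), a listener container factory can combine concurrency with an error handler that re-seeks the failed record a limited number of times and then hands it to a dead-letter recoverer. The bean name, concurrency, and back-off values here are illustrative, not prescriptive:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

// Place this in a @Configuration class.
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory,
        KafkaTemplate<String, String> kafkaTemplate) {

    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);

    // Each of these concurrent consumers is assigned its own subset of the topic's
    // partitions, so a seek after a failure only affects that consumer's partitions.
    factory.setConcurrency(3);

    // Retry the failed record up to 3 more times, 1 second apart; after that,
    // publish it to the dead-letter topic instead of retrying forever.
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(kafkaTemplate);
    factory.setErrorHandler(new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(1000L, 3)));

    return factory;
}
```

With concurrency of 3 on, say, a 3-partition topic, each consumer thread owns one partition. After a failure, the handler re-seeks only the partitions assigned to that consumer, so records already processed successfully (on other partitions, or earlier on the same partition) are not re-processed; only the failed record and the not-yet-processed records behind it on that partition are redelivered.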