I want to store entities' information in an Apache Kafka compacted topic, so values with the same key may be updated over time.
Suppose a producer sends a message with a key the consumer has already processed (as far as I understand, the message will have the same offset as the earlier one with that key). Is there any approach to reset the consumer offset periodically? I use Spring Kafka. I understand that I could just re-run the instance with a new group-id to read the topic from the beginning. But I want to know how I can retrieve new values for the same keys when the producer sends them to the compacted topic.
In compacted topics, each new message gets a new offset, even if it has the same key as an earlier message on the topic. After compaction, Kafka keeps only the latest message for each key, without changing its offset. So your consumer will receive updates for existing keys as ordinary new records; no offset reset is needed.
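To make the offset semantics concrete, here is a minimal stdlib-only sketch that models a compacted log (the `Record` type and `compact` method are illustrative, not Kafka APIs): compaction drops older records per key but leaves the surviving offsets untouched.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CompactionDemo {
    // A simplified log record: (offset, key, value).
    record Record(long offset, String key, String value) {}

    // Compaction keeps only the latest record per key; offsets never change.
    static List<Record> compact(List<Record> log) {
        Map<String, Record> latest = new LinkedHashMap<>();
        for (Record r : log) {
            latest.remove(r.key()); // re-insert so iteration order follows offsets
            latest.put(r.key(), r);
        }
        return new ArrayList<>(latest.values());
    }

    public static void main(String[] args) {
        List<Record> log = List.of(
            new Record(0, "user-1", "v1"),
            new Record(1, "user-2", "v1"),
            new Record(2, "user-1", "v2")   // same key as offset 0, new offset
        );
        for (Record r : compact(log)) {
            System.out.println(r.offset() + " " + r.key() + "=" + r.value());
        }
        // Prints:
        // 1 user-2=v1
        // 2 user-1=v2
        // A consumer simply keeps reading forward and sees offset 2 as a
        // normal new record; no offset reset is required.
    }
}
```

The key point: the update for `user-1` arrives at offset 2, ahead of the consumer's committed position, so a running consumer picks it up naturally.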
Don't think of compaction as an instantaneous action that runs each time you produce a new message. It's a background process that runs only when the topic partition satisfies certain conditions, such as exceeding the dirty ratio or having records in inactive (rolled) segment files.
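If you want to make compaction kick in sooner for experimentation, you can tune the relevant topic-level configs. A sketch (the broker address and topic name are placeholders for your setup; the values shown are illustrative, not recommendations):

```shell
# Lower the dirty ratio and roll segments faster so the log cleaner
# becomes eligible to run sooner on this compacted topic.
# - min.cleanable.dirty.ratio: fraction of uncompacted bytes required
#   before the cleaner considers the partition
# - segment.ms: how often the active segment is rolled; only inactive
#   segments are ever compacted
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-compacted-topic \
  --alter --add-config min.cleanable.dirty.ratio=0.1,segment.ms=600000
```

Note that even with aggressive settings, the record in the currently active segment is never compacted away.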
I invite you to read this page for more detail.