I have a KStream application that consumes from a topic and inserts into a table. During testing we noticed that when we publish duplicate data to the topic, the consumer application fails to insert into the table due to a unique constraint violation. After a few minutes the KStream consumers are continuously rebalancing. Why is the rebalance triggered when the DB error rate is high?
It is normal that you see rebalancing, because the Kafka Streams task fails when the exception is thrown.
Kafka Streams will then continuously try to start a new task to process your duplicate message, which throws the exception again each time, and so on.
A try..catch around the insert that detects the duplicate, or a Kafka Streams exception handler, might help.
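A minimal sketch of the try..catch approach, assuming the DB insert happens in a `foreach()` terminal operation. The topic name `input-topic` and the `insertIntoTable()` helper are hypothetical placeholders for your application's own names:

    import java.sql.SQLIntegrityConstraintViolationException;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;

    public class DuplicateSafeSink {

        public static void buildTopology(StreamsBuilder builder) {
            // Hypothetical topic name
            KStream<String, String> stream = builder.stream("input-topic");

            stream.foreach((key, value) -> {
                try {
                    insertIntoTable(key, value); // hypothetical JDBC insert
                } catch (SQLIntegrityConstraintViolationException e) {
                    // Duplicate row: log and skip instead of letting the exception
                    // kill the stream thread, which is what triggers the
                    // rebalance loop described above.
                    System.err.println("Skipping duplicate record for key " + key);
                }
            });
        }

        // Hypothetical placeholder for the application's own JDBC insert.
        private static void insertIntoTable(String key, String value)
                throws SQLIntegrityConstraintViolationException {
            // INSERT INTO my_table ... (omitted)
        }
    }

You can also register a `StreamsUncaughtExceptionHandler` on the `KafkaStreams` instance, but for a deterministic error like a duplicate key it is usually better to handle it at the insert itself, since replacing the thread would just reprocess the same record and fail again.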