With Kafka we push all events relating to an entity (e.g. a Customer) to a single partition. This is useful because it preserves ordering for that entity while still giving us scalability.
The challenge we have is that in our platform, we can merge one entity into another: Customer A can be merged into Customer B. This means that everything associated with A should now be associated with B.
This is awkward from a partition-ordering perspective: A and B are now considered a single entity, yet they could still have events in flight across two partitions, where ordering is not preserved between them.
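To see why the two customers end up on different partitions, here is a minimal sketch of key-based partition assignment. Note this is a simplified stand-in: Kafka's default partitioner actually uses murmur2 on the key bytes, and `NUM_PARTITIONS` is a hypothetical topic size.

```python
import hashlib

NUM_PARTITIONS = 12  # hypothetical topic partition count


def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Deterministically map a record key to a partition.

    Simplified stand-in for Kafka's default partitioner,
    which hashes the key bytes with murmur2 instead of md5.
    """
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions


# Distinct customer IDs generally hash to different partitions,
# so there is no ordering guarantee between their event streams.
p_a = partition_for("customer-A")
p_b = partition_for("customer-B")
```

Because the mapping is purely a function of the key, all of Customer A's events stay ordered on A's partition, but nothing relates that ordering to Customer B's partition once the two are merged.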
We face domain issues if new events for Customer B are consumed before the final events for (the now obsolete) Customer A have been consumed.
How have others faced this challenge, where an entity is effectively 'migrating' across partitions?
According to the Kafka Streams join documentation, you'd need to repartition the topic.
Otherwise, you'd build a separate KV store: materialize Customer A's events into a KTable, then join against a stream of Customer B's events.
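A consumer-side variant of that idea is to keep a "merge map" and rekey late events from the obsolete entity to the surviving one before processing. This is a hypothetical sketch: in Kafka Streams the state would live in a KTable/state store, and plain dicts stand in for it here. Note this handles association, not cross-partition ordering.

```python
# merge map: obsolete entity id -> surviving entity id
merged_into: dict[str, str] = {}


def record_merge(source_id: str, target_id: str) -> None:
    """Remember that source_id was merged into target_id."""
    merged_into[source_id] = target_id


def resolve(entity_id: str) -> str:
    """Follow merge links until the current surviving entity is found,
    so chained merges (A -> B, B -> C) resolve to C."""
    while entity_id in merged_into:
        entity_id = merged_into[entity_id]
    return entity_id


def rekey(event: dict) -> dict:
    """Rewrite an event's key so straggler events for Customer A
    are processed under Customer B after the merge."""
    return {**event, "key": resolve(event["key"])}


record_merge("customer-A", "customer-B")
late_event = rekey({"key": "customer-A", "type": "order-placed"})
# late_event["key"] is now "customer-B"
```

This keeps everything associated with B going forward, but by itself it cannot guarantee that A's in-flight events are consumed before B's new ones; that still requires either repartitioning or some form of buffering/fencing around the merge.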