I have a table called Account with a column name. There's a unique index on that column. I have a transaction:
begin
create an account with name("ABC")
publish an event to Kafka
end
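As a rough sketch, a single request's flow could look like this in Python (standard-library sqlite3 stands in for PostgreSQL here, and publish_event is a hypothetical stand-in for a Kafka producer call, not a real client):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT)")
conn.execute("CREATE UNIQUE INDEX idx_name ON account (name)")

events = []  # stand-in for the Kafka topic

def publish_event(payload):
    # hypothetical stand-in for a Kafka producer send call
    events.append(payload)

def create_account(name):
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("INSERT INTO account (name) VALUES (?)", (name,))
        publish_event({"type": "account_created", "name": name})

create_account("ABC")
```

Note that the publish happens inside the transaction block, before the commit, matching the flow described above.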
I wonder if this situation could happen or not:
Request 1: .....transaction start.......create account name("ABC").......publish Kafka event.......commit transaction
Request 2: ..........transaction start.........create account name("ABC").......publish Kafka event.......commit transaction
Both requests happen at the same time. Because one transaction won't see the other transaction's uncommitted changes, one will fail when committing and one will succeed. Will I receive two Kafka events?
You will not receive a second Kafka event.
Because of the unique index on the name column, the second transaction is forced to wait for the first transaction to complete before it can finish inserting its account with the same name, as described in the PostgreSQL documentation:
If a conflicting row has been inserted by an as-yet-uncommitted transaction, the would-be inserter must wait to see if that transaction commits. If it rolls back then there is no conflict. If it commits without deleting the conflicting row again, there is a uniqueness violation. (In practice we just wait for the other transaction to end and then redo the visibility check in toto.)
After the first transaction is committed, the second transaction proceeds with its insert, which raises a unique_violation exception like:
SQL Error [23505]: ERROR: duplicate key value violates unique constraint "idx_name" Detail: Key (name)=(ABC) already exists.
The situation would therefore play out as follows:
Request 1: .....transaction start.......create account name("ABC").......publish Kafka event.......commit transaction
Request 2: ..........transaction start.......create account name("ABC")-----------------------------Error(unique_violation)
(I've added --- to indicate where the create account name("ABC") command blocks.)
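A minimal way to see this outcome in application code (Python with standard-library sqlite3 as a stand-in for PostgreSQL; the requests run sequentially here rather than truly concurrently, but it shows why the second publish never happens):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT)")
conn.execute("CREATE UNIQUE INDEX idx_name ON account (name)")

published = []  # stand-in for the Kafka topic

def handle_request(name):
    try:
        with conn:  # transaction: commit on success, rollback on error
            conn.execute("INSERT INTO account (name) VALUES (?)", (name,))
            # the publish only runs if the insert succeeded
            published.append(f"account_created:{name}")
        return "ok"
    except sqlite3.IntegrityError:
        # sqlite3's analogue of PostgreSQL's unique_violation (23505);
        # the insert failed, so the publish line above never ran
        return "unique_violation"

print(handle_request("ABC"))  # "ok"
print(handle_request("ABC"))  # "unique_violation"
print(published)              # one event only
```

Because the event is published after the insert within the same code path, the failed insert short-circuits the second publish, so only one event reaches the topic.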