Does a SELECT query following an INSERT … ON CONFLICT DO NOTHING statement always find a row, given the default transaction isolation (read committed)?
I want to INSERT-or-SELECT a row in one table, then reference that when inserting rows in a second table. Since RETURNING doesn't work well with ON CONFLICT, I have so far used a simple CTE that should always give me the identity column value even if the row already exists:
$id = query(
  `WITH ins AS (
      INSERT INTO object (scope, name)
      VALUES ($1, $2)
      ON CONFLICT (scope, name) DO NOTHING
      RETURNING id
      )
   SELECT id FROM ins
   UNION ALL
   SELECT id FROM object WHERE scope = $1 AND name = $2
   LIMIT 1;`,
  [$scope, $name]
)
query(
  `INSERT INTO object_member (object_id, key, value)
   SELECT $1, UNNEST($2::text[]), UNNEST($3::int[]);`,
  [$id, $keys, $values]
)
However, I learned that this CTE is not entirely safe under concurrent write load: both the upsert and the select can come up empty when a different transaction inserts the same row concurrently.
In the answers there (and also here) it is suggested to use another query to do the SELECT:
start a new command (in the same transaction), which then can see these conflicting rows from the previous query.
If I understand correctly, that would mean doing:
$id = query(
  `INSERT INTO object (scope, name)
   VALUES ($1, $2)
   ON CONFLICT (scope, name) DO NOTHING
   RETURNING id;`,
  [$scope, $name]
)
if not $id:
  $id = query(
    `SELECT id FROM object WHERE scope = $1 AND name = $2;`,
    [$scope, $name]
  )
query(
  `INSERT INTO object_member (object_id, key, value)
   SELECT $1, UNNEST($2::text[]), UNNEST($3::int[]);`,
  [$id, $keys, $values]
)
or even shortened to
query(
  `INSERT INTO object (scope, name)
   VALUES ($1, $2)
   ON CONFLICT (scope, name) DO NOTHING;`,
  [$scope, $name]
)
query(
  `INSERT INTO object_member (object_id, key, value)
   SELECT (SELECT id FROM object WHERE scope = $1 AND name = $2),
          UNNEST($3::text[]), UNNEST($4::int[]);`,
  [$scope, $name, $keys, $values]
)
I believe this would be enough to prevent that particular race condition (dubbed "concurrency issue 1" in this answer) - but I'm not 100% certain I haven't missed anything.
Also, what about "concurrency issue 2"? If I understand correctly, this is about another transaction deleting or updating the existing row in between the INSERT and SELECT statements - and it's more likely to happen when using multiple queries instead of the CTE approach. How exactly should I deal with that? I assume locking the SELECT with FOR KEY SHARE is necessary in the second code snippet (sketched below) - but would I also need that in the third snippet, where the id is used within the same query? If it helps to simplify the answer, let's assume an object can only be inserted or deleted, but is never updated.
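For concreteness, the locking I have in mind for the SELECT in the second snippet would look something like this (just a sketch of my assumption):
SELECT id FROM object
WHERE  scope = $1
AND    name = $2
FOR    KEY SHARE;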
To make absolutely sure that the single row in the first table is there, and its ID returned, you could create a function as outlined here:
To make sure the row also stays there for the duration of the transaction, just make sure it's locked. If you INSERT the row, it's locked anyway. If you SELECT an existing id, you have to lock it explicitly - just like you suggested. FOR KEY SHARE is strong enough for our purpose as long as there is a (non-partial, non-functional) UNIQUE index on (scope, name), which is safe to assume given your ON CONFLICT clause.
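For reference, this assumes a schema along these lines (a sketch; the column types and identity definition are guesses based on the snippets in the question):
CREATE TABLE object (
   id    int GENERATED ALWAYS AS IDENTITY PRIMARY KEY
 , scope text NOT NULL
 , name  text NOT NULL
 , UNIQUE (scope, name)  -- plain UNIQUE constraint: non-partial, non-functional
);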
CREATE OR REPLACE FUNCTION f_object_id(_scope text, _name text, OUT _object_id int)
  LANGUAGE plpgsql AS
$func$
BEGIN
LOOP
   SELECT id FROM object
   WHERE  scope = $1
   AND    name = $2
   -- lock to prevent deletion in the tiny time frame before the next INSERT
   FOR    KEY SHARE
   INTO   _object_id;

   EXIT WHEN FOUND;

   INSERT INTO object AS o (scope, name)
   VALUES ($1, $2)
   ON     CONFLICT (scope, name) DO NOTHING
   RETURNING o.id
   INTO   _object_id;

   EXIT WHEN FOUND;
END LOOP;
END
$func$;
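The function can then be called like any other function returning a single value, for example (scope and name values invented):
SELECT f_object_id('some_scope', 'some_name');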
You really only need to lock the row if it's conceivable that a concurrent transaction might DELETE it (you don't UPDATE) in the tiny time frame between the SELECT and the next INSERT statement.
Also, if you have a FOREIGN KEY constraint from object_member.object_id to object.id (which seems likely), referential integrity is guaranteed anyway. If you don't add the explicit lock and the row is deleted in between, you get a foreign key violation, and the INSERT to object_member is cancelled, along with the whole transaction. Else, the other transaction with the DELETE has to wait until your transaction is done, and is then cancelled by the same FK constraint since dependent rows now exist (unless it's defined to CASCADE ...). So by locking (or not) you can decide whether to prevent the DELETE or the INSERT in this scenario.
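Such an FK constraint might be declared like this (a sketch; the constraint name is invented, and note the absence of ON DELETE CASCADE, matching the behavior described above):
ALTER TABLE object_member
   ADD CONSTRAINT object_member_object_id_fkey
   FOREIGN KEY (object_id) REFERENCES object (id);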
Then your call boils down to just:
query(
  `WITH o(id) AS (SELECT f_object_id($1, $2))
   INSERT INTO object_member (object_id, key, value)
   SELECT o.id, UNNEST($3::text[]), UNNEST($4::int[])
   FROM   o;`,
  [$scope, $name, $keys, $values]
)
Since you obviously insert multiple rows into object_member, I moved f_object_id($1, $2) to a CTE to avoid repeated execution - which would work, but would be pointlessly expensive.
In Postgres 12 or later I would make that explicit by adding MATERIALIZED (since the INSERT is hidden in a function):
WITH o(id) AS MATERIALIZED (SELECT f_object_id($1, $2)) ...
Aside: for the multiple unnest() in the SELECT list, make sure you are on Postgres 10 or later.
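To illustrate why: before Postgres 10, multiple set-returning functions in the SELECT list produced a surprising number of rows (the least common multiple of their result counts); since Postgres 10 they advance in lockstep, with shorter results padded with NULL:
SELECT unnest('{1,2,3}'::int[])  AS key_demo
     , unnest('{10,20}'::int[]) AS value_demo;
-- Postgres 10+: 3 rows, value_demo is NULL in the last row
-- Postgres 9.x: 6 rows (least common multiple of 3 and 2) - not what we want here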
Will it make any difference (apart from execution time) to do this in the application logic with multiple queries in the same transaction?
Basically no. The only difference is performance. Well, that and shorter code and reliability. It's objectively more error-prone to go back and forth between database and client for each loop iteration. But unless you have extremely competitive transactions, you would hardly ever be looping anyway.
The other consideration is this: the matter is tricky, and most developers do not understand it. Encapsulated in a server-side function, it's less likely to be broken by the next application programmer (or yourself). You have to make sure that it's actually used, too. Either way, properly document the reasons you are doing it one way or another ...
I really wonder whether my second snippet is safe, or why not (given the quote about visibility in the SELECT after the INSERT).
Mostly safe, but not absolutely. While the next separate SELECT will see (now committed) rows of a transaction competing with the previous UPSERT, there is nothing to keep a third transaction from deleting the row again in the meantime. The row has not been locked, you have no way to do that while it's not visible, and there is no generic predicate locking available in Postgres.
Consider this (T1, T2, T3 are concurrent transactions):
T2: BEGIN transaction
T1: BEGIN transaction
T2: INSERT object 666
T1: UPSERT object 666
    unique violation?
    -> wait for T2
T2: COMMIT
T1: unique violation -> NO ACTION
    finish statement
    can't return invisible object 666
T3: DELETE object 666 & COMMIT
T1: SELECT object 666 -> no row!
    BOOM!
Typically, it's extremely unlikely to ever happen.
But it's possible. Hence the loop.
The other option is SERIALIZABLE transaction isolation. Typically more expensive, and you need to prepare for serialization failures. Catch 22.
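A sketch of what that alternative looks like (the retry has to happen in the application; Postgres signals serialization failures with SQLSTATE 40001):
BEGIN ISOLATION LEVEL SERIALIZABLE;

INSERT INTO object (scope, name)
VALUES ($1, $2)
ON CONFLICT (scope, name) DO NOTHING;

INSERT INTO object_member (object_id, key, value)
SELECT (SELECT id FROM object WHERE scope = $1 AND name = $2)
     , UNNEST($3::text[]), UNNEST($4::int[]);

COMMIT;
-- on SQLSTATE 40001 (serialization_failure), roll back and
-- retry the whole transaction from the start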