
Slow PostgreSQL UPSERT with "Conflict Arbiter Indexes" on non-indexed column


The following query takes a long time to execute and I believe it is creating a bottleneck. It UPSERTs rows where the updated column has no index but is covered by check constraints:

EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
WITH data (cmp_id, action, delta)
AS (
    VALUES (42, 'action1', 1), (42, 'action2', 0.00081991800)
) 
INSERT INTO counters (cmp_id, action, counter, limitval)
SELECT
    data.cmp_id::numeric,
    data.action,
    GREATEST(0, data.delta::numeric),
    0
FROM
    data
ORDER BY
    cmp_id, action 
ON CONFLICT (cmp_id, action)
WHERE (cou_id IS NULL AND ref_id IS NULL AND pro_id IS NULL AND identifier IS NULL)
DO
UPDATE
SET
    counter = counters.counter + (
        SELECT
            data.delta::float
        FROM
            data
        WHERE
            data.cmp_id::bigint = excluded.cmp_id::bigint
            AND data.action::text = excluded.action::text
        LIMIT 1);

EXPLAIN ANALYZE gives the following output:

Insert on public.limit_counters  (cost=0.09..0.14 rows=2 width=120) (actual time=3566.376..3566.378 rows=0 loops=1)
  Conflict Resolution: UPDATE
  Conflict Arbiter Indexes: counters_unique_cmp_id
  Tuples Inserted: 0
  Conflicting Tuples: 2
  Buffers: shared hit=258
  CTE data
    ->  Values Scan on "*VALUES*"  (cost=0.00..0.03 rows=2 width=68) (actual time=0.001..0.002 rows=2 loops=1)
          Output: "*VALUES*".column1, "*VALUES*".column2, "*VALUES*".column3
  ->  Subquery Scan on "*SELECT*"  (cost=0.06..0.11 rows=2 width=120) (actual time=0.031..0.043 rows=2 loops=1)
        Output: nextval('counters_id_seq'::regclass), "*SELECT*".action, "*SELECT*".cmp_id, NULL::bigint, NULL::bigint, NULL::bigint, "*SELECT*"."greatest", "*SELECT*"."?column?", NULL::text, NULL::integer
        Buffers: shared hit=2
        ->  Sort  (cost=0.06..0.07 rows=2 width=100) (actual time=0.020..0.021 rows=2 loops=1)
              Output: ((data.cmp_id)::numeric), data.action, (GREATEST('0'::numeric, data.delta)), 0
              Sort Key: ((data.cmp_id)::numeric), data.action
              Sort Method: quicksort  Memory: 25kB
              ->  CTE Scan on data  (cost=0.00..0.05 rows=2 width=100) (actual time=0.006..0.007 rows=2 loops=1)
                    Output: (data.cmp_id)::numeric, data.action, GREATEST('0'::numeric, data.delta), 0
  SubPlan 2
    ->  Limit  (cost=0.00..0.06 rows=1 width=8) (actual time=0.006..0.006 rows=1 loops=2)
          Output: ((data_1.delta)::double precision)
          ->  CTE Scan on data data_1  (cost=0.00..0.06 rows=1 width=8) (actual time=0.005..0.005 rows=1 loops=2)
                Output: (data_1.delta)::double precision
                Filter: ((data_1.action = excluded.action) AND ((data_1.cmp_id)::bigint = excluded.cmp_id))
                Rows Removed by Filter: 0
Planning:
  Buffers: shared hit=22
Planning Time: 0.227 ms
Execution Time: 3566.434 ms

The table is very small (~250 rows) and the database is quite powerful.

The use case is multiple updates to the same rows, at no more than ~150 per second. Each update happens inside a transaction, and I try to perform it as close to the end of the transaction as possible, in order to minimize the lock duration.

There are multiple indexes on the table, but none on the updated column. There are two check constraints on the updated counter column, which is of type float8:

    CONSTRAINT counters_overflow CHECK (((limitval = (0)::double precision) OR (counter <= (limitval + (0.01)::double precision)))),
    CONSTRAINT counters_underflow CHECK ((counter >= ('-0.01'::numeric)::double precision))

I imagine that the many concurrent updates locking the same rows are creating a bottleneck (see the diagnostic query below). But I didn't expect the bottleneck to be this narrow, since the updates should in theory be HOT updates and I commit the transaction shortly afterwards.
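To check that hypothesis while the system is under load, a query along these lines (using pg_blocking_pids(), available since PostgreSQL 9.6) shows which sessions are waiting on locks and which PIDs hold them:

-- Sessions currently blocked on a lock, and the PIDs blocking them.
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       state,
       now() - xact_start  AS xact_age,
       left(query, 60)     AS query_head
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;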

One thought I've had: is the check constraint, or the fact that the UPDATE is inside an ON CONFLICT clause, causing the update to be slower? (A quick way to test this is sketched below.)
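One way to rule out the constraint theory is to re-time the statement in a disposable copy of the database with the checks dropped (not in production, since the limits would then go unenforced); the constraint names here are the ones from the table definition above:

-- Test copy only: remove the checks and re-run the EXPLAIN ANALYZE.
ALTER TABLE counters DROP CONSTRAINT counters_overflow;
ALTER TABLE counters DROP CONSTRAINT counters_underflow;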

One possible solution I have in mind is to change the logic to first try an UPDATE and, if no row was updated, fall back to an INSERT; a sketch follows below. Or maybe lower the fillfactor of the table? But I guess that only applies if the updates are not HOT.
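A minimal sketch of that update-first idea for one (cmp_id, action) pair, using the literal values from the query above. Note this is not equivalent to ON CONFLICT under concurrency: two sessions can both miss the UPDATE and then race on the INSERT, so a unique-violation retry is still needed:

WITH upd AS (
    -- Try the in-place update first.
    UPDATE counters
       SET counter = counter + 0.00081991800
     WHERE cmp_id = 42
       AND action = 'action2'
       AND cou_id IS NULL AND ref_id IS NULL
       AND pro_id IS NULL AND identifier IS NULL
    RETURNING 1
)
-- Insert only if the update touched no row.
INSERT INTO counters (cmp_id, action, counter, limitval)
SELECT 42, 'action2', GREATEST(0, 0.00081991800), 0
WHERE NOT EXISTS (SELECT 1 FROM upd);

As for fillfactor: lowering it (e.g. ALTER TABLE counters SET (fillfactor = 70);) only gives HOT updates more room on the same page, and with ~99% of updates already HOT (see below) it is unlikely to help here.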


To check for HOT updates I execute:

SELECT relname, n_tup_ins, n_tup_upd, n_tup_del, n_tup_hot_upd
FROM pg_stat_all_tables
WHERE relname = 'counters';

And get the following output:

relname     n_tup_ins   n_tup_upd   n_tup_del   n_tup_hot_upd
counters    129         39824183    126         39388687
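For reference, the HOT ratio can be computed directly from these counters; 39388687 / 39824183 is roughly 98.9%, so almost all updates are already HOT:

-- Fraction of updates that were HOT.
SELECT n_tup_upd,
       n_tup_hot_upd,
       round(100.0 * n_tup_hot_upd / NULLIF(n_tup_upd, 0), 1) AS hot_pct
FROM pg_stat_all_tables
WHERE relname = 'counters';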

Solution

  • All the time is spent on the actual data modification. The long duration suggests a lock wait. I'd set log_lock_waits = on; then you will get a log message whenever somebody has to wait for a lock for longer than deadlock_timeout (one second by default). See the snippet below.
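One way to enable that without a restart (requires superuser privileges):

-- Log every lock wait that exceeds deadlock_timeout (1s by default).
ALTER SYSTEM SET log_lock_waits = on;
SELECT pg_reload_conf();

The resulting log entries include the PID of the session holding the conflicting lock, which can then be correlated with pg_stat_activity to find the offending transaction.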