I'm attempting to update a table in a PostgreSQL database that has ~2.3 million rows. We also have a Hasura event trigger on this table that is supposed to invoke a microservice to perform further calculations whenever a row is inserted, updated, or deleted.
As expected, the first time I updated the table, this created over 2 million pending events. At a rate of a few thousand events cleared per hour, waiting for all of them to be processed isn't an option (it would take weeks).
I'm looking to update the data in the table without the event trigger creating any pending events. Things I've tried:
Deleting the event trigger, updating the table, and then re-creating the event trigger. There were no pending events at first, but all of them reappeared as soon as the trigger was recreated.
Manipulating the table that stores the event logs to manually delete all pending events created in the last 2 days (following the Hasura docs here):
DELETE FROM hdb_catalog.event_invocation_logs
WHERE event_id IN (
SELECT id FROM hdb_catalog.event_log
WHERE trigger_name = 'my_trigger_name'
AND delivered = false
AND created_at > now() - interval '2 days');
The above would only delete a few tens of events each time before finishing, for some reason. I also tried clearing the log tables entirely:
DELETE FROM hdb_catalog.event_invocation_logs;
DELETE FROM hdb_catalog.event_log;
Any help is appreciated, thanks.
How are you actually executing the update statement that touches 2.3 million rows? Are you just running it as SQL directly?
If so, you can wrap your statements like so:
SET session_replication_role = replica;   -- ordinary triggers will not fire in this session
UPDATE your_table SET thing = 'whatever';
SET session_replication_role = DEFAULT;   -- restore normal trigger behaviour
Triggers do not fire while the session is in replica mode, so the Hasura event trigger will not queue any events for this update.
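If you'd rather not risk leaving the session in replica mode after a failure, a variant is to use SET LOCAL inside a transaction, which reverts the setting automatically when the transaction ends. A minimal sketch, with your_table and thing as placeholder names:
BEGIN;
-- SET LOCAL only lasts until the end of the current transaction,
-- so session_replication_role reverts automatically on COMMIT or ROLLBACK.
SET LOCAL session_replication_role = replica;
UPDATE your_table SET thing = 'whatever';
COMMIT;
Note that changing session_replication_role typically requires elevated privileges, and this only prevents new events from being queued; the backlog already sitting in hdb_catalog.event_log still needs to be cleared separately.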