I am trying to remove duplicated data from some of our databases based on unique IDs. All deleted data should be stored in a separate table for auditing purposes. Since it concerns quite a few databases with different schemas and tables, I wanted to start using variables to reduce the chance of errors and the amount of work it takes me.
This is the best example query I could think of, but it doesn't work:
do $$
declare @source_schema varchar := 'my_source_schema';
declare @source_table varchar := 'my_source_table';
declare @target_table varchar := 'my_target_schema' || source_table || '_duplicates'; --target schema and appendix are always the same, source_table is a variable input.
declare @unique_keys varchar := ('1', '2', '3')
begin
select into @target_table
from @source_schema.@source_table
where id in (@unique_keys);
delete from @source_schema.@source_table where export_id in (@unique_keys);
end ;
$$;
The query syntax works with hard-coded values.
Most of the time my variables are perceived as columns or not recognized at all. :(
As has been commented, you need some kind of dynamic SQL: a FUNCTION, a PROCEDURE, or a DO statement to do it on the server.
You should be comfortable with PL/pgSQL. Dynamic SQL is no beginners' toy.
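For orientation, the bare mechanism is to build the statement as a text string with format() and run it with EXECUTE. A minimal sketch in a DO block, with the names from your example hard-coded as placeholders:

DO
$$
BEGIN
   -- %I inserts the names as properly quoted identifiers
   EXECUTE format('DELETE FROM %I.%I WHERE id = ANY($1)'
                , 'my_source_schema', 'my_source_table')
   USING ARRAY[1, 2, 3];  -- values are passed separately with USING
END
$$;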
Below is an example with a PROCEDURE, like Edouard already suggested. You'll need a FUNCTION instead to wrap it in an outer transaction (like you very well might).
CREATE OR REPLACE PROCEDURE pg_temp.f_archive_dupes(_source_schema text, _source_table text, _unique_keys int[], OUT _row_count int)
  LANGUAGE plpgsql AS
$proc$
-- target schema and appendix are always the same, source_table is a variable input
DECLARE
   _target_schema CONSTANT text := 's2';  -- hardcoded
   _target_table  text := _source_table || '_duplicates';
   _sql           text := format(
      'WITH del AS (
          DELETE FROM %I.%I
          WHERE  id = ANY($1)
          RETURNING *
          )
       INSERT INTO %I.%I TABLE del'
    , _source_schema, _source_table
    , _target_schema, _target_table);
BEGIN
   RAISE NOTICE '%', _sql;           -- debug
   EXECUTE _sql USING _unique_keys;  -- execute
   GET DIAGNOSTICS _row_count = ROW_COUNT;
END
$proc$;
Call:
CALL pg_temp.f_archive_dupes('s1', 't1', '{1, 3}', 0);
db<>fiddle here
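If you do need the FUNCTION variant mentioned above (so you can call it from inside a bigger transaction), a minimal sketch might look like this; the name f_archive_dupes_fn is made up, and the assumptions are the same as for the procedure:

CREATE OR REPLACE FUNCTION pg_temp.f_archive_dupes_fn(_source_schema text, _source_table text, _unique_keys int[])
  RETURNS int
  LANGUAGE plpgsql AS
$func$
DECLARE
   _target_schema CONSTANT text := 's2';  -- hardcoded, as above
   _target_table  text := _source_table || '_duplicates';
   _row_count     int;
BEGIN
   EXECUTE format(
      'WITH del AS (
          DELETE FROM %I.%I
          WHERE  id = ANY($1)
          RETURNING *
          )
       INSERT INTO %I.%I TABLE del'
    , _source_schema, _source_table
    , _target_schema, _target_table)
   USING _unique_keys;

   GET DIAGNOSTICS _row_count = ROW_COUNT;
   RETURN _row_count;
END
$func$;

Call it like any other function, for example: SELECT pg_temp.f_archive_dupes_fn('s1', 't1', '{1,3}');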
I made the procedure temporary, since I assume you don't need to keep it permanently. Create it once per database.
Passed schema and table names are case-sensitive strings! (Unlike unquoted identifiers in plain SQL.) Either way, be wary of SQL injection when concatenating SQL dynamically.
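That is what %I (and %L for literal values) in format() is for: identifiers get quoted, so an odd or malicious name cannot break out of the statement. A quick illustration with made-up names:

SELECT format('DELETE FROM %I.%I WHERE id = ANY(%L)'
            , 'my schema', 'users"; DROP TABLE x; --', '{1,2,3}');
-- DELETE FROM "my schema"."users""; DROP TABLE x; --" WHERE id = ANY('{1,2,3}')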
I made _unique_keys type int[] (array of integer) since your sample values look like integers. Use the actual data type of your id columns!
The variable _sql holds the query string, so it can easily be debugged before actually executing it. Use RAISE NOTICE '%', _sql; for that purpose.
I suggest commenting out the EXECUTE line until you are sure.
I made the PROCEDURE return the number of processed rows. You didn't ask for that, but it's typically convenient, at hardly any cost.
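To actually read that count, call the procedure from PL/pgSQL and pass a variable for the OUT parameter (OUT parameters in procedures require Postgres 14 or later; use INOUT on older versions). A sketch:

DO
$$
DECLARE
   _ct int;
BEGIN
   CALL pg_temp.f_archive_dupes('s1', 't1', '{1,3}', _ct);  -- _ct receives the row count
   RAISE NOTICE 'archived % rows', _ct;
END
$$;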
Last, but not least, use DELETE ... RETURNING * in a data-modifying CTE. Since that has to find the rows only once, it comes at about half the cost of a separate SELECT and DELETE. And it's perfectly safe: if anything goes wrong, the whole transaction is rolled back anyway. Two separate commands can also run into concurrency issues or race conditions, which are ruled out this way, as DELETE implicitly locks the rows to delete.
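For reference, with the names from the call above hard-coded and the parameter inlined for readability, the statement the procedure builds (and prints with RAISE NOTICE) boils down to this pattern:

WITH del AS (
   DELETE FROM s1.t1
   WHERE  id = ANY('{1,3}'::int[])
   RETURNING *
   )
INSERT INTO s2.t1_duplicates
TABLE  del;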
Or you can build the statements in a client program like psql, and use \gexec.
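A sketch of that route, assuming all source tables live in schema s1, the matching *_duplicates tables already exist in s2, and the ids are hard-coded for brevity:

SELECT format(
        'WITH del AS (DELETE FROM %I.%I WHERE id = ANY(''{1,3}'') RETURNING *)
         INSERT INTO %I.%I TABLE del'
      , table_schema, table_name
      , 's2', table_name || '_duplicates')
FROM   information_schema.tables
WHERE  table_schema = 's1'
AND    table_type = 'BASE TABLE'
\gexec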