The way we handled this with Postgres 8.4 was to manually update the pg_attribute
table:
-- For varchar(n), atttypmod stores n + 4 (the declared length plus the
-- 4-byte varlena header), so 104 here means varchar(100).
LOCK TABLE pg_attribute IN EXCLUSIVE MODE;
UPDATE pg_attribute SET atttypmod = 104
WHERE attrelid = 'table_name'::regclass
  AND attname = 'column_name';
column_name was a varchar(50) and we wanted a varchar(100), but the table was too enormous (tens of millions of rows) and too heavily used to rewrite.
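As a sanity check after that hack: since atttypmod for a varchar column is the declared length plus the 4-byte header, we expected to read back 104:

SELECT atttypmod FROM pg_attribute
WHERE attrelid = 'table_name'::regclass
  AND attname = 'column_name';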
The content and answers I found on this topic were sparse and outdated for what is (at least anecdotally) a common problem.
But after seeing hints that this might be the case in at least three discussions, I've come to think that with newer versions of Postgres (we're on 9.6) you can now run the following:
ALTER TABLE table_name ALTER COLUMN column_name TYPE varchar(100);
...without rewriting the table.
Is this correct?
If so, do you know where definitive information on this exists in the Postgres docs?
That ALTER TABLE will not require a rewrite.
The documentation says:
Adding a column with a DEFAULT clause or changing the type of an existing column will require the entire table and its indexes to be rewritten.
It is very simple to test: try with an empty table and see if the relfilenode column in the pg_class row for the table changes:
SELECT relfilenode FROM pg_class
WHERE relname = 'table_name';
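A minimal sketch of that test with a throwaway table (the names here are placeholders):

CREATE TABLE rewrite_test (col varchar(50));

SELECT relfilenode FROM pg_class
WHERE relname = 'rewrite_test';   -- note the value

ALTER TABLE rewrite_test ALTER COLUMN col TYPE varchar(100);

SELECT relfilenode FROM pg_class
WHERE relname = 'rewrite_test';   -- unchanged, so no table rewrite happened

For contrast, shrinking the limit (say, back to varchar(20)) is not a no-op and should produce a new relfilenode.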
Reading on in the documentation, you see:
As an exception, when changing the type of an existing column, if the USING clause does not change the column contents and the old type is either binary coercible to the new type or an unconstrained domain over the new type, a table rewrite is not needed; but any indexes on the affected columns must still be rebuilt.
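As an aside, binary-coercible casts between distinct types are recorded in pg_cast with castmethod = 'b' (varchar to text, for instance):

SELECT castsource::regtype, casttarget::regtype, castmethod
FROM pg_cast
WHERE castsource = 'varchar'::regtype
  AND casttarget = 'text'::regtype;

A length-limit increase such as varchar(50) to varchar(100) does not appear there, since it is a change of type modifier within the same type rather than a cast between types, but it is treated as a no-op in the same spirit.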
Since varchar(50) is clearly binary coercible to varchar(100), your case will not require a table rewrite, as the above test should confirm.