I have set up a PostgreSQL test environment that is required to contain the same amount of data (number of rows) as the production database and to be configured mostly production-like, so as to simulate the same performance for normal transactions.
However, being a test environment, it occasionally has to have unique, experimental, temporary, or ad-hoc changes applied: for instance, adding or removing indexes before a performance test, recalculating the values of a column to replicate test conditions, dumping and reimporting whole tables, and so on.
Is there a way to temporarily suspend data integrity guarantees in order to perform such mass updates as fast as possible?
For example, in MySQL you can set over-sized write buffers, disable transaction logging, and suspend disk flushes on transaction commit. Is there something similar in pgsql?
The deployment environment is AWS EC2.
The manual has a chapter dedicated to the initial loading of a database ("Populating a Database": https://www.postgresql.org/docs/current/populate.html).
There are some safe options you can change to make things faster:

wal_level = minimal
synchronous_commit = off
archive_mode = off
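A minimal sketch of applying these from psql, assuming superuser access; note that wal_level = minimal additionally requires max_wal_senders = 0, and all of the settings except synchronous_commit only take effect after a server restart:

```sql
-- Apply the "safe" bulk-load settings (written to postgresql.auto.conf).
ALTER SYSTEM SET wal_level = 'minimal';
ALTER SYSTEM SET max_wal_senders = 0;      -- required by wal_level = minimal
ALTER SYSTEM SET archive_mode = off;
ALTER SYSTEM SET synchronous_commit = off;
-- wal_level, max_wal_senders and archive_mode need a server restart;
-- synchronous_commit could instead be set per session:
-- SET synchronous_commit = off;
```

synchronous_commit = off only risks losing the most recently committed transactions if the server crashes; it cannot corrupt the database, which is why it still counts as safe.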
Then there are some rather unsafe options to make things faster. Unsafe meaning: you can lose all your data if the server crashes, so use them at your own risk!

fsync = off
full_page_writes = off
Again: by changing the two settings above, you risk losing all your data.
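If you do go that route on a throwaway EC2 instance, here is a sketch of toggling the two settings without a restart (both can be changed with a configuration reload):

```sql
-- Unsafe: a crash while these are off can corrupt the whole cluster.
ALTER SYSTEM SET fsync = off;             -- stop forcing WAL writes to disk
ALTER SYSTEM SET full_page_writes = off;  -- skip full-page images after checkpoints
SELECT pg_reload_conf();                  -- apply without a restart

-- Revert as soon as the mass update is done:
ALTER SYSTEM RESET fsync;
ALTER SYSTEM RESET full_page_writes;
SELECT pg_reload_conf();
```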
To disable WAL you could also set all tables to UNLOGGED.
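A sketch with hypothetical names (big_table and amount are placeholders); ALTER TABLE ... SET UNLOGGED / SET LOGGED needs PostgreSQL 9.5 or later, and SET LOGGED rewrites the whole table into the WAL:

```sql
-- big_table and amount are hypothetical, for illustration only.
ALTER TABLE big_table SET UNLOGGED;           -- no WAL for this table;
                                              -- it is truncated after a crash
UPDATE big_table SET amount = amount * 1.1;   -- the mass update
ALTER TABLE big_table SET LOGGED;             -- back to crash-safe (rewrites the table)
```

Bear in mind that unlogged tables are not replicated to standbys and are emptied after a crash, so this only fits a disposable test database like yours.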