
How to get high performance for a large transaction (PostgreSQL)


I have about 2 million rows to insert into PostgreSQL, but the inserts are running slowly. Can I get better performance by splitting the large transaction into smaller ones (which I would rather not do)? Or is there some other, wiser solution?


Solution

  • No. Doing all inserts in one transaction is exactly what makes it faster. Multiple transactions (or no explicit transaction at all, which commits after every single insert) are much slower.
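    A rough sketch of the single-transaction approach, assuming psycopg2 as the driver; the DSN and the `items(id, name)` table are hypothetical names for illustration:

```python
# One INSERT statement reused for every row (table/columns are hypothetical).
INSERT_SQL = "INSERT INTO items (id, name) VALUES (%s, %s)"

def insert_all(rows, dsn="dbname=mydb"):
    """Insert every row inside a single transaction, committing once at the end."""
    import psycopg2  # imported lazily so the sketch loads without the driver
    conn = psycopg2.connect(dsn)
    try:
        with conn:  # in psycopg2, `with conn:` wraps the block in one transaction
            with conn.cursor() as cur:
                cur.executemany(INSERT_SQL, rows)
    finally:
        conn.close()
```

    The point is that `executemany` runs inside one `with conn:` block, so there is a single commit for all 2 million rows instead of one per row.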

    Also try COPY, which is even faster: http://www.postgresql.org/docs/9.1/static/sql-copy.html
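    A sketch of streaming rows through COPY using psycopg2's `copy_expert`; again the `items(id, name)` table is a made-up example:

```python
import io

def rows_to_copy_buffer(rows):
    """Serialize rows into the tab-separated text format COPY reads by default.

    Note: real data containing tabs, newlines, or backslashes would need the
    COPY text-format escaping on top of this.
    """
    buf = io.StringIO()
    for row in rows:
        buf.write("\t".join(str(v) for v in row) + "\n")
    buf.seek(0)
    return buf

def copy_rows(conn, rows):
    """Bulk-load rows via COPY ... FROM STDIN: one server round trip, one commit."""
    with conn.cursor() as cur:
        cur.copy_expert("COPY items (id, name) FROM STDIN",
                        rows_to_copy_buffer(rows))
    conn.commit()
```

    COPY avoids per-row statement parsing and planning entirely, which is why it usually beats even batched INSERTs.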

    If you really have to use INSERTs, you can also try dropping all indexes on the table and recreating them after the data is loaded.
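    The drop-and-recreate pattern might look like the sketch below; the index name `idx_items_name` and the table are hypothetical, and it assumes the same psycopg2 setup as above:

```python
# Hypothetical DDL for an index that would slow the bulk load down.
DDL_BEFORE = "DROP INDEX IF EXISTS idx_items_name"
DDL_AFTER = "CREATE INDEX idx_items_name ON items (name)"

def load_without_indexes(conn, rows):
    """Drop the index, bulk-insert, then rebuild it, all in one transaction."""
    with conn:  # DDL and inserts commit (or roll back) together
        with conn.cursor() as cur:
            cur.execute(DDL_BEFORE)
            cur.executemany("INSERT INTO items (id, name) VALUES (%s, %s)", rows)
            cur.execute(DDL_AFTER)
```

    Building the index once over the finished table is cheaper than updating it incrementally for each of the 2 million rows.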

    The guide on populating a database is worth reading as well: http://www.postgresql.org/docs/9.1/static/populate.html