My goal is to import large amounts of data into a PostgreSQL database efficiently. In principle, the raw data could be compressed by a factor of ~20 (e.g. using gzip).
The COPY statement seems to be the best option for a bulk import.
Apart from sslcompression (which compresses the whole stream at the TLS layer rather than the content itself), is there a way to compress the actual data transferred between client and server? Or is such compression even built in by default?
Many thanks.
(It should not matter, but I am using Go.)
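For concreteness, here is roughly how I do the bulk load today, sketched with the jackc/pgx driver (the connection string, table, and column names are placeholders):

```go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, "postgres://user:pass@dbhost:5432/mydb")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// In reality these rows come from a large local data set.
	rows := [][]any{
		{int64(1), "first"},
		{int64(2), "second"},
	}

	// CopyFrom issues COPY mytable (id, payload) FROM STDIN under the hood,
	// so the row data crosses the wire uncompressed.
	n, err := conn.CopyFrom(ctx,
		pgx.Identifier{"mytable"},
		[]string{"id", "payload"},
		pgx.CopyFromRows(rows),
	)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("copied %d rows", n)
}
```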
If your bottleneck is network throughput, you will want to send (copy or stream) the compressed data to the database machine using something like scp or ssh, then run COPY in a client on the same machine as the database server. There are a number of ways to orchestrate this, but all of them have something other than libpq/PostgreSQL as the conductor of the orchestra.
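For example, the uncompressed bytes never need to touch the network: ship the .gz file to the server with scp, then run a small client there that decompresses on the fly and streams into COPY ... FROM STDIN. A minimal Go sketch using pgconn, assuming a local Unix-socket connection and placeholder database, table, and file names:

```go
package main

import (
	"compress/gzip"
	"context"
	"log"
	"os"

	"github.com/jackc/pgx/v5/pgconn"
)

func main() {
	ctx := context.Background()

	// Connect over the local socket, so no row data crosses the network.
	conn, err := pgconn.Connect(ctx, "postgres:///mydb?host=/var/run/postgresql")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// The gzipped dump previously shipped to this machine (e.g. via scp).
	f, err := os.Open("data.csv.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	gz, err := gzip.NewReader(f)
	if err != nil {
		log.Fatal(err)
	}
	defer gz.Close()

	// Decompress on the fly and feed the plain CSV straight into COPY.
	tag, err := conn.CopyFrom(ctx, gz, "COPY mytable FROM STDIN WITH (FORMAT csv)")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("copied %d rows", tag.RowsAffected())
}
```

The same effect can be had with plain shell tooling (e.g. piping gunzip output into psql's \copy on the server), but the point is the same either way: compression happens on the transfer leg, and COPY itself runs locally.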