During ANALYZE of a foreign table I see a lot of postgres_fdw fetches on the foreign server. For big tables it fetches the whole table, even though only 300 rows end up in the sample:
INFO: "test_table": table contains 59280698 rows, 300 rows in sample
I thought it would only send the data used in the statistics sample, but it looks like the whole table is being sent to the requestor. I have tried to lower the statistics target for each column (overriding default_statistics_target):
ALTER TABLE test_table
ALTER COLUMN id SET STATISTICS 1;
But it does not seem to make any difference to the number of fetches.
Is there any option to reduce network I/O during ANALYZE of foreign PostgreSQL tables?
PS. I do not want to turn on use_remote_estimate. In 9.6.9 it corrupts shared memory, and in 9.6.17 it causes errors on some queries.
ANALYZE on a foreign PostgreSQL table will run
SELECT * FROM atable;
on the remote table. The sample is created by applying Vitter's reservoir sampling algorithm to the result.
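Because the sampling happens on the local side, every row of the remote table has to cross the network regardless of the statistics target. What the remote server executes during the local ANALYZE looks roughly like the sketch below (the cursor name is illustrative; postgres_fdw streams the result in batches of fetch_size rows, which defaults to 100, hence the many fetches you see):

BEGIN;
DECLARE c1 CURSOR FOR SELECT * FROM test_table;
FETCH 100 FROM c1;   -- repeated until all 59280698 rows have been sent
CLOSE c1;
COMMIT;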
It might be an interesting enhancement to use TABLESAMPLE to retrieve only part of the table, but that is not implemented yet.
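Just to illustrate what such an enhancement would mean: instead of streaming the whole table, ANALYZE could ask the remote server for a small random fraction, along the lines of the query below (the sampling percentage is made up):

SELECT * FROM test_table TABLESAMPLE SYSTEM (0.001);  -- roughly 0.001 % of the table's blocks

SYSTEM sampling reads only the selected blocks, so remote I/O and network traffic would shrink roughly in proportion to the sampling fraction.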