I am using PostgreSQL for my production database, and I recently resized my production server's disk volume. Since then, I have noticed that a query on a particular table (~10K records) is extremely slow:
EXPLAIN (analyze, buffers, timing) SELECT count(id) FROM markets_sales WHERE volume>0;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=1844791.17..1844791.18 rows=1 width=4) (actual time=79842.776..79842.777 rows=1 loops=1)
   Buffers: shared hit=2329 read=1842313
   ->  Seq Scan on markets_sales  (cost=0.00..1844782.68 rows=3399 width=4) (actual time=8139.929..79842.043 rows=6731 loops=1)
         Filter: (volume > '0'::double precision)
         Rows Removed by Filter: 4523
         Buffers: shared hit=2329 read=1842313
 Planning time: 0.110 ms
 Execution time: 79842.809 ms
But a similar query on another table (~3K records) runs fine:
EXPLAIN ANALYZE SELECT count(id) FROM markets_volume WHERE percent>0;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=1368.87..1368.88 rows=1 width=4) (actual time=1.866..1.866 rows=1 loops=1)
   ->  Seq Scan on markets_volume  (cost=0.00..1365.59 rows=1312 width=4) (actual time=0.023..1.751 rows=1313 loops=1)
         Filter: (percent > '0'::double precision)
         Rows Removed by Filter: 1614
 Planning time: 0.093 ms
 Execution time: 1.903 ms
(6 rows)
The number of buffers (blocks read from disk) is far too high for only 11,254 scanned rows: 1,842,313 blocks of 8 kB each, roughly 14 GB, were read to produce the result.
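You can confirm this by comparing the table's physical size with its live row count; a quick check against the statistics views (the table name is taken from your plan):

SELECT pg_size_pretty(pg_relation_size('markets_sales')) AS table_size,
       n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'markets_sales';

A table with ~11K rows should normally occupy a few megabytes; if table_size is in the gigabytes, the heap is mostly dead space.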
So your table is most probably bloated. This can be rectified with:
VACUUM FULL ANALYZE markets_sales;
Note that the statement requires an exclusive lock on the table while it rewrites it (thus blocking any read or write access).
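A plain VACUUM (without FULL) would only mark the dead space as reusable; it would not shrink the file on disk. If you cannot afford to block the table during the rewrite, the third-party pg_repack extension can rebuild it online, holding exclusive locks only briefly; a sketch, assuming the extension is installed and your database is named mydb (a placeholder):

pg_repack --table=markets_sales mydb

Afterwards, re-run the EXPLAIN (ANALYZE, BUFFERS) query to verify that the number of blocks read has dropped.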