I have a very large table of products, and I need to select several products at a very high offset (example below). The PostgreSQL manual's section on indexes and performance suggests creating an index on the column used by ORDER BY plus any conditions. Everything is peachy, no sort is used, but for high offset values the LIMIT is still very costly. Does anyone have an idea what might cause this?
The following query can run for minutes.
Indexes:
"product_slugs_pkey" PRIMARY KEY, btree (id)
"index_for_listing_by_default_active" btree (priority DESC, name, active)
"index_for_listing_by_name_active" btree (name, active)
"index_for_listing_by_price_active" btree (master_price, active)
"product_slugs_product_id" btree (product_id)
EXPLAIN SELECT * FROM "product_slugs" WHERE ("product_slugs"."active" = 1) ORDER BY product_slugs.name ASC LIMIT 10 OFFSET 14859;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
Limit (cost=26571.55..26589.43 rows=10 width=1433)
-> Index Scan using index_for_listing_by_name_active on product_slugs (cost=0.00..290770.61 rows=162601 width=1433)
Index Cond: (active = 1)
(3 rows)
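The plan's startup cost already shows where the time goes: before the Limit node can emit its first row, the index scan must walk past all 14859 skipped rows. As a rough back-of-the-envelope check, assuming the planner interpolates the Limit's startup cost linearly into the underlying scan (which it does here, since the scan's own startup cost is 0.00):

```python
# Why a high OFFSET is costly: the executor must run the index scan through
# every skipped row before emitting any output. The planner's estimate
# reflects this; the numbers below are taken from the EXPLAIN output above.
total_cost = 290770.61   # cost of the full index scan
total_rows = 162601      # estimated rows matching active = 1
offset = 14859

# Startup cost of the Limit node ~= fraction of the scan consumed by OFFSET.
startup_cost = total_cost * offset / total_rows
print(f"{startup_cost:.2f}")  # matches the 26571.55 startup cost in the plan
```

So the cost grows linearly with the offset: page 1 is cheap, page 1486 pays for scanning everything before it.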
The index_for_listing_by_name_active index you have here isn't going to help much, since the active products in the result set aren't necessarily going to be contiguous in the index. Try creating a partial index by name on only those products which are active:
CREATE INDEX index_for_listing_active_by_name
ON product_slugs (name)
WHERE product_slugs.active = 1;
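Even with the partial index, a large OFFSET still forces the scan to step through every skipped row. If users page deep into the listing, keyset ("seek method") pagination avoids that entirely: remember the last (name, id) seen and filter on it instead of using OFFSET. A minimal sketch, using SQLite in place of PostgreSQL and hypothetical sample data:

```python
import sqlite3

# Keyset pagination sketch: instead of OFFSET, filter on the last (name, id)
# pair seen on the previous page. Table contents are hypothetical stand-ins.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE product_slugs (id INTEGER PRIMARY KEY, name TEXT, active INTEGER)"
)
conn.executemany(
    "INSERT INTO product_slugs (id, name, active) VALUES (?, ?, ?)",
    [(i, f"product-{i:05d}", i % 2) for i in range(1, 101)],
)

def next_page(last_name, last_id, page_size=10):
    # (name, id) > (?, ?) is a row-value comparison; the equivalent
    # PostgreSQL query can walk the partial index without skipping rows.
    return conn.execute(
        """SELECT id, name FROM product_slugs
           WHERE active = 1 AND (name, id) > (?, ?)
           ORDER BY name, id LIMIT ?""",
        (last_name, last_id, page_size),
    ).fetchall()

page = next_page("", 0)            # first page
last_id, last_name = page[-1]      # remember where this page ended
page = next_page(last_name, last_id)  # next page starts right after it
print(page[0])
```

Each page costs the same regardless of depth, since the scan always starts at the bookmark rather than at the beginning of the index.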