
PostgreSQL optimization: Sequential Scan vs. Index Scan


I have an interesting case with a select on a Postgres table:

advert (~2.5 million records)
    id serial,
    user_id integer (foreign key),
    ...

Here is my select:

select count(*) from advert where user_id in USER_IDS_ARRAY

If the USER_IDS_ARRAY length is <= 100, I get the following explain analyze output:

Aggregate  (cost=18063.36..18063.37 rows=1 width=0) (actual time=0.362..0.362 rows=1 loops=1)
  ->  Index Only Scan using ix__advert__user_id on advert  (cost=0.55..18048.53 rows=5932 width=0) (actual time=0.030..0.351 rows=213 loops=1)
        Index Cond: (user_id = ANY ('{(...)}'))
        Heap Fetches: 213
Planning time: 0.457 ms
Execution time: 0.392 ms

But when the USER_IDS_ARRAY length is > 100:

Aggregate  (cost=424012.09..424012.10 rows=1 width=0) (actual time=867.438..867.438 rows=1 loops=1)
  ->  Seq Scan on advert  (cost=0.00..423997.11 rows=5992 width=0) (actual time=0.375..867.345 rows=213 loops=1)
        Filter: (user_id = ANY ('{(...)}'))
        Rows Removed by Filter: 2201318
Planning time: 0.261 ms
Execution time: 867.462 ms

No matter which user_ids are in USER_IDS_ARRAY, only its length matters.

Does anybody have ideas on how to optimize this select for more than 100 user_ids?


Solution

  • If SET enable_seqscan = OFF still doesn't force an index scan, an index scan is simply not possible for the query. That turned out to be the case here: the index was partial. A partial index only covers rows matching its WHERE predicate, so the planner cannot use it unless it can prove the query is restricted to those rows.
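
A quick way to check both points is sketched below (the index name ix__advert__user_id comes from the plans above; the sample user_ids are illustrative):

-- Try to force an index scan; if the plan still shows Seq Scan,
-- the index cannot serve this query at all.
SET enable_seqscan = OFF;
EXPLAIN ANALYZE
SELECT count(*) FROM advert WHERE user_id = ANY ('{1,2,3}'::int[]);
RESET enable_seqscan;

-- Inspect the index definition; a partial index ends with a WHERE clause.
SELECT indexdef
FROM pg_indexes
WHERE tablename = 'advert' AND indexname = 'ix__advert__user_id';

If the definition does show a WHERE clause, one possible fix (a sketch; the name ix__advert__user_id_full is hypothetical) is to build a full index on user_id and drop the partial one:

-- CONCURRENTLY avoids blocking writes on a ~2.5M row table,
-- but cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY ix__advert__user_id_full ON advert (user_id);
DROP INDEX CONCURRENTLY ix__advert__user_id;

With a full index in place, the planner should be able to use an index (only) scan for any set of user_ids, and the choice between index and sequential scan comes down to ordinary cost estimates rather than index applicability.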