Tags: database, postgresql, indexing, rdbms

Change to Postgres indexing scheme for IS NOT NULL?


I have a Postgres table that has numerous columns which frequently show up in the where clause of select queries. The table has been indexed accordingly, with indexes on all of these columns (mostly single-column indexes, but with some composite indexes thrown in). However, there is one new kind of query that this indexing isn't fully supporting: queries with a deleted_at is null condition in the where clause (we soft-delete records using this column). Some queries with this are running very slowly despite all of the other columns they use being indexed. Naturally, I want to find a way to improve these queries with a change to our indexing.

An example would be:

select count(distinct user_id)
from my_table
where group_id = '123' and deleted_at is null

In this example, both user_id and group_id are indexed. Without the and deleted_at is null condition, the query runs quickly; with it, slowly.

I have four competing solutions. I plan on testing them, but I'd really like to know whether any old hands can look at this situation and offer a simple explanation for why one should be expected to perform better than the others. I'm just getting the hang of thinking about indexing after being spoiled by Snowflake for years, so I'm really looking for how one would reason about this.

My solutions:

  1. An index on the expression deleted_at is null (see the Postgres docs on expression indexes). Basically: CREATE INDEX deleted_at_is_null ON my_table ((deleted_at is null));. This is the simplest solution. It's just one more index, and one with a clear purpose. I'm not sure, though, whether it should actually be expected to help in queries where we have other indexed columns in the where clause! Can Postgres use the indexes separately, or do they need to be composite?
  2. Replace each of the current indexes (like the ones on user_id and group_id above) with composite indexes on that column, plus deleted_at is null.
  3. Same as 2, but instead of replacing the indexes, add the composite indexes in addition to the currently-existing indexes. This feels wrong and redundant, but I am not sure.
  4. Add a new partial index alongside each of the currently-existing indexes, with a where deleted_at is null condition. Like number 3, this feels like too many indexes. (Rough DDL for all four options is sketched below.)
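
For reference, here's roughly the DDL I have in mind for each option, using the group_id index as the example (index names are placeholders):

-- option 1: one extra expression index
create index opt1_deleted_at_is_null on my_table ((deleted_at is null));
-- options 2/3: a composite version of each existing index
create index opt2_group_id_deleted on my_table (group_id, (deleted_at is null));
-- option 4: a partial version of each existing index
create index opt4_group_id_live on my_table (group_id) where deleted_at is null;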

I'm assuming that an index on deleted_at itself is overkill since I never need to query for specific ranges/values of it - only for whether it is null. Please correct me if I am wrong, though!

One other thing to note is that the vast majority of the records have null deleted_at.

Any help would be much appreciated! Just looking for some intuition and best practices around this problem.


Solution

  • You can see these tested on db<>fiddle here.

    1. Adding just a plain index on my_table((deleted_at is null)) would likely speed up only queries that use that exact expression on its own - you are not wrong about that. When another indexed column appears in the where clause, it's cheaper for Postgres to scan that other index, jump to the heap, and check deleted_at there directly, rather than scan this index as well, intersect which parts of the heap the two point at, and only then do the jump.
    2. Adding (deleted_at is null) to your other indexes is simple, cheap and effective. It matters whether you place it in the first or last position, though. If it leads, you might get just an index scan, but a more thorough one. If it trails, you'll get a bitmap index scan with a re-check on the heap, but with high enough selectivity it might finish faster, especially since you need the user_id from the heap anyway. (Both orderings are sketched right after this list.)
    3. Keeping each current index around just so its expanded version exists separately would only make sense if the added expression weighed those one-column indexes down significantly, which it does not. It's just one more junction at the top/bottom of the tree, and other queries that don't care about deleted_at won't mind that it's there.
    4. Adding the partial indexes without removing the regular ones does make sense, because queries that mismatch the predicate disqualify that index, so keeping the regular ones offers something to fall back on.
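
    To illustrate the ordering point in 2., the two variants could look like this (index names are made up; which one wins depends on your data):

      create index idx_expr_leads on my_table ((deleted_at is null), group_id);   -- expression leads
      create index idx_expr_trails on my_table (group_id, (deleted_at is null));  -- expression trails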

    As an addition to your list:

    1. Replace the indexes you have with their partial versions, if your queries always want the (deleted_at is null). If you rarely ever query for deleted_at is not null and/or you can afford a seq scan in those cases, you can stop there. Otherwise, it could make sense to also set up a partial index with the opposite predicate instead of keeping the base index - that way you're covered in either case. That means creating both of these:

      create index idx4_a on my_table (group_id) where (deleted_at is null);
      create index idx4_b on my_table (group_id) where (deleted_at is not null);
      

      A silly quirk of that setup: when you're not specifying whether deleted_at should be null or not, and you don't have a non-partial index to fall back on, you should actually spell out that it could be either, so this:

      (deleted_at is not null or deleted_at is null)
      

      rather than omitting the condition entirely, even though the two branches cancel each other out. The reason is that the planner matches your conditions against the index predicates, so listing both convinces it that it can run a BitmapOr scanning both partial indexes. Omitting the condition causes a seq scan instead, since the base index is missing at that point (an example follows below).
      The two indexes combined are about as small as the base index was, and they cover all 3 cases (null, not null, unspecified), although the last one is a bit inconvenient and slightly slower than with the base index.
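
      For instance, assuming only idx4_a and idx4_b from above exist, the unspecified case would be spelled out like this (a sketch):

      select count(distinct user_id)
      from my_table
      where group_id = '123'
        and (deleted_at is null or deleted_at is not null); -- matches both partial predicates, enabling a BitmapOr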

    2. Don't add (deleted_at is null) as either a key column or a partial index predicate, but rather strap deleted_at on as payload using include.

    The former is a combination missing from those you already established; the latter works somewhat against what the documentation clearly says about non-key column inclusion:

    A non-key column cannot be used in an index scan search qualification.

    And it is not used in the qualification, but it is used in the scan, speeding things up by saving a whole subsequent heap scan. If you add just deleted_at as payload, Postgres still prefers a plain index on group_id followed by a re-check on the heap, because it needs to consult both deleted_at and the user_id it's looking for (that insufficient variant is sketched below).
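
    A minimal sketch of that insufficient variant (name and shape assumed):

    create index on my_table (group_id) include (deleted_at); -- user_id still has to come from the heap
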
    If you add both as payload:

    create index on my_table (group_id) include (user_id, deleted_at);
    

    Everything is in the index. Now Postgres sees that deleted_at is in the index it's already using, so both the output and the filter can re-use it:
    demo at db<>fiddle

    QUERY PLAN
    Aggregate  (cost=4.50..4.51 rows=1 width=8) (actual time=0.071..0.072 rows=1 loops=1)
      Output: count(DISTINCT user_id)
      ->  Index Only Scan using idx2 on public.my_table  (cost=0.42..4.49 rows=1 width=4) (actual time=0.055..0.057 rows=2 loops=1)
            Output: group_id, user_id, deleted_at
            Index Cond: (my_table.group_id = 123)
            Filter: (my_table.deleted_at IS NULL)
            Rows Removed by Filter: 2
            Heap Fetches: 0
    Planning Time: 0.108 ms
    Execution Time: 0.095 ms

    That's on 100k random group_ids and user_ids spread over 300k rows, 20% of them with deleted_at IS NULL.


    1. While an inert include column simplifies index traversal compared to adding it as another key column, include doesn't support expressions, so the index might actually end up larger than a version with the expression in second position: for not-null rows, the whole timestamp gets pulled in rather than a single boolean.
    2. This sort of configuration only works if you set up these indexes to contain everything you're querying, making them covering indexes. That might be a problem with many columns, especially wide ones, or even a few columns if they're wide enough.
    3. You might want to look for more combinations of columns that are queried together or can at least re-use the same index. It's even possible to use partials to speed up statements that reference the opposite of the predicate and none of its indexed columns (a sketch follows this list).
    4. The tests I did are only a simple reference. You should run pgbench on something mirroring your actual cluster config and data characteristics.
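
    As an example of that last trick with partials: relative to the main is-null workload, idx4_b above covers the opposite case, and a statement that references only that condition and none of the indexed columns can still use it, because the index predicate alone implies the condition (a sketch; whether the planner picks it depends on statistics and the visibility map):

    select count(*)
    from my_table
    where deleted_at is not null; -- implied by idx4_b's predicate, so the small partial index can be scanned instead of the whole table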