
Postgres table with thousands of partitions


I have a Postgres table (on Postgres 12) which is expected to have thousands of partitions (at least 200k) in the near future.

Here is how I am creating the parent table:

create table if not exists content (
    key varchar(20) not null,
    value json not null default '[]'::json
) PARTITION BY LIST (key);

Then I add individual child tables like this:

create table if not exists content_123 PARTITION OF content FOR VALUES IN ('123');

Also, I am adding an index on each child table for quick access (since I will be accessing the child tables directly):

create index if not exists content_123_idx on content_123 using btree (key);

Here is my question: I have never managed this many partitions in a Postgres table before, so I am wondering whether there is any downside to this approach. Also, as mentioned above, I will not be querying the parent table directly; I will read directly from the individual child tables.
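For illustration, a direct read from a child table would look roughly like this (the key value is just an example):

select value from content_123 where key = '123';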


Solution

  • With these table definitions, the index is completely useless: each list partition holds exactly one key value, so a btree index on key has nothing to discriminate.

    With 200000 partitions, query planning will become intolerably slow, and each SQL statement will need very many locks and open files. This won't work well.

    Lump several keys together into a single partition; then the index might make sense (see the sketch below).
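
    A minimal sketch of that approach, keeping list partitioning; the partition name and key values below are purely illustrative:

    create table if not exists content_batch_1 PARTITION OF content
        FOR VALUES IN ('123', '124', '125');

    -- with many keys per partition, a btree index on key is selective again
    create index if not exists content_batch_1_key_idx
        on content_batch_1 using btree (key);

    Alternatively, PARTITION BY HASH (key) with a fixed modulus groups the keys into a bounded number of partitions without having to assign them by hand.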