Tags: java, sql, postgresql

Does serial data type assign unique values to each thread trying to insert?


I am getting the following error when inserting data into Postgres in a multi-threaded environment. I am relying on the serial data type to assign the primary key, assuming it is thread-safe, but I lose data whenever this error occurs. How can I ensure that each thread gets a unique id? I am using plain Java (the java.sql API). The error:

`org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "pg_class_relname_nsp_index"
  Detail: Key (relname, relnamespace)=(table_id_seq, 2200) already exists.`

I am using the following query to insert entries into the table:

"insert into "+ MODEL_TABLE +" (b,c,d) values(?,?,?);";
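As an aside: if each thread also needs to know which id it was assigned, Postgres can return the generated key atomically with the insert via a `RETURNING` clause. A minimal sketch, reusing the column names from the question:

`INSERT INTO model_table (b, c, d) VALUES (?, ?, ?) RETURNING id;`

Executed through JDBC with `executeQuery()`, each thread reads back its own freshly assigned id from the result set, with no extra round trip and no race between threads.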

and the following query to create the table:

CREATE_MODEL_TABLE =
            "CREATE TABLE if not exists "+ MODEL_TABLE +"(\n" +
            "    id SERIAL,\n" +
            "    a int,\n" +
            "    b int,\n" +
            "    c int,\n" +
            "    PRIMARY KEY (id,b),\n" +
            "  constraint unique_name_%s unique(b,c)"+
            ") PARTITION BY LIST (b);";
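To answer the question in the title first: yes, `serial` is safe under concurrency. A `serial` column is shorthand for an integer column backed by a sequence, and `nextval()` never hands the same value to two sessions. A sketch of what `id SERIAL` expands to (the table and sequence names below are assumed from the question):

```sql
-- Roughly what "id SERIAL" expands to under the hood:
CREATE SEQUENCE model_table_id_seq;

CREATE TABLE model_table (
    id integer NOT NULL DEFAULT nextval('model_table_id_seq'),
    a  int,
    b  int,
    c  int
);

-- Tie the sequence's lifetime to the column:
ALTER SEQUENCE model_table_id_seq OWNED BY model_table.id;

-- nextval() is atomic: concurrent sessions (and hence concurrent
-- Java threads) can never receive the same value, so serial-based
-- PK assignment is thread-safe by design.
```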

Solution

  • pg_class_relname_nsp_index is the name of a UNIQUE index on the system catalog pg_class, where every table-like object (including sequences) is registered. The combination of object name and schema must be unique; hence the unique index pg_class_relname_nsp_index ON pg_class(relname, relnamespace).

    The error is not raised by inserting into your user table, but during creation of the table: the serial column triggers the creation of an underlying SEQUENCE, and that sequence must itself be registered in pg_class.
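You can see that registration directly. A query along these lines (the sequence name is taken from the error message) shows the catalog entry whose (relname, relnamespace) pair must be unique:

```sql
-- The sequence behind a serial column has its own row in pg_class;
-- relkind = 'S' marks it as a sequence.
SELECT relname, relnamespace, relkind
FROM   pg_class
WHERE  relname = 'table_id_seq';
```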

    The error really makes no sense: for a serial column, Postgres avoids duplicate sequence names and automatically picks the next free name ("table_id_seq1", etc.) instead of running into a unique violation.

    Even if you tried to create a sequence with an occupied name manually, you would get a different error message.
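For illustration, this is the error the manual name collision produces (using the sequence name from the question):

```sql
-- Manually creating a sequence under a name that is already taken
-- fails with a plain "already exists" error, not a unique violation:
CREATE SEQUENCE table_id_seq;
-- ERROR:  relation "table_id_seq" already exists
```

A unique-constraint violation on pg_class itself, as seen in the question, should never surface through either path.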

    Hence I suspect index corruption.

    To fix, run (with the necessary privileges!):

    REINDEX TABLE pg_class;
    

    This should not occur to begin with; I have never seen this error message in my long Postgres life. Failing hardware comes to mind. If you don't have a recent backup, now is the time to make one!