Tags: postgresql, pyspark, aws-glue, uuid

AWS Glue failed to insert UUID into Postgres DB


I created a table in an Aurora PostgreSQL database with a single uuid column, id, and wrote an AWS Glue Studio job with the following code:

import uuid
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glueContext = GlueContext(SparkContext.getOrCreate())

schema = ['id']
rdd = [[str(uuid.uuid4())]]
dyf = glueContext.create_dynamic_frame_from_rdd(rdd, 'dyf', schema=schema)
glueContext.write_from_options(frame_or_dfc=dyf, connection_type='postgresql', connection_options={...})

An error was reported:

2023-01-05 20:27:35,757 INFO [task-result-getter-0] scheduler.TaskSetManager (Logging.scala:logInfo(57)): Lost task 35.1 in stage 0.0 (TID 36) on 10.248.10.50, executor 1: java.sql.BatchUpdateException (Batch entry 0 INSERT INTO "data"."t" ("id") VALUES ('6f2ac9cd-c6a9-4798-bc9b-59c8a3d37ca1') was aborted: ERROR: column "id" is of type uuid but expression is of type character varying Hint: You will need to rewrite or cast the expression.

I also tried rdd = [[uuid.uuid4()]], passing the UUID object directly, but it seems Spark can't serialize the uuid type:

RecursionError: maximum recursion depth exceeded while calling a Python object

The weird thing is that I can run the generated statement successfully from a SQL client:

INSERT INTO "data"."t" ("id") VALUES ('6f2ac9cd-c6a9-4798-bc9b-59c8a3d37ca1')

Is this a bug in AWS Glue?
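
For context, the same failure should be reproducible outside Glue with Spark's plain JDBC writer, which suggests the limitation lives in the PostgreSQL JDBC driver rather than in Glue itself. A minimal sketch, assuming placeholder host, database, and credentials:

import uuid
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(str(uuid.uuid4()),)], ['id'])

(df.write
    .format('jdbc')
    .option('url', 'jdbc:postgresql://HOST:5432/DB')  # placeholder endpoint
    .option('dbtable', 'data.t')
    .option('user', 'USER')          # placeholder
    .option('password', 'PASSWORD')  # placeholder
    .mode('append')
    .save())  # aborts with the same "column is of type uuid" error

A SQL client succeeds because it sends the value as an untyped literal that the server coerces to uuid, while the JDBC driver binds the string parameter with an explicit varchar type.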


Solution

  • Problem solved by adding one item to connection_options. Setting stringtype to unspecified makes the PostgreSQL JDBC driver send string parameters without a declared type, so the server can infer uuid from the target column:

    connection_options = {
        ...
        'stringtype': 'unspecified',
    }
    

    But it still doesn't support inserting NULL values into uuid columns. A workaround is to insert the nil UUID, 00000000-0000-0000-0000-000000000000, instead; see the sketch below.
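
    Putting it together, a minimal sketch of the full job with the fix and the NULL workaround applied. The JDBC URL, credentials, and the id_row helper are illustrative assumptions, not from the original post:

    import uuid
    from awsglue.context import GlueContext
    from pyspark.context import SparkContext

    glueContext = GlueContext(SparkContext.getOrCreate())

    NIL_UUID = '00000000-0000-0000-0000-000000000000'  # stand-in for NULL

    def id_row(value):
        # Hypothetical helper: substitute the nil UUID, since NULL still fails.
        return [str(value) if value is not None else NIL_UUID]

    rdd = [id_row(uuid.uuid4()), id_row(None)]
    dyf = glueContext.create_dynamic_frame_from_rdd(rdd, 'dyf', schema=['id'])

    glueContext.write_from_options(
        frame_or_dfc=dyf,
        connection_type='postgresql',
        connection_options={
            'url': 'jdbc:postgresql://HOST:5432/DB',  # placeholder endpoint
            'dbtable': 'data.t',
            'user': 'USER',               # placeholder
            'password': 'PASSWORD',       # placeholder
            'stringtype': 'unspecified',  # let the server infer uuid
        },
    )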