Is it possible to insert into a table from a JSON dictionary in PostgreSQL without specifying the columns?
Given there is a table books_1:
CREATE TABLE books_1 ( data json );
INSERT INTO books_1 VALUES ('{ "name": "Book the First", "author": "Bob White" }');
INSERT INTO books_1 VALUES ('{ "name": "Book the Second", "author": "Charles Xavier" }');
INSERT INTO books_1 VALUES ('{ "name": "Book the Third", "author": "Jim Brown" }');
Which I want to insert into a table that has the exact same columns and datatypes, such as:
CREATE TABLE books_2 ( name varchar(100), author varchar(100) );
Ideally I would execute:
INSERT INTO books_2
SELECT something_json(data) FROM books_1
Background: Using standard PostgreSQL I can easily make JSON table dumps, and I can import them again, but they come back as jsonb:
COPY ( SELECT ROW_TO_JSON(t) FROM (SELECT * FROM books_2) t) TO 'C:\Users\Public\books.json';
CREATE TABLE books_1 ( j jsonb );
COPY books_1 FROM 'C:\Users\Public\books.json';
I want to re-import the JSON into 'regular' attributes, not JSON. I could do this by specifying all the columns, but since there are a lot of tables and columns I really don't want to. There should be no need, as the columns are the same anyway.
I'm probably looking for a function that returns a list of columns based on the keys in the JSON. I guess the order would have to match, but since the export is also done from PostgreSQL that is probably guaranteed.
You can use json_populate_record with the rowtype of the target table:
INSERT INTO books_2
SELECT rec.* FROM books_1, json_populate_record(null::books_2, books_1.data) rec;
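Note that json_populate_record matches JSON keys to column names, so the key order in the dump doesn't matter; columns with no matching key come back as NULL. If the dump was imported into a jsonb column (as in the COPY example above), use the jsonb variant instead. A sketch, assuming the jsonb column is named j as in the CREATE TABLE from the question:

```sql
-- jsonb source column: same pattern, jsonb_populate_record instead.
-- Keys are matched by column name; missing keys yield NULL, extra keys are ignored.
INSERT INTO books_2
SELECT rec.*
FROM books_1, jsonb_populate_record(null::books_2, books_1.j) rec;
```

The null::books_2 argument only supplies the row type (and defaults for absent keys); passing NULL is the usual idiom when you want every column taken from the JSON.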