The goal is this: I have a set of values to go into table A, and a set of values to go into table B. The values going into B reference values in A (via a foreign key), so after inserting the A values I need to know how to reference them when inserting the B values. I need this to be as fast as possible.

I made the B values insert with a bulk copy from:
from cStringIO import StringIO          # copy_from reads from a file-like object
from psycopg2.extensions import adapt   # quotes each value for the payload

def bulk_insert_copyfrom(cursor, table_name, field_names, values):
    if not values:
        return
    print "bulk copy from prepare..."
    # One line per row, columns separated by tabs, as COPY FROM expects.
    str_vals = "\n".join("\t".join(adapt(val).getquoted() for val in cur_vals) for cur_vals in values)
    strf = StringIO(str_vals)
    print "bulk copy from execute..."
    cursor.copy_from(strf, table_name, columns=tuple(field_names))
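For reference, the tab-separated payload that copy_from consumes can be sketched without psycopg2 for integer-only rows (a simplification: adapt() above also handles quoting and escaping for strings and NULLs, which this skips):

```python
def build_copy_payload(values):
    # Mirrors the str_vals construction above for plain integers:
    # one line per row, columns separated by tabs.
    return "\n".join("\t".join(str(v) for v in row) for row in values)

# build_copy_payload([[1, 2, 3], [4, 5, 6]]) -> "1\t2\t3\n4\t5\t6"
```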
This was far faster than doing an INSERT VALUES ... RETURNING id query. I'd like to do the same for the A values, but I need to know the ids of the inserted rows.

Is there any way to execute a bulk copy from in this fashion, but to get the id field (primary key) of the rows that are inserted, such that I know which id associates with which value?

If not, what would be the best way to accomplish my goal?
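For contrast, the per-row approach being compared against builds one INSERT ... RETURNING id statement per row. The SQL-building part can be sketched like this (function name and the id column are assumptions for illustration):

```python
def build_insert_returning(table_name, field_names):
    # Parameterized single-row INSERT that hands back the generated key.
    # Executed once per row, which is why it is much slower than COPY.
    placeholders = ", ".join(["%s"] * len(field_names))
    return "INSERT INTO %s (%s) VALUES (%s) RETURNING id" % (
        table_name, ", ".join(field_names), placeholders)
```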
EDIT: Sample data on request:
a_val1 = [1, 2, 3]
a_val2 = [4, 5, 6]
a_vals = [a_val1, a_val2]
b_val1 = [a_val2, 5, 6, 7]
b_val2 = [a_val1, 100, 200, 300]
b_val3 = [a_val2, 9, 14, 6]
b_vals = [b_val1, b_val2, b_val3]
I want to insert the a_vals, then insert the b_vals, using foreign keys instead of references to the list objects.
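Concretely, once primary keys for the A rows have been reserved in advance (e.g. from the table's sequence), resolving the list-object references into foreign keys could look like this sketch (function name is an assumption; it keys the mapping on object identity so duplicate A values stay distinct):

```python
def assign_ids(start_id, a_vals, b_vals):
    # Map each A row (a list object) to a pre-allocated primary key.
    id_of = {id(row): start_id + i for i, row in enumerate(a_vals)}
    # Prepend the explicit id to each A row for the COPY payload.
    a_rows = [[start_id + i] + row for i, row in enumerate(a_vals)]
    # Replace each B row's leading list reference with the A row's id.
    b_rows = [[id_of[id(row[0])]] + row[1:] for row in b_vals]
    return a_rows, b_rows
```

Both result lists can then be fed straight to bulk_insert_copyfrom, with the id column included in field_names for table A.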
Generate the IDs yourself.
At step 2 you probably want to lock the sequence's relation too. If code calls nextval() and stashes that ID somewhere, it may already be in use by the time the code gets around to using it.
Slightly off-topic fact: sequences have a CACHE setting that you can raise if you have lots of backends doing lots of inserts; it hands out the counter in blocks.
http://www.postgresql.org/docs/9.1/static/sql-createsequence.html
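A minimal sketch of "generate the IDs yourself": reserve a batch of ids from the table's sequence in one round trip, then include them explicitly in the COPY payload. The function name and sequence name are assumptions; nextval with generate_series is a standard way to fetch several sequence values at once:

```python
def reserve_ids(cursor, seq_name, n):
    # Grab n ids from the sequence in a single query. The values are
    # guaranteed unique, but not necessarily consecutive if other
    # backends are inserting concurrently.
    cursor.execute(
        "SELECT nextval(%s) FROM generate_series(1, %s)", (seq_name, n))
    return [row[0] for row in cursor.fetchall()]

# e.g. ids = reserve_ids(cursor, "a_id_seq", len(a_vals))
```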