I am using the code below to read a tab-delimited file into a Postgres database:
import pandas as pd
import smart_open
from sqlalchemy import create_engine

enginestring = (cfg.dbuser + ":" + cfg.dbpwd + "@"
                + server.local_bind_host + ":" + str(server.local_bind_port)
                + "/" + cfg.dbname)
engine = create_engine('postgresql://' + enginestring)

rows = []
for line in smart_open.smart_open(key):
    ln = line.decode('utf-8')   # decode() already returns str; wrapping it in str() was redundant
    fields = ln.split('\t')
    rows.append(fields)

df = pd.DataFrame(rows, columns=cfg.df_colheaders)
print(df.head())                # df.head is the bound method; call it to print the first rows
df.to_sql(name=table_name, con=engine, if_exists='append')
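As an aside, the manual split-and-append loop can usually be replaced by pandas.read_csv with sep='\t', which handles the decoding and splitting in one step. A minimal sketch, using an in-memory byte buffer standing in for the smart_open stream (the sample data and column names here are made up for illustration):

```python
import io
import pandas as pd

# Hypothetical tab-delimited bytes standing in for the smart_open stream.
raw = b"a\t1\nb\t2\n"

# read_csv decodes and splits on tabs itself; no manual loop needed.
df = pd.read_csv(io.BytesIO(raw), sep="\t", header=None, names=["letter", "number"])
print(df.shape)  # (2, 2)
```

In the real code the buffer would be the object returned by smart_open, and names= would be cfg.df_colheaders.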
The call to print shows the DataFrame I expect (i.e. [798624 rows x 133 columns]), and the call to to_sql does not raise an error, yet in the database I see only one row of data with the correct columns. The result is the same whether or not the table already exists.
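One way to narrow this down is to count the rows on the database side immediately after to_sql returns, using the same engine. A minimal sketch of that check, using an in-memory SQLite engine as a stand-in for the Postgres one (table and column names here are invented for the example):

```python
import pandas as pd
from sqlalchemy import create_engine, text

# In-memory SQLite stands in for the real Postgres engine.
engine = create_engine("sqlite://")

df = pd.DataFrame({"col_a": range(5), "col_b": list("abcde")})
df.to_sql(name="demo", con=engine, if_exists="append", index=False)

# Count rows through the same connection to see what actually landed.
with engine.connect() as conn:
    count = conn.execute(text("SELECT COUNT(*) FROM demo")).scalar()
print(count)  # 5 if every DataFrame row was inserted
```

If this count matches len(df) but a separate client sees one row, the discrepancy is likely on the connection side (e.g. looking at a different database through the SSH tunnel) rather than in to_sql itself.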
OK here is an update: