
pandas to_sql only writing first row to db


I am using the code below to read a tab-delimited file into a Postgres database:

import pandas as pd
import smart_open
from sqlalchemy import create_engine

import cfg  # local config module holding DB credentials and column headers

# Build the SQLAlchemy URL from config and the SSH tunnel endpoints
enginestring = cfg.dbuser + ":" + cfg.dbpwd + "@" + server.local_bind_host + ":" + str(server.local_bind_port) + "/" + cfg.dbname
engine = create_engine('postgresql://' + enginestring)

# Read the tab-delimited file line by line and split each line into fields
rows = []
for line in smart_open.smart_open(key):
    ln = str(line.decode('utf-8'))
    fields = ln.split('\t')
    rows.append(fields)

df = pd.DataFrame(rows, columns=cfg.df_colheaders)
print(df.head())
df.to_sql(name=table_name, con=engine, if_exists='append')

The call to print shows the DataFrame I expect (i.e. [798624 rows x 133 columns]), and the call to to_sql doesn't raise an error, yet in the DB I see only one row of data with the correct columns. (Same result whether or not the table already exists.)


Solution

  • OK, here is an update:

    • I solved the single-row issue by stripping the EOL characters from each line before splitting (I could see a ¶ at the end of the last inserted field).
    • After that I was just getting empty tables, so I added the chunksize parameter to to_sql. I'm not sure why it proceeded silently instead of failing, but it's OK now.
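Putting both fixes together, the loading loop might look like the sketch below. It is self-contained for illustration: the hard-coded raw_lines, the column names, and the in-memory SQLite engine stand in for smart_open, cfg.df_colheaders, and the Postgres engine from the question, and the chunksize value is an arbitrary choice.

```python
import pandas as pd
from sqlalchemy import create_engine

# Simulated raw lines as smart_open would yield them: bytes with a trailing EOL
raw_lines = [b"a1\tb1\tc1\n", b"a2\tb2\tc2\n", b"a3\tb3\tc3\n"]

rows = []
for line in raw_lines:
    # Strip the EOL characters BEFORE splitting, so the last field is clean
    ln = line.decode('utf-8').rstrip('\r\n')
    rows.append(ln.split('\t'))

df = pd.DataFrame(rows, columns=['col_a', 'col_b', 'col_c'])

# In-memory SQLite stands in for the Postgres engine; chunksize makes
# pandas issue the INSERTs in batches rather than one giant statement
engine = create_engine('sqlite://')
df.to_sql(name='my_table', con=engine, if_exists='append',
          index=False, chunksize=1000)

written = pd.read_sql('SELECT * FROM my_table', con=engine)
print(len(written))  # 3
```

Without the rstrip, the newline stays attached to the last field of every row, which is what produced the stray ¶ characters in the inserted data.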