Tags: sql-server, pandas, pyodbc, execute

Pandas dataframe insert into SQL Server taking too long with execute and executemany


I have a pandas dataframe with 27 columns and ~45k rows that I need to insert into a SQL Server table.

I am currently using the code below, and it takes 90 minutes to insert:

conn = pyodbc.connect('Driver={ODBC Driver 17 for SQL Server};'
                      'Server=@servername;'
                      'Database=dbtest;'
                      'Trusted_Connection=yes;')
cursor = conn.cursor()  # create cursor

for index, row in t6.iterrows():
    # one INSERT statement, and one server round trip, per row
    cursor.execute("insert into dbtest.dbo.test (col1, col2, col3, col4, col5, col6, col7, "
                   "col8, col9, col10, col11, col12, col13, col14, ..., col27) "
                   "values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)",
                   row['col1'], row['col2'], row['col3'], ..., row['col27'])

I have also tried loading with executemany, and that takes even longer to complete, at nearly 120 minutes.
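
The executemany version was roughly:

# one parameter tuple per row, passed to executemany in a single call;
# without fast_executemany, pyodbc still issues one round trip per row
params = list(t6.itertuples(index=False, name=None))
cursor.executemany("insert into dbtest.dbo.test (col1, col2, ..., col27) "
                   "values (?,?, ..., ?)",
                   params)
conn.commit()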

I am really looking for a faster load time since I need to run this daily.


Solution

  • You can enable fast_executemany in pyodbc itself for versions >= 4.0.19. It is off by default.

    import pyodbc
    
    server_name = 'localhost'
    database_name = 'AdventureWorks2019'
    table_name = 'MyTable'
    driver = 'ODBC Driver 17 for SQL Server'
    
    connection = pyodbc.connect(driver='{'+driver+'}', server=server_name, database=database_name, trusted_connection='yes') 
    
    cursor = connection.cursor()
    
    cursor.fast_executemany = True   # reduce number of calls to server on inserts
    
    # form SQL statement
    columns = ", ".join(df.columns)
    
    values = '('+', '.join(['?']*len(df.columns))+')'
          
    statement = "INSERT INTO "+table_name+" ("+columns+") VALUES "+values
    
    # extract values from DataFrame into list of tuples
    insert = [tuple(x) for x in df.values]
    
    cursor.executemany(statement, insert)
    connection.commit()  # autocommit is off by default in pyodbc, so persist the inserts
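
    With fast_executemany enabled, pyodbc binds the parameters as arrays and sends the rows to the server in large batches instead of one round trip per row, which is where the bulk of the time goes.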
    

    Or, if you prefer SQLAlchemy, you can write the DataFrame directly with to_sql:

    import sqlalchemy as db
    
    # spaces in the ODBC driver name must be encoded as '+' in the connection URL
    engine = db.create_engine('mssql+pyodbc://@'+server_name+'/'+database_name+'?trusted_connection=yes&driver='+driver.replace(' ', '+'), fast_executemany=True)
    
    df.to_sql(table_name, engine, if_exists='append', index=False)
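
    For very wide or very long frames you can also cap the batch size with pandas' chunksize parameter (the value below is illustrative):

    df.to_sql(table_name, engine, if_exists='append', index=False, chunksize=10000)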
    
    

    See fast_executemany in the pyodbc wiki:

    https://github.com/mkleehammer/pyodbc/wiki/Features-beyond-the-DB-API
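
    To confirm the speedup on your own data, a rough timing harness (assuming the df and engine from above):

    import time

    start = time.perf_counter()
    df.to_sql(table_name, engine, if_exists='append', index=False)
    print(f"loaded {len(df)} rows in {time.perf_counter() - start:.1f}s")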