Tags: python, sql-server, pandas, string, nan

How to convert a column with missing values to string?


I need to export a dataframe from pandas to Microsoft SQL Server using SQLAlchemy. Many columns are strings, with missing values, and some contain very long integers, e.g. 999999999999999999999999999999999. These numbers are some kind of foreign key, so the value itself doesn't mean anything and I can safely convert them to strings.

This causes the following error in SQLAlchemy when trying to export to SQL:

OverflowError: int too big to convert
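For context, the value has far more digits than a 64-bit integer can hold, which is presumably what the driver is trying to convert it to; a quick check in plain Python:

big = 999999999999999999999999999999999
print(big.bit_length())   # 110 -- well beyond the 63 usable bits of a signed 64-bit int
print(big > 2**63 - 1)    # True: outside SQL Server's BIGINT range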

I tried converting to string with astype(str), but then I run into the problem that missing values, represented as NaN, are converted into the string 'nan' - so SQL sees them not as NULLs but as the literal string 'nan'.
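A minimal illustration of that behaviour (the Series here is made up for demonstration):

import numpy as np
import pandas as pd

s = pd.Series(['my string', np.nan])
print(s.astype(str).tolist())   # ['my string', 'nan'] -- the NaN has become the string 'nan'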

The only solution I have found is to first convert to str and then replace 'nan' with numpy.nan. Is there a better way? This is cumbersome, relatively slow, and about as unpythonic as it gets: first I convert everything to string, the conversion turns nulls into the string 'nan', so I then convert those back into NaN (which can only be a float), and I end up with a mixed-type column.

Or do I simply have to suck it up and accept that pandas is dreadful at dealing with missing values?

I have an example below:

import time

import numpy as np
import pandas as pd
from sqlalchemy import create_engine

start = time.time()
ServerName = r"DESKTOP-MRX\SQLEXPRESS"  # raw string so the backslash is not treated as an escape
Database = 'MYDATABASE'
params = '?driver=SQL+Server+Native+Client+11.0'
engine = create_engine('mssql+pyodbc://' + ServerName + '/' + Database + params, encoding='latin1')
conn = engine.connect()

df = pd.DataFrame()
df['mixed'] = np.arange(0, 9)
df.iloc[0, 0] = 'test'
df['numb'] = 3.0
df['text'] = 'my string'
df.iloc[0, 2] = np.nan                              # missing value in the string column
df.iloc[1, 2] = 999999999999999999999999999999999   # long integer "key" that overflows 64 bits

# workaround: cast everything to str, then turn the literal 'nan' back into a real NaN
df['text'] = df['text'].astype(str).replace('nan', np.nan)

df.to_sql('test_df_mixed_types', engine, schema='dbo', if_exists='replace')

Solution

  • Using np.where is a bit faster than replace, i.e.

    df['text'] = np.where(pd.isnull(df['text']), df['text'], df['text'].astype(str))
    

    Timings:

    %%timeit
    df['text'].astype(str).replace('nan',np.nan)
    1000 loops, best of 3: 536 µs per loop
    
    %%timeit
    np.where(pd.isnull(df['text']),df['text'],df['text'].astype(str))
    1000 loops, best of 3: 274 µs per loop
    
    x = pd.concat([df['text']]*10000)
    %%timeit
    np.where(pd.isnull(x),x,x.astype(str))
    10 loops, best of 3: 28.8 ms per loop
    
    %%timeit
    x.astype(str).replace('nan',np.nan)
    10 loops, best of 3: 33.5 ms per loop
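
For completeness, a self-contained sketch of the np.where approach on a frame like the one in the question (no database connection needed; names are taken from the example above):

import numpy as np
import pandas as pd

df = pd.DataFrame({'text': ['my string'] * 3})
df.iloc[0, 0] = np.nan
df.iloc[1, 0] = 999999999999999999999999999999999

# cast only the non-null entries to str; nulls stay as genuine NaN
df['text'] = np.where(pd.isnull(df['text']), df['text'], df['text'].astype(str))

print(df['text'].tolist())
# [nan, '999999999999999999999999999999999', 'my string']

Since np.where returns a NumPy array, assigning it back gives an object-dtype column holding real NaN values alongside strings, which is exactly what to_sql needs to emit NULLs.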