I'm trying to load Salesforce data to Azure SQL Database incrementally by launching a Python script on Azure Databricks.
Since I'm not able to install Devart ODBC in Azure Databricks, I'm trying to use simple_salesforce to get data from salesforce:
import pandas as pd
import pyodbc
from simple_salesforce import Salesforce, SalesforceLogin, SFType
from sqlalchemy.types import Integer, Text, String, DateTime
from sqlalchemy import create_engine
import urllib
sf = Salesforce(username=username, password=password, security_token=jeton)
prep_qr = "SELECT {} FROM Account WHERE CONDITION"  # {} is filled with the comma-separated field list
soql = prep_qr.format(','.join(field_names))
results = sf.query_all(soql)['records']
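(field_names here is just the list of column names I want to pull. As an illustration only, and assuming an Account load, it could be built from simple_salesforce's describe metadata like this:)
# Illustration only: derive the field list from the object's describe metadata
field_names = [field['name'] for field in sf.Account.describe()['fields']]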
I get the following result (an example):
[OrderedDict([('attributes', OrderedDict([('type', 'Account'), ('url', '/services/data/v42.0/sobjects/Account/0014K000009aoU3QAI')])), ('Id', 'XY1'), ('Name', 'Y'), ('Date', '2020-11-24T09:16:17.000+0000')])]
Then I converted the output to a pandas Dataframe:
results = pd.DataFrame(sf.query_all(soql)['records'])
results.drop(columns=['attributes'], inplace=True)  # drop the Salesforce metadata, keep only the data columns
I got something like this (just an example):
| Id  | Name | Date                         |
|-----|------|------------------------------|
| XY1 | Y    | 2020-11-24T09:16:17.000+0000 |
To ingest this data into Azure SQL Database, I used sqlalchemy to turn the DataFrame into SQL statements, with pyodbc handling the insertion into the destination (Azure SQL Database), as shown below:
df = pd.DataFrame(results)
df.reset_index(drop=True, inplace=True)  # reset the DataFrame index
# Create the SQLAlchemy engine (via pyodbc) connected to Azure SQL Database:
params = urllib.parse.quote_plus(
    r'DRIVER={ODBC Driver 17 for SQL Server};SERVER=' + server + ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password
)
conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
engine_azure = create_engine(conn_str, echo=True)
df.to_sql('account', engine_azure, if_exists='append', index=False)
But I get the following error:
sqlalchemy.exc.DataError: (pyodbc.DataError) ('22007', '[22007] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Conversion failed when converting date and/or time from character string. (241) (SQLExecDirectW)')
I think the problem is that the simple_salesforce library brings back the date/time in this format:
2020-11-24T09:16:17.000+0000
But in Azure SQL Database it should be something like this:
2020-11-24T09:16:17.000
The problem here is that I'm loading the tables dynamically (I don't even know in advance which tables or columns I'm loading), which is why I can't cast these data types explicitly. I need a way to pass the data types to pyodbc automatically.
What can you recommend, please?
Thanks,
If the date/time values are consistently returned as strings of the form 2020-11-24T11:22:33.000+0000, then you can use pandas' .apply() method to convert them to the 2020-11-24 11:22:33.000 format that SQL Server will accept:
import pandas as pd
import sqlalchemy as sa
from pprint import pprint

df = pd.DataFrame(
    [
        (1, "2020-11-24T11:22:33.000+0000"),
        (2, None),
        (3, "2020-11-24T12:13:14.000+0000"),
    ],
    columns=["id", "dtm"],
)
print(df)
"""console output:
id dtm
0 1 2020-11-24T11:22:33.000+0000
1 2 None
2 3 2020-11-24T12:13:14.000+0000
"""
df["dtm"] = df["dtm"].apply(lambda x: x[:23].replace("T", " ") if x else None)
print(df)
"""console output:
id dtm
0 1 2020-11-24 11:22:33.000
1 2 None
2 3 2020-11-24 12:13:14.000
"""
df.to_sql(
    table_name,
    engine,
    index=False,
    if_exists="append",
)
with engine.begin() as conn:
    pprint(conn.execute(sa.text(f"SELECT * FROM {table_name}")).fetchall())
"""console output:
[(1, datetime.datetime(2020, 11, 24, 11, 22, 33)),
(2, None),
(3, datetime.datetime(2020, 11, 24, 12, 13, 14))]
"""