This must be a simple one, but I've been stuck on it for quite some time.
I'm trying to pass parameters to my INSERT statement, but the output is returning NULL. What am I doing wrong here? I'm writing this in a Python notebook on Azure Databricks.
spark.sql("CREATE TABLE IF NOT EXISTS DB.RUN_LOG (RunId INT, CreatedDate timestamp, Status string, ErrorDetail string)")
dfMaxRunID = spark.sql("select COALESCE(MAX(RunId),0) MaxRunId from DB.RUN_LOG")
vMaxRunId = dfMaxRunID.first()['MaxRunId']
vInsertRunId = vMaxRunId + 1
vFinal_CurrentTimeStamp = '2019-07-24 12:02:41'
print(vMaxRunId)
print(vInsertRunId)
print(vFinal_CurrentTimeStamp)
spark.sql("INSERT INTO TABLE DB.RUN_LOG values('vInsertRunId','vFinal_CurrentTimeStamp',null,null)")
spark.sql("SELECT * FROM DB.RUN_LOG").show()
Python does not substitute variable names inside a plain string literal, so your statement inserts the literal strings `'vInsertRunId'` and `'vFinal_CurrentTimeStamp'` (which cast to NULL for the INT column). Replace your insert statement with:
>>> spark.sql("INSERT INTO TABLE DB.RUN_LOG values(%s,'%s','%s','%s')"%(vInsertRunId,vFinal_CurrentTimeStamp,'null','null'))
DataFrame[]
>>> spark.sql("SELECT * FROM DB.RUN_LOG").show()
+-----+-------------------+------+-----------+
|RunId| CreatedDate|Status|ErrorDetail|
+-----+-------------------+------+-----------+
| 1|2019-07-24 12:02:41| null| null|
+-----+-------------------+------+-----------+
hive> select * from test_dev_db.RUN_LOG;
OK
1 2019-07-24 12:02:41 null null
Time taken: 0.217 seconds, Fetched: 1 row(s)
Just checked: you need real NULLs in the last two columns (not the string `'null'`), so the correct statement would be:
spark.sql("INSERT INTO TABLE db.RUN_LOG values(%s,'%s',null,null)"%(vInsertRunId,vFinal_CurrentTimeStamp))
>>> spark.sql("SELECT * FROM db.RUN_LOG").show()
+-----+-------------------+------+-----------+
|RunId| CreatedDate|Status|ErrorDetail|
+-----+-------------------+------+-----------+
| 1|2019-07-24 12:02:41| null| null|
+-----+-------------------+------+-----------+
hive> select * from test_dev_db.RUN_LOG;
OK
1 2019-07-24 12:02:41 NULL NULL
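To see why the original statement fails without needing a Spark session, here is a minimal Spark-free sketch (variable names taken from the question). A plain string literal keeps the variable names verbatim; only `%`-formatting or an f-string substitutes the actual values before Spark ever parses the query:

```python
vInsertRunId = 1
vFinal_CurrentTimeStamp = '2019-07-24 12:02:41'

# Broken: Python does NOT interpolate variables inside a plain string literal,
# so Spark receives the literal text 'vInsertRunId', which casts to NULL for an INT column.
broken = "INSERT INTO TABLE DB.RUN_LOG values('vInsertRunId','vFinal_CurrentTimeStamp',null,null)"

# Fixed with an f-string: values are substituted before the SQL string is built.
fixed = f"INSERT INTO TABLE DB.RUN_LOG values({vInsertRunId},'{vFinal_CurrentTimeStamp}',null,null)"

print(broken)  # still contains the variable names
print(fixed)   # contains 1 and the timestamp
```

Either `fixed` string can then be passed to `spark.sql(...)` exactly as in the answer above; f-strings (Python 3.6+) are just a more readable alternative to the `%s` formatting shown there.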