Tags: python, python-3.x, azure-databricks

.saveAsTable stopping sparkContext


I am trying to save a Spark DataFrame as a Delta table. However, I am running into an error where the SparkContext is stopped when I call .saveAsTable. I am running a for loop that executes multiple queries and writes each result to a Delta table, but one specific query refuses to be written out as a Delta table. Here is the error:

An error occurred while calling o4168.saveAsTable.
: org.apache.spark.SparkException: Job aborted.

# many "at ..." stack frames omitted

Caused by: java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.
This stopped SparkContext was created at:

org.apache.spark.SparkContext.getOrCreate(SparkContext.scala)
org.apache.livy.rsc.driver.SparkEntries.sc(SparkEntries.java:52)
org.apache.livy.rsc.driver.SparkEntries.sparkSession(SparkEntries.java:66)
org.apache.livy.repl.AbstractSparkInterpreter.postStart(AbstractSparkInterpreter.scala:144)
org.apache.livy.repl.SparkInterpreter.$anonfun$start$1(SparkInterpreter.scala:138)
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
org.apache.livy.repl.AbstractSparkInterpreter.restoreContextClassLoader(AbstractSparkInterpreter.scala:495)
org.apache.livy.repl.SparkInterpreter.start(SparkInterpreter.scala:113)
org.apache.livy.repl.Session.$anonfun$start$1(Session.scala:283)
scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
scala.util.Success.$anonfun$map$1(Try.scala:255)
scala.util.Success.map(Try.scala:213)
scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:750)

The currently active sparkContext was created at:

org.apache.spark.SparkContext.getOrCreate(SparkContext.scala)
org.apache.livy.rsc.driver.SparkEntries.sc(SparkEntries.java:52)
org.apache.livy.rsc.driver.SparkEntries.sparkSession(SparkEntries.java:66)
org.apache.livy.repl.AbstractSparkInterpreter.postStart(AbstractSparkInterpreter.scala:144)
org.apache.livy.repl.SparkInterpreter.$anonfun$start$1(SparkInterpreter.scala:138)
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
org.apache.livy.repl.AbstractSparkInterpreter.restoreContextClassLoader(AbstractSparkInterpreter.scala:495)
org.apache.livy.repl.SparkInterpreter.start(SparkInterpreter.scala:113)
org.apache.livy.repl.Session.$anonfun$start$1(Session.scala:283)
scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
scala.util.Success.$anonfun$map$1(Try.scala:255)
scala.util.Success.map(Try.scala:213)
scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:750)

# more "at ..." stack frames omitted

Any help is appreciated.

UPDATE: Here is my save statement:

spark_df.write.mode("overwrite").format("delta").saveAsTable(tablename)

The error is raised at the .saveAsTable call.
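For context, a minimal sketch of the loop pattern described in the question. The query names, the `write_tables` helper, and the `writer` parameter are all hypothetical illustrations, not the asker's actual code. The point of the sketch: once one query kills the SparkContext, every later `.saveAsTable` call fails with the same "Cannot call methods on a stopped SparkContext" error, so it can help to isolate each write in a try/except and check the context before continuing.

```python
def write_tables(spark, queries, writer=None):
    """Run each query and save it as a Delta table, isolating failures.

    `queries` maps table name -> SQL string. `writer` is injectable so the
    loop logic can be tested without a live cluster; by default it runs the
    query and calls saveAsTable, matching the statement in the question.
    """
    if writer is None:
        def writer(name, sql):
            df = spark.sql(sql)
            df.write.mode("overwrite").format("delta").saveAsTable(name)

    failures = {}
    for name, sql in queries.items():
        # If the context is already dead, later writes would all raise the
        # same IllegalStateException -- record it and skip instead.
        # (_jsc is a private PySpark attribute; this check is an assumption.)
        if spark is not None and spark.sparkContext._jsc.sc().isStopped():
            failures[name] = "SparkContext already stopped"
            continue
        try:
            writer(name, sql)
        except Exception as exc:
            # Keep going so one bad query doesn't block the other tables.
            failures[name] = str(exc)
    return failures
```

Returning the `failures` dict makes it easy to see which single query is the troublemaker, rather than the whole loop dying at the first stopped-context error.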


Solution

  • Just figured it out: I moved the troubled query to a new notebook and it ran just fine. Odd!