scala · apache-spark · apache-zeppelin

How to run Spark on Zeppelin to analyze XML files


I am able to run the Spark shell with bin/spark-shell --packages com.databricks:spark-xml_2.11:0.3.0 to analyze XML files, for example:

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
val df = sqlContext.read
    .format("com.databricks.spark.xml")
    .option("rowTag", "book")
    .load("books.xml")

but how can I do the same in Zeppelin? Does Zeppelin need some startup parameter to import com.databricks.spark.xml? Right now I am getting:

java.lang.RuntimeException: Failed to load class for data source: com.databricks.spark.xml
    at scala.sys.package$.error(package.scala:27)
    at org.apache.spark.sql.sources.ResolvedDataSource$.lookupDataSource(ddl.scala:220)
    at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:233)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:114)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:104)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:26)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:31)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:33)
    at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:35)
    at $iwC$$iwC$$iwC$$iwC.<init>(<console>:37)
    at $iwC$$iwC$$iwC.<init>(<console>:39)
    at $iwC$$iwC.<init>(<console>:41)
    at $iwC.<init>(<console>:43)
    at <init>(<console>:45)
    at .<init>(<console>:49)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
    at org.apache.zeppelin.spark.SparkInterpreter.interpretInput(SparkInterpreter.java:709)
    at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:674)
    at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:667)
    at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:300)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:169)
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:134)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)


Solution

  • In Zeppelin, you need to load those dependencies before the SparkContext is created.

    In a separate cell, add and run the following:

    %dep
    z.reset()
    z.addRepo("Spark Packages Repo").url("http://dl.bintray.com/spark-packages/maven")
    z.load("com.databricks:spark-xml_2.11:0.3.0")
    

    If this gives you an error of the type "You have to add dependencies before starting your SparkContext", just restart the interpreter or restart Zeppelin. After that, the code from the question works unchanged, as in the sketch below.
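
    Once the %dep cell has run successfully, a regular Spark cell can load the XML file exactly as in the spark-shell example. A minimal sketch (the books.xml path is the sample file from the question):

    %spark
    import org.apache.spark.sql.SQLContext

    val sqlContext = new SQLContext(sc)

    // spark-xml registers itself under this data source name
    val df = sqlContext.read
        .format("com.databricks.spark.xml")
        .option("rowTag", "book")   // treat each <book> element as one row
        .load("books.xml")

    df.printSchema()                // schema is inferred from the XML structure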