apache-spark, azure-databricks

Azure Databricks Spark XML Library - Trying to read xml files


I am trying to create a Databricks notebook that reads an XML file from Azure Data Lake and converts it to Parquet. I got the spark-xml library from here: https://github.com/databricks/spark-xml. I followed the example provided in the GitHub README but was not able to get it working.

df = (spark.read.format("xml")
  .option("rootTag", "catalog")
  .option("rowTag", "book")
  .load("adl://mysandbox.azuredatalakestore.net/Source/catalog.xml"))


  Exception Details:

  java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class

  StackTrace: 

 /databricks/spark/python/pyspark/sql/readwriter.py in load(self, path, format, schema, **options)
  164         self.options(**options)
  165         if isinstance(path, basestring):
  --> 166             return self._df(self._jreader.load(path))
  167         elif path is not None:
  168             if type(path) != list:

  /databricks/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
  1255         answer = self.gateway_client.send_command(command)
  1256         return_value = get_return_value(
  -> 1257             answer, self.gateway_client, self.target_id, self.name)
  1258

Are there any other dependencies I need to define for parsing the XML? Appreciate the help.


Solution

  • Phew, finally got the issue resolved. The error message doesn't give any details of the exception, but the issue is a version mismatch between the spark-xml library and the Scala version of the cluster. I updated the library to match my cluster's Scala version and the problem was resolved. Hope it helps someone having the same issue. A quick way to check which Scala version your cluster runs is sketched below.
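
As a minimal sketch (it relies on PySpark's internal `_jvm` py4j gateway, so treat it as a debugging convenience rather than a stable API), you can print the cluster's Scala version from a notebook cell and then install the spark-xml artifact whose suffix matches it:

# Print the Scala version of the JVM backing this Spark session.
# scala.util.Properties.versionString() returns e.g. "version 2.12.10".
print(spark.sparkContext._jvm.scala.util.Properties.versionString())

# Then install the Maven coordinate whose suffix matches that version
# (Clusters UI -> Libraries -> Install New -> Maven), for example:
#   com.databricks:spark-xml_2.11:<version>   for Scala 2.11 clusters
#   com.databricks:spark-xml_2.12:<version>   for Scala 2.12 clusters

A NoClassDefFoundError on a scala.collection class, as in the question, is the usual symptom of this mismatch: the library was compiled against a different Scala binary version than the one running on the cluster.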