Tags: azure, azure-databricks, delta-lake, databricks-autoloader

Databricks: Issue when using Auto Loader to read JSON files from Azure ADLS Gen2


I ran into an issue when trying to use Auto Loader to read JSON files from Azure ADLS Gen2. It happens only for specific files; I checked those files and they are valid, not corrupted.

Here is the error:

com.databricks.sql.io.FileReadException: Error while reading file /mnt/Source/kafka/customer_raw/filtered_data/year=2022/month=11/day=9/hour=15/part-00000-31413bcf-0a8f-480f-8d45-6970f4c4c9f7.c000.json.
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.logFileNameAndThrow(FileScanRDD.scala:598)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:422)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(null:-1)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.IllegalArgumentException: requirement failed: Literal must have a corresponding value to string, but class Integer found.
    at scala.Predef$.require(Predef.scala:281)
    at org.apache.spark.sql.catalyst.expressions.Literal$.validateLiteralValue(literals.scala:274)
    ...

I am using a Delta Live Tables pipeline. Here is the code:

import dlt

@dlt.table(name = tablename,
    comment = "Create Bronze Table",
    table_properties={
        "quality": "bronze"
    }
)
def Bronze_Table_Create():
    return (
        spark
        .readStream
        .schema(schemapath)
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", schemalocation)
        .option("cloudFiles.inferColumnTypes", "false")
        .option("cloudFiles.schemaEvolutionMode", "rescue")
        .load(sourcelocation)
    )

Solution

  • I got the issue resolved. The problem was that, by mistake, we had duplicate columns in the schema file, and that is what triggered this error. However, the error message is totally misleading, which is why I wasn't able to identify the cause at first.
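
    For anyone hitting the same misleading error, a quick sanity check on the schema file can catch duplicate columns before Auto Loader does. Below is a minimal sketch, assuming the schema is stored as Spark StructType JSON and reusing the schemapath variable from the question; load_schema_checked is a hypothetical helper, not part of any Databricks or Spark API:

    import json
    from collections import Counter
    from pyspark.sql.types import StructType

    # Hypothetical helper: load the schema file and fail fast if any
    # column name appears more than once.
    def load_schema_checked(schemapath):
        with open(schemapath) as f:
            schema = StructType.fromJson(json.load(f))
        # Spark resolves column names case-insensitively by default,
        # so compare lower-cased names.
        counts = Counter(field.name.lower() for field in schema.fields)
        duplicates = [name for name, n in counts.items() if n > 1]
        if duplicates:
            raise ValueError(f"Duplicate columns in schema file: {duplicates}")
        return schema

    Calling this once before building the stream (e.g. schema = load_schema_checked(schemapath)) turns the obscure Literal error into an explicit message naming the offending columns.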