
_corrupt_record error when reading a JSON file into Spark


I've got this JSON file:

{
    "a": 1, 
    "b": 2
}

which was written with Python's json.dump method. Now I want to read this file into a DataFrame in Spark, using pyspark. Following the documentation, I'm doing this:

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext()
sqlc = SQLContext(sc)

# Read the JSON file into a DataFrame and display it
df = sqlc.read.json('my_file.json')
df.show()

df.show() prints this, though:

+---------------+
|_corrupt_record|
+---------------+
|              {|
|       "a": 1, |
|         "b": 2|
|              }|
+---------------+

Does anyone know what's going on, and why the file isn't being interpreted correctly?


Solution

  • You need one JSON object per line in your input file (the JSON Lines format that Spark's reader expects by default); see https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrameReader.json.html. If you can't rewrite the file, see the multiLine alternative at the end of this answer.

    If your JSON file looks like this, it will give you the expected DataFrame:

    { "a": 1, "b": 2 }
    { "a": 3, "b": 4 }
    
    Reading that file the same way and calling df.show() then gives:

    +---+---+
    |  a|  b|
    +---+---+
    |  1|  2|
    |  3|  4|
    +---+---+
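
  • To produce such a file from Python, one option is to call json.dumps once per record instead of json.dump on the whole structure. A minimal sketch, assuming your data is a list of dicts (the name records and the path my_file.json are placeholders):

    import json

    # Placeholder sample data; substitute your own records.
    records = [{"a": 1, "b": 2}, {"a": 3, "b": 4}]

    # Write one JSON object per line (JSON Lines), which
    # DataFrameReader.json expects by default.
    with open('my_file.json', 'w') as f:
        for rec in records:
            f.write(json.dumps(rec) + '\n')

  • Alternatively, if you can't change the file, Spark 2.2 and later can read multi-line JSON directly through the reader's multiLine option:

    df = sqlc.read.json('my_file.json', multiLine=True)
    df.show()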