Tags: scala, apache-spark, apache-spark-sql, distributed-computing

Flattening Rows in Spark


I am doing some testing of Spark using Scala. We usually read JSON files that need to be manipulated, as in the following example:

test.json:

{"a":1,"b":[2,3]}
val test = sqlContext.read.json("test.json")

How can I convert it to the following format:

{"a":1,"b":2}
{"a":1,"b":3}

Solution

  • You can use the explode function:

    scala> import org.apache.spark.sql.functions.explode
    import org.apache.spark.sql.functions.explode
    
    
    scala> val test = sqlContext.read.json(sc.parallelize(Seq("""{"a":1,"b":[2,3]}""")))
    test: org.apache.spark.sql.DataFrame = [a: bigint, b: array<bigint>]
    
    scala> test.printSchema
    root
     |-- a: long (nullable = true)
     |-- b: array (nullable = true)
     |    |-- element: long (containsNull = true)
    
    scala> val flattened = test.withColumn("b", explode($"b"))
    flattened: org.apache.spark.sql.DataFrame = [a: bigint, b: bigint]
    
    scala> flattened.printSchema
    root
     |-- a: long (nullable = true)
     |-- b: long (nullable = true)
    
    scala> flattened.show
    +---+---+
    |  a|  b|
    +---+---+
    |  1|  2|
    |  1|  3|
    +---+---+
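
    To get back to the JSON-lines format from the question, you can serialize the result with toJSON (a continuation of the same shell session; in Spark 1.x toJSON returns an RDD[String]):

    scala> flattened.toJSON.collect.foreach(println)
    {"a":1,"b":2}
    {"a":1,"b":3}

    The same flattening can also be written as a single select, which is equivalent to the withColumn form above:

    scala> test.select($"a", explode($"b").as("b")).show
    +---+---+
    |  a|  b|
    +---+---+
    |  1|  2|
    |  1|  3|
    +---+---+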