
Explode map column in Pyspark without losing null values


Is there an elegant way to explode a map column in PySpark 2.2 without losing null values? explode_outer was only introduced in PySpark 2.3.

The schema of the affected column is:

 |-- foo: map (nullable = true)
 |    |-- key: string
 |    |-- value: struct (valueContainsNull = true)
 |    |    |-- first: long (nullable = true)
 |    |    |-- last: long (nullable = true)

I would like to replace the empty map with some dummy values so that I can explode the whole dataframe without losing null values. I have tried something like this, but I get an error:

from pyspark.sql.functions import when, size, col
df = spark.read.parquet("path").select(
        when(size(col("foo")) == 0, {"key": [0, 0]}).alias("bar")
    )

And the error:

Py4JJavaError: An error occurred while calling z:org.apache.spark.sql.functions.when.
: java.lang.RuntimeException: Unsupported literal type class java.util.HashMap {key=[0, 0]}
    at org.apache.spark.sql.catalyst.expressions.Literal$.apply(literals.scala:77)
    at org.apache.spark.sql.catalyst.expressions.Literal$$anonfun$create$2.apply(literals.scala:163)
    at org.apache.spark.sql.catalyst.expressions.Literal$$anonfun$create$2.apply(literals.scala:163)
    at scala.util.Try.getOrElse(Try.scala:79)
    at org.apache.spark.sql.catalyst.expressions.Literal$.create(literals.scala:162)
    at org.apache.spark.sql.functions$.typedLit(functions.scala:112)
    at org.apache.spark.sql.functions$.lit(functions.scala:95)
    at org.apache.spark.sql.functions$.when(functions.scala:1256)
    at org.apache.spark.sql.functions.when(functions.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)

Solution

  • So I have finally made it work. I replaced the empty map with dummy values, then exploded the column and dropped the original.

    from pyspark.sql.functions import udf, explode
    from pyspark.sql.types import (MapType, StringType, StructType,
                                   StructField, LongType)

    # Replace a null or empty map with a dummy entry so explode keeps the row
    # (checking `not x` also covers None, where len(x) would raise TypeError)
    replace_empty_map = udf(
        lambda x: {"key": (0, 1)} if not x else x,
        MapType(StringType(),
                StructType([StructField("first", LongType()),
                            StructField("last", LongType())]))
    )

    df = df.withColumn("foo_replaced", replace_empty_map(df["foo"])).drop("foo")
    df = df.select("*", explode("foo_replaced").alias("foo_key", "foo_val")).drop("foo_replaced")