Tags: scala, apache-spark, categorical-data, apache-spark-ml

Handling NULL values in Spark StringIndexer


I have a dataset with some categorical string columns that I want to represent as doubles. I used StringIndexer for this conversion and it works, but when I tried it on another dataset that contains NULL values, it threw a java.lang.NullPointerException and failed.

For better understanding here is my code:

for (col <- cols) {
    // assumes df and cols are defined, and df is a var so it can be reassigned
    val outName = col + "_"
    val indexer = new StringIndexer().setInputCol(col).setOutputCol(outName)
    val indexed = indexer.fit(df).transform(df)
    df = indexed.withColumn(col, indexed(outName)).drop(outName)
}

So how can I solve this NULL data problem with StringIndexer?

Or is there a better solution for converting string-typed categorical data with NULL values to double?


Solution

  • Spark >= 2.2

    Since Spark 2.2, NULL values can be handled with the standard handleInvalid Param:

    import org.apache.spark.ml.feature.StringIndexer
    
    val df = Seq((0, "foo"), (1, "bar"), (2, null)).toDF("id", "label")
    val indexer = new StringIndexer().setInputCol("label")
    

    By default (error) it will throw an exception:

    indexer.fit(df).transform(df).show
    
    org.apache.spark.SparkException: Failed to execute user defined function($anonfun$9: (string) => double)
      at org.apache.spark.sql.catalyst.expressions.ScalaUDF.eval(ScalaUDF.scala:1066)
    ...
    Caused by: org.apache.spark.SparkException: StringIndexer encountered NULL value. To handle or skip NULLS, try setting StringIndexer.handleInvalid.
      at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$9.apply(StringIndexer.scala:251)
    ...
    

    but when configured to skip, it drops the rows that contain NULLs:

    indexer.setHandleInvalid("skip").fit(df).transform(df).show
    
    +---+-----+---------------------------+
    | id|label|strIdx_46a78166054c__output|
    +---+-----+---------------------------+
    |  0|  foo|                        0.0|
    |  1|  bar|                        1.0|
    +---+-----+---------------------------+
    

    or to keep them, in which case NULLs get an index of their own:

    indexer.setHandleInvalid("keep").fit(df).transform(df).show
    
    +---+-----+---------------------------+
    | id|label|strIdx_46a78166054c__output|
    +---+-----+---------------------------+
    |  0|  foo|                        0.0|
    |  1|  bar|                        1.0|
    |  2| null|                        2.0|
    +---+-----+---------------------------+
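
    Applied to the multi-column loop from the question, the same Param can be set on every indexer and the indexers chained in a Pipeline. A minimal sketch, assuming df and cols are defined as in the question (the "_"-suffixed output column names are just illustrative):

    import org.apache.spark.ml.{Pipeline, PipelineStage}
    import org.apache.spark.ml.feature.StringIndexer
    
    // One StringIndexer per categorical column; "keep" gives NULLs
    // an index of their own instead of throwing an exception.
    val indexers: Array[PipelineStage] = cols.map { c =>
      new StringIndexer()
        .setInputCol(c)
        .setOutputCol(c + "_")  // hypothetical output column name
        .setHandleInvalid("keep")
    }.toArray
    
    val indexed = new Pipeline().setStages(indexers).fit(df).transform(df)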
    

  • Spark < 2.2

    As of now (Spark 1.6.1) this problem hasn't been resolved, but there is an open JIRA (SPARK-11569). Unfortunately, it is not easy to settle on an acceptable behavior: SQL NULL represents a missing / unknown value, so indexing it is somewhat meaningless.

    Probably the best you can do is to use the DataFrame NA functions and either drop the affected rows:

    df.na.drop("column_to_be_indexed" :: Nil)
    

    or fill:

    df.na.fill("__HEREBE_DRAGONS__", "column_to_be_indexed" :: Nil)
    

    before you use the indexer.
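
    Putting the pieces together, a minimal sketch of the fill-then-index workaround, with the column and placeholder names taken from the snippets above (the placeholder simply becomes one more category, and the output column name is illustrative):

    import org.apache.spark.ml.feature.StringIndexer
    
    // Replace NULLs with a placeholder category, then index as usual.
    val filled = df.na.fill("__HEREBE_DRAGONS__", Seq("column_to_be_indexed"))
    
    val indexed = new StringIndexer()
      .setInputCol("column_to_be_indexed")
      .setOutputCol("column_to_be_indexed_")  // hypothetical output column name
      .fit(filled)
      .transform(filled)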