Tags: scala, apache-spark, aggregate-functions, user-defined-functions, kernel-density

Re-using SparkContext object in UDAF


I am trying to implement an aggregated version of org.apache.spark.mllib.stat.KernelDensity to estimate the Probability Density Function (PDF) of multiple distributions concurrently.

The idea is to have a data frame with, say, two columns: one for the name of the group and a second one containing univariate observation values (there will be thousands of groups, hence the need for concurrent processing).

What I have in mind is something like this (the column pdf would contain an Array with the values of the PDF):

> val getPdf = new PDFGetter(sparkContext)
> df_with_group_and_observation_columns.groupBy("group").agg(getPdf(col("observations")).as("pdf")).show()

I have implemented a User-Defined Aggregate Function (UDAF) to (hopefully) do this. I have two issues with the current implementation and am seeking your advice:

  1. Apparently it is not possible to re-use the sparkContext object within the evaluate() function of a UDAF. I currently get a java.io.NotSerializableException as soon as the UDAF attempts to access the sparkContext object (see details below). ==> Is this normal? Any ideas on how this can be remedied?
  2. The current implementation of the UDAF gets all the observations of each group from the (distributed) data frame, puts them into a Seq() (a WrappedArray), and then attempts to run parallelize() on that Seq() to re-distribute the observations. This seems quite inefficient. ==> Is there a way for the UDAF to hand a "sub-RDD" of each group directly to each of its evaluate() calls at runtime?

Below is a thorough example of what I have so far (don't mind the String return value instead of an Array; I just want to see if I can get the Kernel Density to work in a UDAF for now):

Spark context available as 'sc' (master = local[*], app id = local-1514639826952).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

scala> sc.toString
res27: String = org.apache.spark.SparkContext@2a96ed1b

scala> val df = Seq(("a", 1.0), ("a", 1.5), ("a", 2.0), ("a", 1.8), ("a", 1.1), ("a", 1.2), ("a", 1.9), ("a", 1.3), ("a", 1.2), ("a", 1.9), ("b", 10.0), ("b", 20.0), ("b", 11.0), ("b", 18.0), ("b", 13.0), ("b", 16.0), ("b", 15.0), ("b", 12.0), ("b", 18.0), ("b", 11.0)).toDF("group", "val")

scala> val getPdf = new PDFGetter(sc)

scala> df.groupBy("group").agg(getPdf(col("val")).as("pdf")).show()
org.apache.spark.SparkException: Task not serializable
...
Caused by: java.io.NotSerializableException: org.apache.spark.SparkContext
Serialization stack:
    - object not serializable (class: org.apache.spark.SparkContext, value: org.apache.spark.SparkContext@2a96ed1b)
    - field (class: PDFGetter, name: sc, type: class org.apache.spark.SparkContext)
    - object (class PDFGetter, PDFGetter@38649ca3)
...

See the definition of the UDAF below (which otherwise works well):

import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.Row
import scala.collection.mutable.WrappedArray
import scala.collection.mutable.{ListBuffer, ArrayBuffer}
import org.apache.spark.mllib.stat.KernelDensity


class PDFGetter(var sc: org.apache.spark.SparkContext) extends UserDefinedAggregateFunction {

  // Define the schema of the input data, 
  // intermediate processing (deals with each individual observation within each group) 
  // and return type of the UDAF
  override def inputSchema: StructType = StructType(StructField("result_dbl", DoubleType) :: Nil)

  override def bufferSchema: StructType = StructType(StructField("observations", ArrayType(DoubleType)) :: Nil)

  override def dataType: DataType = StringType


  // The UDAF will always return the same results
  // given the same inputs
  override def deterministic: Boolean = true


  // How to initialize the intermediate processing buffer
  // for each group
  override def initialize(buffer: MutableAggregationBuffer): Unit = {
    buffer(0) = Array.emptyDoubleArray
  }

  // What to do with each new row within the group
  override def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
    var values = new ListBuffer[Double]()
    values.appendAll(buffer.getAs[List[Double]](0))
    val newValue = input.getDouble(0)
    values.append(newValue)
    buffer.update(0, values)
  }

  // How to merge 2 buffers located on 2 separate
  // executor hosts or JVMs
  override def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
    var values = new ListBuffer[Double]()
    values ++= buffer1.getAs[List[Double]](0)
    values ++= buffer2.getAs[List[Double]](0)
    buffer1.update(0, values)
  }


  // What to do with the data once intermediate processing
  // is completed
  override def evaluate(buffer: Row): String = {
    // Get the observations
    val observations = buffer.getSeq[Double](0)     // Or val observations = buffer.getAs[Seq[Double]](0)   // Returns a WrappedArray either way
    //observations.toString

    // Calculate the bandwidth
    val nObs = observations.size.toDouble
    val mean = observations.sum / nObs
    val stdDev = Math.sqrt(observations.map(x => Math.pow(x - mean, 2.0) ).sum / nObs)
    val bandwidth = stdDev / 2.5
    //bandwidth.toString


    // Kernel Density
    // From the example at http://spark.apache.org/docs/latest/api/java/index.html#org.apache.spark.sql.Dataset
    // val sample = sc.parallelize(Seq(0.0, 1.0, 4.0, 4.0))
    // val kd = new KernelDensity()
    //      .setSample(sample)
    //        .setBandwidth(3.0)
    // val densities = kd.estimate(Array(-1.0, 2.0, 5.0))

    // Get the observations as an rdd (required by KernelDensity.setSample)
    sc.toString     // <====   This fails
    val observationsRDD = sc.parallelize(observations)

    // Create a new Kernel density object
    // for these observations
    val kd = new KernelDensity()
    kd.setSample(observationsRDD)
    kd.setBandwidth(bandwidth)

    // Create the points at which
    // the PDF will be estimated
    val minObs = observations.min
    val maxObs = observations.max
    val nPoints = Math.min(nObs/2, 1000.0).toInt
    val increment = (maxObs - minObs) / nPoints.toDouble
    val points = new Array[Double](nPoints)
    for( i <- 0 until nPoints){
      points(i) = minObs + i.toDouble * increment;
    }

    // Estimate the PDF and return
    val pdf = kd.estimate(points)
    pdf.toString
  }
}

My apologies for the long post, but this one feels quite tricky, so I figured having all the details would be useful to anyone willing to help.

Thanks!


Solution

  • It is not going to work. You cannot:

    • Access SparkContext, SparkSession, or SQLContext on an executor (which is where evaluate is called).
    • Access or create a distributed data structure (such as an RDD) on an executor.

    To answer the likely follow-up question: there is no workaround. This is a core design decision, fundamental to Spark's architecture.
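
That said, if the goal is simply one PDF per group, the distributed KernelDensity is not strictly needed inside the aggregation: by the time evaluate runs, each group's observations already sit in local memory on a single executor, so the density can be computed with plain local Scala code. Below is a minimal sketch of that idea (my own illustration, not the question's code and not org.apache.spark.mllib.stat.KernelDensity): a local Gaussian kernel density estimate wired up with Dataset.groupByKey/mapGroups instead of a UDAF, so no SparkContext is ever touched on an executor. The name LocalGaussianKde is made up, and the bandwidth heuristic and the up-to-1000-point evaluation grid simply mirror the choices made in the question.

import org.apache.spark.sql.SparkSession

// Plain local Gaussian kernel density estimate; no RDD involved.
object LocalGaussianKde {
  def estimate(observations: Seq[Double],
               bandwidth: Double,
               points: Array[Double]): Array[Double] = {
    val n = observations.size.toDouble
    val norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.Pi))
    points.map { x =>
      norm * observations.map { xi =>
        val u = (x - xi) / bandwidth
        math.exp(-0.5 * u * u)
      }.sum
    }
  }
}

// In spark-shell, `spark` and its implicits already exist.
val spark = SparkSession.builder.getOrCreate()
import spark.implicits._

// `df` is the data frame from the question, with columns "group" and "val".
val pdfs = df
  .groupByKey(row => row.getAs[String]("group"))
  .mapGroups { (group, rows) =>
    val obs = rows.map(_.getAs[Double]("val")).toVector

    // Same bandwidth heuristic as in the question (stdDev / 2.5);
    // degenerate groups (a single or constant value) are not handled here.
    val mean = obs.sum / obs.size
    val stdDev = math.sqrt(obs.map(x => math.pow(x - mean, 2.0)).sum / obs.size)
    val bandwidth = stdDev / 2.5

    // Evaluation grid between the group's min and max, capped at 1000 points.
    val nPoints = math.min(obs.size / 2, 1000).max(1)
    val increment = (obs.max - obs.min) / nPoints
    val points = Array.tabulate(nPoints)(i => obs.min + i * increment)

    (group, LocalGaussianKde.estimate(obs, bandwidth, points))
  }
  .toDF("group", "pdf")

pdfs.show(truncate = false)

This keeps all per-group work local to the executor that owns the group, which is exactly what the UDAF buffer was already doing implicitly when it collected the observations into a WrappedArray.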