
Create a DataFrame in Spark Streaming


I've connected a Kafka stream to Spark, and I've also trained an Apache Spark MLlib model to make predictions based on streamed text. My problem is that to get a prediction I need to pass a DataFrame.

//kafka stream
val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](topics, kafkaParams)
)

//load the MLlib model
val model = PipelineModel.load(modelPath)

stream.foreachRDD { rdd =>
  rdd.foreach { record =>
    // to get a prediction I need to pass a DataFrame
    val toPredict = spark.createDataFrame(Seq(
      (1L, record.value())
    )).toDF("id", "review")
    val prediction = model.transform(toPredict)
  }
}

My problem is that Spark Streaming doesn't let me create a DataFrame inside rdd.foreach like this. Is there any way to do that? Can I use a case class or a struct?


Solution

  • It's possible to create a DataFrame or Dataset from an RDD just as in core Spark; you only need to apply a schema. The snippet in the question fails because spark.createDataFrame is called inside rdd.foreach, which runs on the executors, where the SparkSession isn't available. If you move the conversion into foreachRDD itself, it executes on the driver, and the resulting DataFrame can then be passed to the ML pipeline.

    // we use a schema in the form of a case class
    case class MyStructure(field: Type, ...)

    // and we implement our custom transformation from String to our structure
    object MyStructure {
        def parse(str: String): Option[MyStructure] = ...
    }

    val stream = KafkaUtils.createDirectStream...

    // give the stream a schema using the case class
    val structStream = stream.flatMap(cr => MyStructure.parse(cr.value))

    structStream.foreachRDD { rdd =>
        import sparkSession.implicits._
        val df = rdd.toDF()
        val prediction = model.transform(df)
        // do something with the prediction
    }
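
For completeness, here is one way the pieces above might fit together for the question's (id, review) schema, using the same DStream API. This is an untested sketch: the topic name, Kafka settings, and model path are placeholders, the Review case class and its synthetic id are assumptions introduced for illustration, and the "prediction" column assumes the pipeline is a classifier.

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.ml.PipelineModel
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

    // schema matching the columns the pipeline was trained on
    case class Review(id: Long, review: String)

    object StreamingPrediction {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("streaming-prediction").getOrCreate()
        val ssc = new StreamingContext(spark.sparkContext, Seconds(5))

        // placeholder Kafka settings; adjust to your cluster
        val kafkaParams = Map[String, Object](
          "bootstrap.servers" -> "localhost:9092",
          "key.deserializer" -> classOf[StringDeserializer],
          "value.deserializer" -> classOf[StringDeserializer],
          "group.id" -> "review-predictions"
        )
        val topics = Array("reviews") // placeholder topic

        val stream = KafkaUtils.createDirectStream[String, String](
          ssc, PreferConsistent, Subscribe[String, String](topics, kafkaParams)
        )

        // the model is loaded once, on the driver
        val model = PipelineModel.load("/path/to/model") // placeholder path

        // map every Kafka record into the case class; the id is synthetic
        val reviews = stream.map(cr => Review(0L, cr.value()))

        reviews.foreachRDD { rdd =>
          if (!rdd.isEmpty()) {
            import spark.implicits._
            // toDF() runs on the driver, so building a DataFrame is fine here
            val df = rdd.toDF()
            val prediction = model.transform(df)
            // assuming a classification pipeline that adds a "prediction" column
            prediction.select("review", "prediction").show()
          }
        }

        ssc.start()
        ssc.awaitTermination()
      }
    }

Checking rdd.isEmpty() first avoids running the pipeline on the empty micro-batches the stream produces when no records arrive.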