Tags: scala, apache-spark, rdd, apache-spark-dataset

Convert Spark RDD to Dataset


I am trying to run k-means clustering after some text mining, but I can't figure out how to convert the result of ParseWikipedia.termDocumentMatrix into the Dataset required by the KMeans.fit method:

scala> val (termDocMatrix, termIds, docIds, idfs) = ParseWikipedia.termDocumentMatrix(lemmas, stopWords, numTerms, sc)
scala> val kmeans = new KMeans().setK(5).setMaxIter(200).setSeed(1L)
scala> termDocMatrix.take(1)
res24: Array[org.apache.spark.mllib.linalg.Vector] = Array((1000,[32,166,200,223,577,645,685,873,926],[0.18132966949934762,0.3777537726516676,0.3178848913768969,0.43380819546465704,0.30604090845847254,0.46007361524957147,0.2076406414508386,0.2995665853335863,0.1742843713808876]))

scala> val modele = kmeans.fit(termDocMatrix)
<console>:66: error: type mismatch;
 found   : org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector]
 required: org.apache.spark.sql.Dataset[_]
       val modele = kmeans.fit(termDocMatrix)

I tried some conversions but I always get errors:

scala> import spark.implicits._
import spark.implicits._

scala> val ss=org.apache.spark.sql.SparkSession.builder().getOrCreate()
scala> ss.createDataset(termDocMatrix)
<console>:67: error: Unable to find encoder for type stored in a Dataset.  Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._  Support for serializing other types will be added in future releases.
   ss.createDataset(termDocMatrix)

and others (where errors were expected, since these don't yield Datasets):

val termDocRows = termDocMatrix.map(org.apache.spark.sql.Row(_))
val schemaVecteurs = StructType(Seq(StructField("features", VectorType, true)))
val termDocVectors = spark.createDataFrame(termDocRows, schemaVecteurs)
val termDocMatrixDense = termDocMatrix.map(e => e.toDense)

(and tried kmeans.fit on each of them). The only one that gives a different error is termDocVectors:

val modele = kmeans.fit(termDocVectors)
18/01/05 01:14:52 ERROR Executor: Exception in task 0.0 in stage 560.0 (TID 1682)
java.lang.RuntimeException: Error while encoding: java.lang.RuntimeException: org.apache.spark.mllib.linalg.SparseVector is not a valid external type for schema of vector
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else newInstance(class org.apache.spark.ml.linalg.VectorUDT).serialize AS features#75
    at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:290)
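
For reference, the encoding error comes from a schema/data mismatch: VectorType here presumably comes from org.apache.spark.ml.linalg.SQLDataTypes and declares ml vectors, while the rows wrap mllib vectors. A minimal sketch of a consistent pairing, assuming Spark 2.0+ (where mllib vectors expose an asML conversion):

import org.apache.spark.ml.linalg.SQLDataTypes.VectorType
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructField, StructType}

// sketch: convert each mllib vector to its ml counterpart first,
// so the row data matches the ml VectorType declared in the schema
val termDocRows = termDocMatrix.map(v => Row(v.asML))
val schemaVecteurs = StructType(Seq(StructField("features", VectorType, true)))
val termDocVectors = spark.createDataFrame(termDocRows, schemaVecteurs)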

Does anyone have a clue? Thanks for your help.

In addition, after testing the suggestions provided:

Where can I apply toDS?

scala> termDocMatrix.toDS
<console>:69: error: value toDS is not a member of org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector]
   termDocMatrix.toDS

With Tuple1...
I still get an error (a different one, this time):

val ds = spark.createDataset(termDocMatrix.map(Tuple1.apply)).withColumnRenamed("_1", "features")
ds: org.apache.spark.sql.DataFrame = [features: vector]
scala> val modele = kmeans.fit(ds)
java.lang.IllegalArgumentException: requirement failed: Column features must be of type org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7 but was actually org.apache.spark.mllib.linalg.VectorUDT@f71b0bce.
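
A minimal sketch of bridging that mismatch, assuming Spark 2.0+ (mllib vectors expose an asML conversion to their ml counterparts):

// sketch: convert each mllib vector to an ml vector before building the Dataset,
// so the features column carries the ml VectorUDT that KMeans expects
val dsML = spark
  .createDataset(termDocMatrix.map(v => Tuple1(v.asML)))
  .withColumnRenamed("_1", "features")
val modele = kmeans.fit(dsML)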

The initial problem seems to be solved. Now I'm facing a new one: I compute an SVD from an mllib.RowMatrix, and KMeans seems to expect ml vectors. I just have to find out how to compute an SVD with the ml package...
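
As of Spark 2.x, SVD is only exposed through mllib's distributed matrices, so one workaround sketch is to stay in mllib for the factorization and convert the projected rows afterwards (RowMatrix and computeSVD are existing mllib APIs; k = 100 and the variable names are placeholders):

import org.apache.spark.mllib.linalg.distributed.RowMatrix

// sketch: factorize with mllib's RowMatrix, then convert the rows of U
// to ml vectors before handing them to the ml KMeans
val mat = new RowMatrix(termDocMatrix)
val svd = mat.computeSVD(100, computeU = true)  // k = 100 is an arbitrary choice
val dsSvd = spark
  .createDataset(svd.U.rows.map(v => Tuple1(v.asML)))
  .withColumnRenamed("_1", "features")
val modele = kmeans.fit(dsSvd)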


Solution

  • Spark's Dataset API doesn't come with encoders for org.apache.spark.mllib.linalg.Vector. That said, you can try converting an RDD of MLlib Vectors to a Dataset by first mapping the Vectors into Tuple1s, as in the following example, to see if your ML model accepts it:

    import org.apache.spark.mllib.linalg.{Vector, Vectors}
    
    val termDocMatrix = sc.parallelize(Array(
      Vectors.sparse(
        1000, Array(32, 166, 200, 223, 577, 645, 685, 873, 926), Array(
          0.18132966949934762, 0.3777537726516676, 0.3178848913768969,
          0.43380819546465704, 0.30604090845847254, 0.46007361524957147,
          0.2076406414508386, 0.2995665853335863, 0.1742843713808876
      )),
      Vectors.sparse(
        1000, Array(74, 154, 343, 405, 446, 538, 566, 612 ,732), Array(
          0.12128098267647237, 0.2499114848264329, 0.1626128536458679,
          0.12167467201712565, 0.2790928578869498, 0.24904429178306794,
          0.10039172907499895, 0.22803472531961744, 0.36408630055671115
      ))
    ))
    // termDocMatrix: org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector] = ...
    
    val ds = spark.createDataset(termDocMatrix.map(Tuple1.apply)).
      withColumnRenamed("_1", "features")
    // ds: org.apache.spark.sql.DataFrame = [features: vector]
    
    ds.show
    // +--------------------+
    // |            features|
    // +--------------------+
    // |(1000,[32,166,200...|
    // |(1000,[74,154,343...|
    // +--------------------+
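
    Note that the resulting features column still holds mllib vectors, so the ml KMeans may reject it with the VectorUDT mismatch shown in the question's edit; in that case, mapping each vector through asML before building the Dataset, as sketched above, should align the types.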