
Spark: Split is not a member of org.apache.spark.sql.Row


Below is my code from Spark 1.6. I am trying to convert it to Spark 2.3, but I am getting an error when using split.

Spark 1.6 code:

val file = sc.textFile(args(0))
val mapping = file.map(_.split('\t')).map(a => a(1))
mapping.saveAsTextFile(args(1))

Spark 2.3 code:

val file = spark.read.text(args(0))
val mapping = file.map(_.split('\t')).map(a => a(1)) // Getting error here
mapping.write.text(args(1))

Error Message:

value split is not a member of org.apache.spark.sql.Row

Solution

  • Unlike sc.textFile, which returns an RDD[String], spark.read.text returns a DataFrame, which is essentially an RDD[Row]. You can map over it with a partial function that extracts the String from each Row, as in the following example:

    // /path/to/textfile:
    // a    b   c
    // d    e   f
    
    import org.apache.spark.sql.Row
    import spark.implicits._ // for the Dataset encoders (pre-imported in spark-shell)
    
    val df = spark.read.text("/path/to/textfile")
    
    df.map{ case Row(s: String) => s.split("\\t") }.map(_(1)).show
    // +-----+
    // |value|
    // +-----+
    // |    b|
    // |    e|
    // +-----+
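
  • Alternatively, spark.read.textFile (available since Spark 2.0) returns a Dataset[String] rather than a DataFrame, so the original split logic carries over almost unchanged. Below is a minimal sketch of the full pipeline on that basis, assuming tab-separated input with at least two fields per line (the app name is arbitrary):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.appName("SplitExample").getOrCreate()
    import spark.implicits._

    // Dataset[String], so split can be called on each line directly
    val file = spark.read.textFile(args(0))
    val mapping = file.map(_.split('\t')).map(a => a(1))
    mapping.write.text(args(1))

    The result is a single string column, so the write.text call from the question works as-is.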