
spark - scala: not a member of org.apache.spark.sql.Row


I am trying to convert a DataFrame to an RDD and then perform the operations below to return tuples:

df.rdd.map { t=>
 (t._2 + "_" + t._3 , t)
}.take(5)

Then I got the error below. Anyone have any ideas? Thanks!

<console>:37: error: value _2 is not a member of org.apache.spark.sql.Row
               (t._2 + "_" + t._3 , t)
                  ^

Solution

  • When you convert a DataFrame to an RDD, you get an RDD[Row], so when you use map, your function receives a Row as its parameter. Therefore, you must use the Row methods to access its members (note that indexing starts at 0):

    import org.apache.spark.sql.Row

    df.rdd.map { row: Row =>
      (row.getString(1) + "_" + row.getString(2), row)
    }.take(5)
    

    You can find more examples and all of the methods available on Row objects in the Spark Scaladoc.
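
    If your DataFrame has named columns, accessing fields by name can be less brittle than positional indexes. A minimal sketch using Row.getAs, assuming the columns are named col2 and col3 as in the edit below:

    df.rdd.map { row: Row =>
      // getAs looks a field up by name instead of by position
      (row.getAs[String]("col2") + "_" + row.getAs[String]("col3"), row)
    }.take(5)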

    Edit: I don't know why you need this operation, but if you are simply concatenating String columns of a DataFrame, you may consider the following option:

    import org.apache.spark.sql.functions._

    // add a "concat" column built from col2, "_", and col3
    val newDF = df.withColumn("concat", concat(df("col2"), lit("_"), df("col3")))
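
    Note that concat returns null if any of its inputs is null. If that matters for your data, concat_ws (also in org.apache.spark.sql.functions) takes the separator as its first argument and skips null values; a sketch with the same assumed column names:

    // concat_ws joins the non-null values of col2 and col3 with "_"
    val newDF2 = df.withColumn("concat", concat_ws("_", df("col2"), df("col3")))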