Tags: scala, apache-spark, rdd

Spark RDD - using sortBy with multiple column values


After grouping my dataset, it looks like this:

(AD_PRES,1)
(AD_VP,2)
(FI_ACCOUNT,5)
(FI_MGR,1)
(IT_PROG,5)
(PU_CLERK,5)
(PU_MAN,1)
(SA_MAN,5)
(ST_CLERK,20)
(ST_MAN,5)

Here I want to sort by key in descending order and value in ascending order, so I tried the following code.

 emp_data.map(s => (s.JOB_ID, s.FIRST_NAME.concat(",").concat(s.LAST_NAME))).groupByKey().map({
    case (x, y) => (x, y.toList.size)
  }).sortBy(s => (s._1, s._2))(Ordering.Tuple2(Ordering.String.reverse, Ordering.Int.reverse))

It throws the exception below.

not enough arguments for expression of type (implicit ord: Ordering[(String, Int)], implicit ctag: scala.reflect.ClassTag[(String, Int)])org.apache.spark.rdd.RDD[(String, Int)]. Unspecified value parameter ctag.

Solution

  • RDD.sortBy takes both ordering and class tags as implicit arguments.

    def sortBy[K](f: (T) ⇒ K, ascending: Boolean = true, numPartitions: Int = this.partitions.length)(implicit ord: Ordering[K], ctag: ClassTag[K]): RDD[T] 
    

    You cannot just provide a subset of these and expect things to work. Instead, you can provide a block-local implicit ordering:

    { 
       // block-local Ordering picked up by sortBy's implicit ord parameter;
       // the ClassTag is still resolved automatically
       implicit val ord = Ordering.Tuple2[String, Int](Ordering.String.reverse, Ordering.Int.reverse)
       emp_data.map(s => (s.JOB_ID, s.FIRST_NAME.concat(",").concat(s.LAST_NAME))).groupByKey().map({
         case (x, y) => (x, y.toList.size)   // count of employees per JOB_ID
       }).sortBy(s => (s._1, s._2))
    }
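
    Alternatively, the implicit parameter list can be filled in by hand; a minimal sketch (reusing the same emp_data pipeline) that passes both the Ordering and the ClassTag explicitly:

    emp_data.map(s => (s.JOB_ID, s.FIRST_NAME.concat(",").concat(s.LAST_NAME))).groupByKey().map({
      case (x, y) => (x, y.toList.size)
    }).sortBy(s => (s._1, s._2))(
      Ordering.Tuple2(Ordering.String.reverse, Ordering.Int.reverse),  // ord
      scala.reflect.classTag[(String, Int)]                            // ctag
    )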
    

    though you should really use reduceByKey rather than groupByKey in such a case, as sketched below.
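
    A minimal sketch of that suggestion, keeping the same emp_data and the block-local ordering: counting with reduceByKey merges partial counts on the map side instead of shuffling every name for a key.

    {
      implicit val ord = Ordering.Tuple2[String, Int](Ordering.String.reverse, Ordering.Int.reverse)
      emp_data
        .map(s => (s.JOB_ID, 1))      // one record per employee, keyed by job
        .reduceByKey(_ + _)           // partial counts combined before the shuffle
        .sortBy(s => (s._1, s._2))    // sorted with the ordering defined above
    }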