Tags: scala, apache-spark, apache-spark-sql, rdd, apache-spark-dataset

Spark Dataset aggregation similar to RDD aggregate(zero)(accum, combiner)


RDD has a very useful method, aggregate, that lets you accumulate results starting from some zero value and combine them across partitions. Is there any way to do that with Dataset[T]? As far as I can see from the Scaladoc, there is nothing capable of doing that. Even the reduce method only allows binary operations with T as both arguments. Any reason why? And is there anything capable of doing the same?
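For reference, this is the RDD pattern I mean (a minimal sketch; sc is an assumed SparkContext, and the sample data is made up):

```scala
val rdd = sc.parallelize(Seq("spark", "dataset", "aggregate"))

// aggregate(zero)(seqOp, combOp): fold within each partition, then merge across partitions.
val totalLength = rdd.aggregate(0)(
  (acc, s) => acc + s.length, // seqOp: accumulate within a partition
  (a, b) => a + b             // combOp: combine partition results
)
```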

Thanks a lot!

VK


Solution

  • There are two different classes which can be used to achieve aggregate-like behavior in the Dataset API:

    • UserDefinedAggregateFunction, which operates on untyped Rows
    • Aggregator, which is strongly typed and works directly with Dataset[T]

    Both provide an additional finalization method (evaluate and finish respectively) which is used to generate the final result, and both can be used for global and by-key aggregations. A minimal Aggregator sketch follows this list.
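The Aggregator approach maps most directly onto aggregate(zero)(seqOp, combOp): zero plays the role of the zero value, reduce the role of seqOp, and merge the role of combOp. Below is a minimal sketch; the SumOfLengths object and the sample data are made up for illustration.

```scala
import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.expressions.Aggregator

// Hypothetical example: sum the lengths of strings in a Dataset[String].
object SumOfLengths extends Aggregator[String, Long, Long] {
  // The zero value, as in RDD.aggregate's first argument.
  def zero: Long = 0L
  // Like seqOp: fold one input element into the running buffer.
  def reduce(buffer: Long, input: String): Long = buffer + input.length
  // Like combOp: merge partial buffers from different partitions.
  def merge(b1: Long, b2: Long): Long = b1 + b2
  // The finalization step mentioned above.
  def finish(reduction: Long): Long = reduction
  def bufferEncoder: Encoder[Long] = Encoders.scalaLong
  def outputEncoder: Encoder[Long] = Encoders.scalaLong
}
```

It can then be used for both global and by-key aggregation (spark is an assumed SparkSession):

```scala
import spark.implicits._

val ds = Seq("spark", "dataset", "aggregate").toDS()

// Global aggregation over the whole Dataset.
ds.select(SumOfLengths.toColumn).show()

// By-key aggregation, here keyed by the first character (as a String).
ds.groupByKey(_.take(1)).agg(SumOfLengths.toColumn).show()
```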