Tags: python, sorting, apache-spark, pyspark, rdd

PySpark takeOrdered Multiple Fields (Ascending and Descending)


The takeOrdered method of pyspark.RDD returns the N smallest elements of an RDD, sorted in ascending order or as specified by the optional key function (see pyspark.RDD.takeOrdered). The documentation shows the following example with a single key:

>>> sc.parallelize([10, 1, 2, 9, 3, 4, 5, 6, 7], 2).takeOrdered(6, key=lambda x: -x)
[10, 9, 7, 6, 5, 4]

Is it also possible to define multiple keys, e.g. x, y, z, for data that has 3 columns?

The keys should use different sort orders, e.g. x = asc, y = desc, z = asc. That means if the first values x of two rows are equal, their second values y should be compared in descending order.


Solution

  • For numeric values you can simply negate the components that should be sorted in descending order:

    n = 1
    rdd = sc.parallelize([
        (-1, 99, 1), (-1, -99, -1), (5, 3, 8), (-1, 99, -1)
    ])
    
    # x ascending, y descending (negated), z ascending
    rdd.takeOrdered(n, lambda x: (x[0], -x[1], x[2]))
    # [(-1, 99, -1)]
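
    This works because Python compares tuples lexicographically, element by element, so negating a numeric component flips its direction inside the composite key. A minimal sketch of that behaviour in plain Python, using the same sample rows (no Spark required):

    rows = [(-1, 99, 1), (-1, -99, -1), (5, 3, 8), (-1, 99, -1)]
    # x ascending, y descending, z ascending
    sorted(rows, key=lambda r: (r[0], -r[1], r[2]))
    # [(-1, 99, -1), (-1, 99, 1), (-1, -99, -1), (5, 3, 8)]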
    

    For other objects you can define a record type and implement your own rich comparison methods:

    class XYZ(object):
        __slots__ = ["x", "y", "z"]
    
        def __init__(self, x, y, z):
            self.x, self.y, self.z = x, y, z
    
        def __eq__(self, other):
            if not isinstance(other, XYZ):
                return False
            return self.x == other.x and self.y == other.y and self.z == other.z
    
        def __lt__(self, other):
            if not isinstance(other, XYZ):
                raise TypeError(
                    "'<' not supported between instances of 'XYZ' and '{0}'".format(
                        type(other).__name__
                    )
                )
            if self.x == other.x:
                if self.y == other.y:
                    return self.z < other.z
                else:
                    return self.y > other.y
            else:
                return self.x < other.x
    
        def __repr__(self):
            return "XYZ({}, {}, {})".format(self.x, self.y, self.z)
    
        @classmethod
        def from_tuple(cls, xyz):
            x, y, z = xyz
            return cls(x, y, z)
    

    and then:

    # the class must live in its own importable module (here xyz.py)
    # so that Spark executors can unpickle it
    from xyz import XYZ
    
    rdd.map(XYZ.from_tuple).takeOrdered(n)
    # [XYZ(-1, 99, -1)]
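
    Since takeOrdered only needs __eq__ and __lt__ to compare records, you can sanity-check the ordering locally with plain sorted before running it on a cluster. A quick check, reusing the sample tuples from above:

    sorted(map(XYZ.from_tuple, [(-1, 99, 1), (-1, -99, -1), (-1, 99, -1)]))
    # [XYZ(-1, 99, -1), XYZ(-1, 99, 1), XYZ(-1, -99, -1)]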
    

    In practice, just use Spark SQL via the DataFrame API:

    from pyspark.sql.functions import asc, desc
    
    rdd.toDF(["x", "y", "z"]).orderBy(asc("x"), desc("y"), asc("z")).take(n)
    # [Row(x=-1, y=99, z=-1)]
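
    If you prefer literal SQL, the same ordering can be expressed through a temporary view. A sketch, assuming an active SparkSession named spark:

    rdd.toDF(["x", "y", "z"]).createOrReplaceTempView("xyz")
    # LIMIT 1 matches n = 1 from the examples above
    spark.sql("SELECT * FROM xyz ORDER BY x ASC, y DESC, z ASC LIMIT 1").collect()
    # [Row(x=-1, y=99, z=-1)]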