Tags: python, apache-spark, rdd

How to flatten nested lists in PySpark?


I have an RDD with a structure like:

rdd = [[[1],[2],[3]], [[4],[5]], [[6]], [[7],[8],[9],[10]]]

and I want it to become:

rdd = [1,2,3,4,5,6,7,8,9,10]

How do I write a map or reduce function to make it work?
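(For a runnable setup, one way to build such an RDD locally is sketched below; the SparkContext construction is illustrative and assumes no existing Spark session:)

from pyspark import SparkContext

# Hypothetical local context for experimentation
sc = SparkContext("local", "flatten-example")

# Each RDD element is a list of single-element lists
rdd = sc.parallelize([[[1], [2], [3]], [[4], [5]], [[6]], [[7], [8], [9], [10]]])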


Solution

  • You can, for example, use flatMap with a list comprehension (a runnable check of both variants follows below):

    # xs is one record, e.g. [[1], [2], [3]]; x[0] unwraps each single-element list
    rdd.flatMap(lambda xs: [x[0] for x in xs])
    

    or, to make it a little more general:

    from itertools import chain
    
    # chain(*xs) concatenates the inner lists, regardless of how many elements each holds
    rdd.flatMap(lambda xs: chain(*xs)).collect()
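
    As a quick sanity check with the hypothetical local setup from the question, both variants return the flattened list; chain.from_iterable(xs) is an equivalent spelling of chain(*xs) that avoids argument unpacking:

    from itertools import chain

    # List-comprehension variant: relies on every inner list holding exactly one element
    print(rdd.flatMap(lambda xs: [x[0] for x in xs]).collect())
    # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

    # chain variant: works for inner lists of any length
    print(rdd.flatMap(lambda xs: chain.from_iterable(xs)).collect())
    # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]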