Tags: python-3.x, pyspark, rdd

Filter in PySpark/Python RDD


I have a list like this:

["Dhoni 35 WC 785623", "Sachin 40 Batsman 4500", "Dravid 45 Batsman 50000", "Kumble 41 Bowler 456431", "Srinath 41 Bowler 65465"]

After applying a transformation, I want a result like this:

["Dhoni WC", "Sachin Batsman", "Dravid Batsman", "Kumble Bowler", "Srinath Bowler"]

I tried it this way:

m = sc.parallelize(["Dhoni 35 WC 785623","Sachin 40 Batsman 4500","Dravid 45 Batsman 50000","Kumble 41 Bowler 456431","Srinath 41 Bowler 65465"])

n = m.map(lambda k:k.split(' '))

o = n.map(lambda s: s[0])
o.collect()

['Dhoni', 'Sachin', 'Dravid', 'Kumble', 'Srinath']

q = n.map(lambda s: s[2])

q.collect()

['WC', 'Batsman', 'Batsman', 'Bowler', 'Bowler']


Solution

  • Provided all your list items follow the same format, one way to achieve this is with map.

    rdd = sc.parallelize(["Dhoni 35 WC 785623","Sachin 40 Batsman 4500","Dravid 45 Batsman 50000","Kumble 41 Bowler 456431","Srinath 41 Bowler 65465"])
    
    rdd.map(lambda x: x.split(' ')).map(lambda s: s[0] + ' ' + s[2]).collect()
    

    Output:

    ['Dhoni WC', 'Sachin Batsman', 'Dravid Batsman', 'Kumble Bowler', 'Srinath Bowler']
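If Spark isn't handy, the per-record logic the map applies can be checked with plain Python first. A minimal sketch of the split-and-join step (`pick_name_role` is a hypothetical helper name, not part of any API):

```python
# Hypothetical helper mirroring the lambda passed to rdd.map:
# split each record on spaces, keep the name (index 0) and role (index 2).
def pick_name_role(record):
    parts = record.split(' ')
    return parts[0] + ' ' + parts[2]

records = [
    "Dhoni 35 WC 785623",
    "Sachin 40 Batsman 4500",
    "Dravid 45 Batsman 50000",
    "Kumble 41 Bowler 456431",
    "Srinath 41 Bowler 65465",
]

print([pick_name_role(r) for r in records])
# → ['Dhoni WC', 'Sachin Batsman', 'Dravid Batsman', 'Kumble Bowler', 'Srinath Bowler']
```

Once the helper behaves as expected locally, the same function can be passed to `rdd.map(pick_name_role)` in place of the inline lambda.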