Tags: python, apache-spark, pyspark, rdd

How to do a string transformation of an RDD?


I have some documents from which I have to extract each word and then, for each document, aggregate the number of times that word occurs, using PySpark. I have managed to get the data into the format below:

["of#['d2:3', 'd4:10', 'd1:6', 'd3:13', 'd5:6', 'd6:9', 'd7:5']",
 "is#['d2:3', 'd4:8', 'd1:5', 'd3:1', 'd5:4', 'd6:6', 'd7:1']",
 "country#['d2:3', 'd1:1', 'd5:2', 'd6:2']",
 "in#['d2:5', 'd4:13', 'd1:2', 'd3:2', 'd5:2', 'd6:3', 'd7:3']",
 "seventh#['d2:1']"]

How can I transform the above RDD into something like the following?

of#d2:3, d4:10, d1:6, d3:13, d5:6, d6:9, d7:5, 
is#d2:3, d4:8, d1:5, d3:1, d5:4, d6:6, d7:1, 
country#d2:3, d1:1, d5:2, d6:2,
in#d2:5, d4:13, d1:2, d3:2, d5:2, d6:3, d7:3,
seventh#d2:1

I have attempted the following line of code, but I am getting an error. I would appreciate some input on where I am going wrong.

print(x.map(lambda x:str(x[0])+"#"+str(x[1])).take(5))

Solution

  • It seems you only want to remove the square brackets and single quotes from those string values. Note that each element of your RDD is already a single string, so in your attempted map, x[0] and x[1] index individual characters of that string rather than a word and its counts, which is likely why the call does not behave as you expect.
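    For illustration, here is a quick sketch (assuming the elements are exactly the strings shown above) of what that indexing actually returns:

    s = "of#['d2:3', 'd4:10']"
    
    # Indexing a string yields single characters, not a key/value pair
    print(str(s[0]) + "#" + str(s[1]))   # o#f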

    You can do something like this:

    import re
    
    # Remove every "[", "]" and "'" character from each record
    rdd1 = rdd.map(lambda x: re.sub(r"[\['\]]", "", x))
    
    for i in rdd1.collect():
        print(i)
        
    # of#d2:3, d4:10, d1:6, d3:13, d5:6, d6:9, d7:5
    # is#d2:3, d4:8, d1:5, d3:1, d5:4, d6:6, d7:1
    # country#d2:3, d1:1, d5:2, d6:2
    # in#d2:5, d4:13, d1:2, d3:2, d5:2, d6:3, d7:3
    # seventh#d2:1
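
  • Alternatively, if you would rather avoid regular expressions, str.translate can delete the same three characters in one pass. This is a minimal sketch, assuming the same rdd of strings as above:

    # Translation table that maps "[", "]" and "'" to None (i.e. deletes them)
    table = str.maketrans("", "", "[]'")
    
    rdd2 = rdd.map(lambda x: x.translate(table))
    
    for i in rdd2.collect():
        print(i)

    Both approaches produce the same output; which one to use is mostly a matter of taste.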