python, xml, apache, data-analysis, apache-spark-sql

PySpark: string and list of objects


I have a URL string with a placeholder:

https://hdchjhjedjekdn.com/{}_public.xml

and a list of IDs:

201611339349202661, 201611309349201761, 201543179349200944, 201631099349200733, 201610909349200511, 201630749349201058, 201601319349200235, 201641069349200909, 201542999349200004, 201611319349201771, 201641329349200119, 201513219349200536, 201543159349201769, 201612029349200631, 201621339349202247, 201611259349200506, 201611829349200301, 201543169349201114, 201543209349204979, 201641039349200509, 201621309349200642, 201512789349200031, 201601939349200520

I would like to fill the placeholder with each value from the list, like this:

s = (https://hdchjhjedjekdn.com/201611339349202661_public.xml, https://hdchjhjedjekdn.com/201611309349201761_public.xml, https://hdchjhjedjekdn.com/201543179349200944_public.xml, ...)

Any help using PySpark would be appreciated.


Solution

  • Parallelize your list of IDs and broadcast the URL string to the workers, then apply map to build each formatted string:

    >>> l = [201611339349202661, 201611309349201761, 201543179349200944, 201631099349200733, 201610909349200511, 201630749349201058, 201601319349200235, 201641069349200909, 201542999349200004, 201611319349201771, 201641329349200119, 201513219349200536, 201543159349201769, 201612029349200631, 201621339349202247, 201611259349200506, 201611829349200301, 201543169349201114, 201543209349204979, 201641039349200509, 201621309349200642, 201512789349200031, 201601939349200520]
    >>> rdd = sc.parallelize(l)
    >>> rdd.getNumPartitions()
    12  # I have used 12 workers
    >>> brd_url = sc.broadcast('https://hdchjhjedjekdn.com/{}_public.xml')
    >>> rdd1 = rdd.map(lambda x: brd_url.value.format(x))
    >>> rdd1.take(2)
    ['https://hdchjhjedjekdn.com/201611339349202661_public.xml', 'https://hdchjhjedjekdn.com/201611309349201761_public.xml']
    

    Hope this helps.
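    As a side note, for a list this small the same result can be produced in plain Python with a list comprehension, without needing an RDD or a broadcast variable; a minimal sketch (the shortened `ids` list here is just for illustration):

    ```python
    # URL template with a placeholder for each ID.
    url = 'https://hdchjhjedjekdn.com/{}_public.xml'

    # A few IDs from the question; extend with the full list as needed.
    ids = [201611339349202661, 201611309349201761, 201543179349200944]

    # str.format fills the {} placeholder with each ID.
    urls = [url.format(i) for i in ids]
    ```

    If the URLs then need to be processed in parallel, `sc.parallelize(urls)` will distribute the finished list across the workers.
    
    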