Tags: python, python-3.x, pyspark, bigdata, rdd

Joining two strings in a single RDD to form a new RDD in PySpark


I have an RDD, and after calling collect() it looks like this:

rdd = [('Amazon', '2016/01/09', '17:06:24', '17:10:03'),
       ('Amazon', '2016/02/09', '17:06:55', '17:10:00'),
       ('Amazon', '2016/02/09', '17:10:02', '17:19:00'),
       ('Amazon', '2016/02/09', '17:13:09', '17:19:00'),
       ('Aliexpress', '2016/03/09', '17:00:40', '17:23:00'),
       ('Aliexpress', '2016/03/09', '17:03:50', '17:12:05'),
       ('Aliexpress', '2016/03/09', '17:10:12', '17:12:38'),
       ('Aliexpress', '2016/03/09', '17:13:23', '17:23:00')]

But I want to transform the RDD so that after calling collect() the output looks like this:

Newrdd = [('Amazon 01', '17:06:24', '17:10:03'),
          ('Amazon 02', '17:06:55', '17:10:00'),
          ('Amazon 02', '17:10:02', '17:19:00'),
          ('Amazon 02', '17:13:09', '17:19:00'),
          ('Aliexpress 03', '17:00:40', '17:23:00'),
          ('Aliexpress 03', '17:03:50', '17:12:05'),
          ('Aliexpress 03', '17:10:12', '17:12:38'),
          ('Aliexpress 03', '17:13:23', '17:23:00')]

I want to join, for example, 'Amazon' with '01' ('01' is the month, taken from '2016/01/09').
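For reference, splitting the date string on '/' and taking the element at index 1 gives the month as a two-character string:

'2016/01/09'.split('/')[1]   # -> '01'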

I tried this:

Newrdd = rdd.map(lambda y: y[0].join((y[1].split('/')[1])))
print(Newrdd.collect())

But I am not getting the desired output. Could anyone tell me why?


Solution

  • I was able to solve it like this:

    # Concatenate the name with the month (split out of the date) and
    # keep the two timestamps, producing a 3-tuple per record:
    Newrdd = rdd.map(lambda y: (y[0] + ' ' + y[1].split('/')[1], y[2], y[3]))
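
    As for why the original attempt fails: str.join treats its argument as an iterable of characters and inserts the calling string between them, so 'Amazon'.join('01') yields the plain string '0Amazon1' rather than the tuple ('Amazon 01', ...). Below is a minimal, self-contained sketch demonstrating both behaviours (it assumes a local PySpark installation; the SparkContext setup and the trimmed data are illustrative, not from the original post):

    from pyspark import SparkContext

    sc = SparkContext("local", "join-month-example")

    data = [('Amazon', '2016/01/09', '17:06:24', '17:10:03'),
            ('Aliexpress', '2016/03/09', '17:00:40', '17:23:00')]
    rdd = sc.parallelize(data)

    # str.join inserts the caller between the characters of its argument,
    # so the month '01' becomes '0Amazon1':
    broken = rdd.map(lambda y: y[0].join(y[1].split('/')[1]))
    print(broken.collect())   # ['0Amazon1', '0Aliexpress3']

    # Plain '+' concatenation builds the intended key, and the result
    # stays a tuple that keeps the two timestamps:
    fixed = rdd.map(lambda y: (y[0] + ' ' + y[1].split('/')[1], y[2], y[3]))
    print(fixed.collect())    # [('Amazon 01', '17:06:24', '17:10:03'),
                              #  ('Aliexpress 03', '17:00:40', '17:23:00')]

    sc.stop()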