How can I convert this kind of data
"Row-Key-001, K1, 10, A2, 20, K3, 30, B4, 42, K5, 19, C20, 20"
"Row-Key-002, X1, 20, Y6, 10, Z15, 35, X16, 42"
"Row-Key-003, L4, 30, M10, 5, N12, 38, O14, 41, P13, 8"
to a Spark RDD using Scala so that we get:
Row-Key-001, K1
Row-Key-001, A2
Row-Key-001, K3
Row-Key-001, B4
Row-Key-001, K5
Row-Key-001, C20
Row-Key-002, X1
Row-Key-002, Y6
Row-Key-002, Z15
Row-Key-002, X16
Row-Key-003, L4
Row-Key-003, M10
Row-Key-003, N12
Row-Key-003, O14
Row-Key-003, P13
I think we can split the input into an array of lines, split each line on ',', and then build pairs with the first element of each row as the key and every alternate element of the rest as a value.
But I need help implementing this in Scala.
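To show what I mean, here is the per-line logic in plain Scala (no Spark yet); the sample line is just one row from the data above:
val parts = "Row-Key-001, K1, 10, A2, 20".split(", ")  // head is the key, tail alternates name, number
val key = parts.head
// Keep every alternate element of the tail (the names), dropping the numbers.
val names = parts.tail.zipWithIndex.collect { case (v, i) if i % 2 == 0 => v }
// names: Array(K1, A2) -- what I don't know is how to do this across a whole RDD.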
If you have a text file with the following data
Row-Key-001, K1, 10, A2, 20, K3, 30, B4, 42, K5, 19, C20, 20
Row-Key-002, X1, 20, Y6, 10, Z15, 35, X16, 42
Row-Key-003, L4, 30, M10, 5, N12, 38, O14, 41, P13, 8
then you can read it using SparkContext's textFile API as
val rdd = sc.textFile("path to the text file")
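If you just want to experiment without a file, you can build an equivalent RDD in memory instead (a quick spark-shell sketch, assuming sc is the SparkContext that spark-shell provides):
// Build the same three lines as an in-memory RDD for testing.
val rdd = sc.parallelize(Seq(
  "Row-Key-001, K1, 10, A2, 20, K3, 30, B4, 42, K5, 19, C20, 20",
  "Row-Key-002, X1, 20, Y6, 10, Z15, 35, X16, 42",
  "Row-Key-003, L4, 30, M10, 5, N12, 38, O14, 41, P13, 8"
))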
Either way, you get an RDD[String] of lines. You can then parse it using map and flatMap:
rdd.map(_.split(", "))  // split each line into Array(key, name1, count1, name2, count2, ...)
  .flatMap(x => x.tail.grouped(2).map(y => (x.head, y.head)))  // pair the key with each name
which should give you the result:
(Row-Key-001,K1)
(Row-Key-001,A2)
(Row-Key-001,K3)
(Row-Key-001,B4)
(Row-Key-001,K5)
(Row-Key-001,C20)
(Row-Key-002,X1)
(Row-Key-002,Y6)
(Row-Key-002,Z15)
(Row-Key-002,X16)
(Row-Key-003,L4)
(Row-Key-003,M10)
(Row-Key-003,N12)
(Row-Key-003,O14)
(Row-Key-003,P13)
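The trick is grouped(2), which walks the tail two elements at a time (for the first line: Array(K1, 10), Array(A2, 20), ...), and y.head keeps only the name. To actually see the pairs in spark-shell, collect the RDD to the driver and print it; collect is fine for a small sample like this, and result is just an assumed name for the parsed RDD:
val result = rdd.map(_.split(", "))
  .flatMap(x => x.tail.grouped(2).map(y => (x.head, y.head)))

result.collect().foreach(println)  // prints the (key, name) tuples listed above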
I hope the answer is helpful.
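One caveat: the sample rows in the question are wrapped in double quotes. If your file literally contains those quote characters, strip them before splitting, otherwise the first key would come out with a leading quote. A small variation that handles this (the regex removes a leading or trailing double quote):
// Strip surrounding double quotes from each line before parsing.
rdd.map(_.replaceAll("^\"|\"$", ""))
  .map(_.split(", "))
  .flatMap(x => x.tail.grouped(2).map(y => (x.head, y.head)))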