I want to construct a distance matrix using values from a DataFrame in PySpark. What I have right now is
+----+-------------+
| id | list |
+----+-------------+
| 1 | [a, b, ...] |
+----+-------------+
| 2 | [c, d, ...] |
+----+-------------+
| 3 | [e, f, ...] |
+----+-------------+
I want to use my own distance function and do something like
for i in range(len(ids)):
    for j in range(i + 1, len(ids)):
        dist = calculate_distance(features[i], features[j])
        add_row_to_distance_df([ids[i], ids[j], dist])
EDIT: Expected output is
+-----+-----+-----------------------------+
| id1 | id2 | dist |
+-----+-----+-----------------------------+
| 1 | 2 | d([a, b, ...], [c, d, ...]) |
+-----+-----+-----------------------------+
| 1 | 3 | d([a, b, ...], [e, f, ...]) |
+-----+-----+-----------------------------+
| 2 | 3 | d([c, d, ...], [e, f, ...]) |
+-----+-----+-----------------------------+
How do I go about doing this?
You can use cartesian() and then filter() to keep just the upper triangle of pairs, e.g.:
In []:
def calculate_distance(a, b):
    return f'd({a}, {b})'  # placeholder; f-strings need Python 3.6+

rdd = sc.parallelize([(1, ['a', 'b', 'c']), (2, ['c', 'd', 'e']), (3, ['e', 'f', 'g'])])
(rdd.cartesian(rdd)                         # all (row, row) pairs
    .filter(lambda x: x[0][0] < x[1][0])    # keep only id1 < id2 (upper triangle)
    .map(lambda x: (x[0][0], x[1][0], calculate_distance(x[0][1], x[1][1])))
    .collect())
Out[]:
[(1, 2, "d(['a', 'b', 'c'], ['c', 'd', 'e'])"),
(1, 3, "d(['a', 'b', 'c'], ['e', 'f', 'g'])"),
(2, 3, "d(['c', 'd', 'e'], ['e', 'f', 'g'])")]
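For intuition, the pairs that survive the cartesian()/filter() step are exactly what itertools.combinations produces. Here is a driver-side sketch (plain Python, no Spark) using the same placeholder calculate_distance, which you would replace with your real metric:

```python
from itertools import combinations

def calculate_distance(a, b):
    # placeholder distance; substitute your own function
    return f'd({a}, {b})'

rows = [(1, ['a', 'b', 'c']), (2, ['c', 'd', 'e']), (3, ['e', 'f', 'g'])]

# combinations(rows, 2) yields exactly the i < j pairs
# that the cartesian()/filter() pipeline keeps in Spark
result = [(i, j, calculate_distance(u, v))
          for (i, u), (j, v) in combinations(rows, 2)]
```

This only works when the data fits on the driver, so it is useful for checking the pairing logic rather than as a replacement for the distributed version.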