I am building a table that contains ratios for pairs of ids, using a kind of min-max-scaling logic. My current code has a prohibitively long runtime, largely because of the size of the data, and I am looking for a more efficient way to do the same thing.
Here is the code I've written; I will explain what it does below.
```python
import pandas as pd
import numpy as np
import itertools

indexColumn = 'id'
limit_ratio = 0.6

result_data = pd.DataFrame([['id1', 1], ['id2', 1], ['id3', 1], ['id1', 2], ['id3', 2],
                            ['id1', 4], ['id2', 4], ['id1', 5], ['id2', 5], ['id1', 6],
                            ['id5', 6], ['id5', 7], ['id6', 7]], columns=['id', 'labels'])

# For every label, build all unordered pairs of ids that share it,
# then count how many labels each pair has in common.
result_data = result_data.groupby('labels')[indexColumn].apply(lambda x: list(itertools.combinations(x, 2))).reset_index()
result_data = result_data.explode(indexColumn)
result_data = result_data.groupby(indexColumn)['labels'].agg('count').rename('count').reset_index()

# Split each (id1, id2) tuple into two separate columns.
result_data[['id1', 'id2']] = pd.DataFrame(result_data[indexColumn].tolist(), index=result_data.index)
```
At this point, `result_data` looks like:
id | count | id1 | id2 |
---|---|---|---|
(id1,id2) | 3 | id1 | id2 |
(id1,id3) | 2 | id1 | id3 |
(id1,id5) | 1 | id1 | id5 |
(id2,id3) | 1 | id2 | id3 |
(id5,id6) | 1 | id5 | id6 |
Basically, the code above counts the number of labels a pair of ids has in common. For example, `id1` and `id2` both have label 1, so the count for that pair increments by 1; a quick check of this is sketched below.
(I especially want to thank Joseph Fernandez, who helped me with the above code in the past)
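As a sanity check on the sample data, here is a throwaway snippet (not part of the pipeline) showing that the count for a pair equals the number of labels the two ids share; `raw` is my own name for a fresh copy of the original 13-row frame, since the pipeline above overwrites `result_data`:

```python
import pandas as pd

# Fresh copy of the raw 13-row frame from above.
raw = pd.DataFrame([['id1', 1], ['id2', 1], ['id3', 1], ['id1', 2], ['id3', 2],
                    ['id1', 4], ['id2', 4], ['id1', 5], ['id2', 5], ['id1', 6],
                    ['id5', 6], ['id5', 7], ['id6', 7]], columns=['id', 'labels'])

# Map each id to the set of labels it appears with.
labels_by_id = raw.groupby('id')['labels'].agg(set)

# The labels shared by id1 and id2 are {1, 4, 5}, so the pair count is 3.
shared = labels_by_id['id1'] & labels_by_id['id2']
print(shared, len(shared))  # {1, 4, 5} 3
```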
```python
result_data['ratio'] = result_data['count'] / float(result_data['count'].max())
result_data = result_data.sort_values(by='ratio', ascending=False)
```
Then, `result_data` is:
id | count | id1 | id2 | ratio |
---|---|---|---|---|
(id1,id2) | 3 | id1 | id2 | 1.000 |
(id1,id3) | 2 | id1 | id3 | 0.667 |
(id1,id5) | 1 | id1 | id5 | 0.333 |
(id2,id3) | 1 | id2 | id3 | 0.333 |
(id5,id6) | 1 | id5 | id6 | 0.333 |
For the `ratio` here, I calculate `ratio = count / max(count)`, where the maximum is taken over all pairs (this is what the division by `result_data['count'].max()` above does).
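For example, the largest count in the table is 3 (the pair (id1, id2)), so (id1, id3) gets ratio = 2 / 3 ≈ 0.667, and every pair with count 1 gets 1 / 3 ≈ 0.333.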
This is a simplified version of the data to explain the code; in reality, `result_data` has about 120 million rows at this point. The following code is where the computation time blows up: every iteration scans and merges the whole frame, so the total work scales with (number of unique ids) × (number of rows).
```python
if limit_ratio is not None:
    result_data[['ratio1', 'ratio2']] = np.NaN
    idlist = np.unique(result_data[['id1', 'id2']])
    for id in idlist:
        # All pairs containing this id, rescaled by the largest count among them.
        tmp = result_data[(result_data['id1'] == id) | (result_data['id2'] == id)][['id1', 'id2', 'count', 'ratio']]
        tmp['ratio1'] = tmp['count'] / float(tmp['count'].max())
        tmp['ratio2'] = tmp['ratio1']
        # ratio1 applies where the id sits in id1, ratio2 where it sits in id2.
        tmp.loc[tmp['id1'] == id, ['ratio2']] = np.NaN
        tmp.loc[tmp['id2'] == id, ['ratio1']] = np.NaN
        tmp = tmp[['id1', 'id2', 'ratio1', 'ratio2']]
        # Merge the per-id ratios back in and keep whichever value is filled.
        result_data = pd.merge(result_data, tmp, how='outer', on=['id1', 'id2'])
        result_data['ratio1'] = result_data['ratio1_x'].where(result_data['ratio1_x'].notna(), result_data['ratio1_y'])
        result_data['ratio2'] = result_data['ratio2_x'].where(result_data['ratio2_x'].notna(), result_data['ratio2_y'])
        result_data = result_data[['id1', 'id2', 'count', 'ratio', 'ratio1', 'ratio2']]
```
Then, the result will be
id1 | id2 | count | ratio | ratio1 | ratio2 |
---|---|---|---|---|---|
id1 | id2 | 3 | 1.000 | 1.000 | 1.0 |
id1 | id3 | 2 | 0.667 | 0.667 | 1.0 |
id1 | id5 | 1 | 0.333 | 0.333 | 1.0 |
id2 | id3 | 1 | 0.333 | 0.333 | 0.5 |
id5 | id6 | 1 | 0.333 | 1.000 | 1.0 |
To explain more about `ratio1` and `ratio2`: for each row, `ratio1 = count / max(count over all pairs containing id1)` and `ratio2 = count / max(count over all pairs containing id2)`. In other words, each count is rescaled by the largest count observed for that particular id, once from the perspective of `id1` and once from the perspective of `id2`.
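For example, take the row (id2, id3): the counts of all pairs containing id2 are {3, 1, 1}, so ratio1 = 1 / 3 ≈ 0.333; the counts of all pairs containing id3 are {2, 1}, so ratio2 = 1 / 2 = 0.5. The row (id5, id6) gets ratio1 = ratio2 = 1 / 1 = 1.0, because a count of 1 is already the maximum for both id5 and id6.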
After all this, the final goal is to keep only the rows where at least one of the two ratios reaches the threshold `limit_ratio`:
```python
result_data = result_data[(result_data['ratio1'] >= float(limit_ratio)) | (result_data['ratio2'] >= float(limit_ratio))]
```
This will give me
id1 | id2 | count | ratio | ratio1 | ratio2 |
---|---|---|---|---|---|
id1 | id2 | 3 | 1.000 | 1.000 | 1.0 |
id1 | id3 | 2 | 0.667 | 0.667 | 1.0 |
id1 | id5 | 1 | 0.333 | 0.333 | 1.0 |
id5 | id6 | 1 | 0.333 | 1.000 | 1.0 |
Here, each loop iteration takes about 12 minutes. The data has 50,000 unique IDs, so the projected total is 12 × 50,000 = 600,000 minutes = 10,000 hours. For your information, I am using Python 3.8.8, pandas 1.2.3, and numpy 1.19.5.
A vectorized approach with `melt`, `groupby`, and `map` removes the per-id loop entirely. Starting from `result_data` at the point where `ratio` has been computed:

```python
>>> result_data
           id  count  id1  id2     ratio
0  (id1, id2)      3  id1  id2  1.000000
1  (id1, id3)      2  id1  id3  0.666667
2  (id1, id5)      1  id1  id5  0.333333
3  (id2, id3)      1  id2  id3  0.333333
4  (id5, id6)      1  id5  id6  0.333333
```
```python
melted = result_data.melt('count', ['id1', 'id2'])
maxima = melted.groupby('value')['count'].max()
result_data['ratio1'] = result_data['count'] / result_data['id1'].map(maxima)
result_data['ratio2'] = result_data['count'] / result_data['id2'].map(maxima)
result_data = result_data.query("ratio1 >= @limit_ratio or ratio2 >= @limit_ratio")
```
Melt the dataframe, specifying `id_vars` as `count` and `value_vars` as `['id1', 'id2']`:
```python
>>> melted
   count variable value
0      3      id1   id1
1      2      id1   id1
2      1      id1   id1
3      1      id1   id2
4      1      id1   id5
5      3      id2   id2
6      2      id2   id3
7      1      id2   id5
8      1      id2   id3
9      1      id2   id6
```
Now group the melted dataframe by the `value` column and aggregate `count` using `max` to calculate the maximum count per id:
```python
>>> maxima
value
id1    3
id2    3
id3    2
id5    1
id6    1
Name: count, dtype: int64
```
`map` the calculated `maxima` onto the columns `id1` and `id2`:
```python
>>> result_data['id1'].map(maxima)
0    3
1    3
2    3
3    3
4    1
Name: id1, dtype: int64
```
```python
>>> result_data['id2'].map(maxima)
0    3
1    2
2    1
3    2
4    1
Name: id2, dtype: int64
```
Now divide the `count` column by the mapped `id1` and `id2` maxima respectively to calculate `ratio1` and `ratio2`:
```python
>>> result_data[['ratio1', 'ratio2']]
     ratio1  ratio2
0  1.000000     1.0
1  0.666667     1.0
2  0.333333     1.0
3  0.333333     0.5
4  1.000000     1.0
```
`query` the dataframe to filter the rows against the given threshold `limit_ratio`:
```python
>>> result_data
           id  count  id1  id2     ratio    ratio1  ratio2
0  (id1, id2)      3  id1  id2  1.000000  1.000000     1.0
1  (id1, id3)      2  id1  id3  0.666667  0.666667     1.0
2  (id1, id5)      1  id1  id5  0.333333  0.333333     1.0
4  (id5, id6)      1  id5  id6  0.333333  1.000000     1.0
```
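For reference, here is a sketch of the whole pipeline stitched together in vectorized form (the intermediate names `data`, `pairs`, `result` are my own; on the sample data this reproduces the tables above, and it should behave like the original loop):

```python
import itertools
import numpy as np
import pandas as pd

indexColumn = 'id'
limit_ratio = 0.6

data = pd.DataFrame([['id1', 1], ['id2', 1], ['id3', 1], ['id1', 2], ['id3', 2],
                     ['id1', 4], ['id2', 4], ['id1', 5], ['id2', 5], ['id1', 6],
                     ['id5', 6], ['id5', 7], ['id6', 7]], columns=['id', 'labels'])

# Count how many labels each unordered pair of ids shares.
pairs = data.groupby('labels')[indexColumn].apply(lambda x: list(itertools.combinations(x, 2)))
pairs = pairs.reset_index().explode(indexColumn)
result = pairs.groupby(indexColumn)['labels'].agg('count').rename('count').reset_index()
result[['id1', 'id2']] = pd.DataFrame(result[indexColumn].tolist(), index=result.index)

# Global ratio plus the two per-id ratios, all without a Python-level loop.
result['ratio'] = result['count'] / result['count'].max()
melted = result.melt('count', ['id1', 'id2'])
maxima = melted.groupby('value')['count'].max()
result['ratio1'] = result['count'] / result['id1'].map(maxima)
result['ratio2'] = result['count'] / result['id2'].map(maxima)

# Keep only rows where at least one per-id ratio reaches the threshold.
if limit_ratio is not None:
    result = result.query("ratio1 >= @limit_ratio or ratio2 >= @limit_ratio")
print(result)
```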