python · pandas · dataframe · dictionary · processing-efficiency

Efficiently iterating over a df column of lists and replacing keys with their values from a dictionary in Python


I have a dict of item probabilities and a df with 5 million rows that looks like this:

user_id   item_list
 U1       [I1,I3,I4]
 U2       [I5,I4]

and a dict: {'I1': 0.1, 'I4': 0.4, ..}

I am trying to go over each row and create a list of probabilities, like this:

user_id   item_list     prob_list
 U1       [I1,I3,I4]    [0.1,0.4]
 U2       [I5,I4]       [0.4]
  • Note that not all items have probabilities.
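
For reference, the sample data above can be built like this (a minimal sketch; the names match the snippets below):

import pandas as pd

df = pd.DataFrame({'user_id': ['U1', 'U2'],
                   'item_list': [['I1', 'I3', 'I4'], ['I5', 'I4']]})
prob_dict = {'I1': 0.1, 'I4': 0.4}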

This is my code:

import numpy as np
from tqdm import tqdm

tqdm.pandas()  # enables df.progress_apply

def get_probability(prob_dict, keys, item_list):
    prob_list = []
    for item in item_list:
        if item in keys:
            prob = prob_dict[item]
            prob_list.append(prob)

    if len(prob_list) >= 1:
        return prob_list
    else:
        return np.nan

keys = list(prob_dict.keys())
df['prob_list'] = df.progress_apply(
    lambda x: get_probability(prob_dict=prob_dict, keys=keys,
                              item_list=x['item_list']),
    axis=1)

Since I am using tqdm, I know how long it's going to take (about 120 hours), which is far too long and clearly not efficient.

Any ideas on how I can do this more efficiently?


Solution

  • Use Series.transform to apply a function to each list in item_list: wrap the list in a pandas Series, map it against the mapping dictionary d with Series.map, then use dropna to drop the unmatched NaN values:

    import pandas as pd

    d = {'I1': 0.1, 'I4': 0.4}

    # For each row, wrap the list in a Series, map items to their
    # probabilities (unmatched items become NaN), drop the NaN values
    # and keep the remaining probabilities as an array.
    df['prob_list'] = (
        df['item_list'].transform(lambda s: pd.Series(s).map(d).dropna().values)
    )
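
    Since Series.map looks each item up in the dict by hash, this already avoids the O(n) scan over the keys list in the question's code. If the per-row Series construction is still too slow, a plain list comprehension over the column is often faster (a sketch, assuming the same d as above and numpy imported as np):

    # assumes each row's item_list is a list (no NaN rows)
    df['prob_list'] = [
        [d[k] for k in lst if k in d] or np.nan
        for lst in df['item_list']
    ]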
    

    UPDATE (Use multiprocessing to improve the speed of mapping the item_list to prob_list):

    import multiprocessing as mp

    import numpy as np
    import pandas as pd

    def map_prob(s):
        # Drop rows whose item_list is NaN, then build each row's
        # probability list from the items that exist in d.
        s = s[~s.isna()]
        return s.transform(
            lambda lst: [d[k] for k in lst if k in d] or np.nan)

    def parallel_map(item_list):
        # Split the column into one chunk per CPU core, map the chunks
        # in worker processes, and stitch the results back together.
        splits = np.array_split(item_list, mp.cpu_count())
        pool = mp.Pool()
        prob_list = pd.concat(pool.map(map_prob, splits))
        pool.close()
        pool.join()
        return prob_list

    df['prob_list'] = parallel_map(df['item_list'])
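
    Note that on platforms where multiprocessing starts workers with "spawn" (Windows, and macOS on recent Python versions), d and the functions above must be defined at module level, and the pool must be created under a main guard, e.g.:

    if __name__ == '__main__':
        df['prob_list'] = parallel_map(df['item_list'])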
    

    Result:

    # print(df)
      user_id     item_list   prob_list
    0      U1  [I1, I3, I4]  [0.1, 0.4]
    1      U2      [I5, I4]       [0.4]