I have been hearing a lot that pandas apply is slow and should be used as little as possible. In the following situation, I need to compute the column Pct_Change_Adjusted without using apply:
import pandas as pd

df = pd.DataFrame({'Date': ['2019-01-02', '2019-01-03', '2019-01-04'],
                   'Fund_ID': [9072, 9072, 9072],
                   'Fund_Series': ['A', 'A', 'A'],
                   'Value': [1020.0, 1040.4, 1009.188],
                   'Dividend': [0.0, 0.0, 52.02]})
After grouping, I would like to compute the dividend-adjusted percentage change as shown below:
df['Pct_Change_Adjusted'] = df.groupby(['Fund_ID', 'Fund_Series'], as_index=False) \
                              .apply(lambda x: (x.Value + x.Dividend) / (x.Value.shift() + x.Dividend.shift()) - 1) \
                              .reset_index(drop=True).values[0]
print(df)
Date Dividend Fund_ID Fund_Series Value Pct_Change_Adjusted
0 2019-01-02 0.00 9072 A 1020.000 NaN
1 2019-01-03 0.00 9072 A 1040.400 0.02
2 2019-01-04 52.02 9072 A 1009.188 0.02
Are there any alternatives to apply() here that would be more efficient, or at least a second way of doing this?
Note: I am not talking about dask or other parallelization, only pure pandas.
Yes, this is 100% vectorizable using groupby.pct_change:
(df.Value + df.Dividend).groupby([df.Fund_ID, df.Fund_Series]).pct_change()
0 NaN
1 0.02
2 0.02
dtype: float64
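If it helps to see what pct_change does per group, the same numbers fall out of a grouped shift followed by a division, since pct_change(periods=1) is x / x.shift(1) - 1 within each group. A minimal sketch of that equivalence (total and prev are just illustrative intermediate names):

# pct_change spelled out by hand: divide each value by the previous
# value within its (Fund_ID, Fund_Series) group, then subtract 1.
total = df['Value'] + df['Dividend']
prev = total.groupby([df['Fund_ID'], df['Fund_Series']]).shift()
total / prev - 1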
To assign the result back to df, first build the summed Series as a temporary column with assign (Foo is just a throwaway name):

df['Pct_Change_Adjusted'] = (df.assign(Foo=df['Value'] + df['Dividend'])
                               .groupby(['Fund_ID', 'Fund_Series'])
                               .Foo
                               .pct_change())
df
Date Fund_ID Fund_Series Value Dividend Pct_Change_Adjusted
0 2019-01-02 9072 A 1020.000 0.00 NaN
1 2019-01-03 9072 A 1040.400 0.00 0.02
2 2019-01-04 9072 A 1009.188 52.02 0.02
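To see the efficiency difference on more than three rows, here is a rough benchmark sketch; the frame size, group counts, and value ranges are made up for illustration, and the %timeit magics assume an IPython/Jupyter session:

import numpy as np
import pandas as pd

# Synthetic data: 100k rows spread over a few hundred (Fund_ID, Fund_Series) groups.
rng = np.random.default_rng(0)
n = 100_000
big = pd.DataFrame({'Fund_ID': rng.integers(9000, 9100, n),
                    'Fund_Series': rng.choice(list('ABC'), n),
                    'Value': rng.uniform(900.0, 1100.0, n),
                    'Dividend': rng.uniform(0.0, 60.0, n)})

# Vectorized: a single grouped pct_change over the summed Series.
%timeit (big.Value + big.Dividend).groupby([big.Fund_ID, big.Fund_Series]).pct_change()

# apply: one Python-level lambda call per group.
%timeit big.groupby(['Fund_ID', 'Fund_Series']).apply(lambda x: (x.Value + x.Dividend) / (x.Value.shift() + x.Dividend.shift()) - 1)

The apply version pays Python-level overhead once per group, so the gap widens as the number of groups grows.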