I have a sample DataFrame which I want to normalize based on two conditions.
Creating the sample DataFrame:
import numpy as np
import pandas as pd

sample_df = pd.DataFrame(np.random.randint(1, 20, size=(10, 3)), columns=list('ABC'))
sample_df["date"] = ["2020-02-01", "2020-02-01", "2020-02-01", "2020-02-01", "2020-02-01",
                     "2020-02-02", "2020-02-02", "2020-02-02", "2020-02-02", "2020-02-02"]
sample_df["date"] = pd.to_datetime(sample_df["date"])
sample_df.set_index("date", inplace=True)  # use the date column as the index
sample_df["A_cat"] = ["ind", "sa", "sa", "sa", "ind", "ind", "sa", "sa", "ind", "sa"]
sample_df["B_cat"] = ["sa", "ind", "ind", "sa", "sa", "sa", "ind", "sa", "ind", "sa"]
print(sample_df)
Output:
             A   B   C A_cat B_cat
date
2020-02-01  14  11   7   ind    sa
2020-02-01  19  17   3    sa   ind
2020-02-01  19   6   3    sa   ind
2020-02-01   3  16   5    sa    sa
2020-02-01  12   6  16   ind    sa
2020-02-02   1   8  12   ind    sa
2020-02-02  10  13  19    sa   ind
2020-02-02  17   2   7    sa    sa
2020-02-02   9  13  17   ind   ind
2020-02-02  17  16   3    sa    sa
Conditions to normalize:
1. Group by the index, and
2. Normalize the selected columns within each group.
For example, if the selected columns are ["A", "B"], it should first group by the index (here 2020-02-01) and then normalize those columns across the 5 rows of that group.
Other inputs:
selected_columns = ["A", "B"]
I can do this in a for loop by iterating over the groups, normalizing each one, and concatenating the results (a rough version of that loop is sketched below). Any suggestions for a more efficient, pandas-based approach would be great.
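For reference, this is a minimal sketch of the loop-and-concatenate approach I mean (it assumes the sample_df and selected_columns defined above; normalized_parts and loop_result are just illustrative names):

# Normalize the selected columns group by group, then stitch the pieces back together.
normalized_parts = []
for date, group in sample_df.groupby(level=0):
    part = group[selected_columns]
    # z-score within the group: (x - mean) / std (population std, ddof=0)
    normalized_parts.append((part - part.mean()) / part.std(ddof=0))
loop_result = pd.concat(normalized_parts)
print(loop_result)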
Code tried with pandas:
from sklearn.preprocessing import StandardScaler
dfg = StandardScaler()
sample_df.groupby([sample_df.index.get_level_values(0)])[selected_columns].transform(dfg.fit_transform)
Error:
('Expected 2D array, got 1D array instead:\narray=[14. 19. 19. 3. 12.].\nReshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.', 'occurred at index A')
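As far as I understand, this error happens because transform hands each selected column to StandardScaler.fit_transform as a 1-D Series, while the scaler expects a 2-D array. A possible workaround, if StandardScaler should stay in the picture, is to route each whole 2-D group through apply instead (a sketch assuming the objects above; scale_group is just a name I made up):

from sklearn.preprocessing import StandardScaler

def scale_group(group):
    # StandardScaler wants 2-D input, so pass the whole group at once
    scaled = StandardScaler().fit_transform(group)
    return pd.DataFrame(scaled, index=group.index, columns=group.columns)

scaled_df = sample_df.groupby(level=0, group_keys=False)[selected_columns].apply(scale_group)
print(scaled_df)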
This works:
sample_df.groupby(sample_df.index.get_level_values(0))[selected_columns].transform(lambda x: (x - np.mean(x)) / np.std(x))
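One thing worth noting: np.std defaults to the population standard deviation (ddof=0), which matches what StandardScaler uses, so the two approaches should give the same numbers. If the goal is to keep the normalized values next to the originals, a small usage sketch (the *_norm column names are just my choice):

normalized = sample_df.groupby(level=0)[selected_columns].transform(lambda x: (x - np.mean(x)) / np.std(x))
for col in selected_columns:
    # add the per-group z-scores as new columns, e.g. A_norm, B_norm
    sample_df[col + "_norm"] = normalized[col]
print(sample_df)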