I need to make a scorer that calculates a score based on three lists/arrays: y_true, y_pred, and sample_value. The problem is that the scorer inside grid search computes a score for both the training and the validation set, and I don't know how to tell them apart. This is what I tried (full example):
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
def RF_metric(y_true, y_pred, sample_value):
    dict_temp = {'y_pred': list(y_pred), 'y_true': list(y_true),
                 'sample_value': sample_value}
    df_temp = pd.DataFrame(dict_temp)
    df_temp['daily_score'] = df_temp[['y_pred', 'y_true', 'sample_value']].apply(
        lambda row: row[2] if row[0] == row[1] else -row[2], axis=1)
    df_temp['cum_score'] = df_temp['daily_score'].cumsum()
    final_score = df_temp['cum_score'].to_list()[-1]
    return final_score
param_dict = {'n_estimators': [100, 150, 200],
              'max_depth': [5, 10, 15],
              }
dates = pd.date_range(start='2020-01-01', end='2020-10-01')
df = pd.DataFrame({'Date': dates, 'A': np.random.rand(len(dates)), 'B': np.random.rand(len(dates)),
                   'label': np.random.choice([0, 1], len(dates)), 'sample_value': np.random.rand(len(dates))})
train_start = pd.to_datetime('2020-01-01')
train_end = pd.to_datetime('2020-06-01')
val_start = train_end
val_end = pd.to_datetime('2020-07-01')
df_train = df[(train_start <= df['Date']) & (df['Date'] < train_end)]
df_val = df[(val_start <= df['Date']) & (df['Date'] < val_end)]
cv_list = [(list(df_train.index), list(df_val.index))]
X = df[['A', 'B']].values
Y = df[['label']].values.ravel()
clf = RandomForestClassifier()
scoring = make_scorer(RF_metric, sample_value=df_val['sample_value'].to_list())
gs = GridSearchCV(clf, param_dict, cv=cv_list, scoring=scoring, n_jobs=4)
gs.fit(X, Y)
and the error is ValueError: arrays must all be same length
Since upgrading your version helped, it seems the problem is that return_train_score used to default to True, so indeed your scoring was being passed the training set but with the validation set's sample_value.
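If you stay on an older scikit-learn, or simply want to be explicit, you can also pass return_train_score=False yourself so the scorer is only ever called on the validation fold. A minimal sketch, reusing the names from your example:

gs = GridSearchCV(clf, param_dict, cv=cv_list, scoring=scoring,
                  n_jobs=4, return_train_score=False)  # never score the training fold
gs.fit(X, Y)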
One solution (that would help e.g. if you did still want the training score, or wanted to switch to k-fold cross-validation) is to not use the convenience function make_scorer. It just returns a callable with signature (estimator, X, y) where a larger "score" is better. You can write such a callable yourself, and then you have access to all of X (including the column sample_value!) instead of just the estimator's predictions.
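A minimal sketch of that idea, assuming you append sample_value to X as an extra column and wrap the forest in a pipeline that drops that column before fitting (the drop_sample_value helper, the rf_score name, and the column order are illustrative assumptions, not your original setup):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

def drop_sample_value(X):
    # Strip the trailing sample_value column so the forest never trains on it.
    return X[:, :-1]

def rf_score(estimator, X, y):
    # Custom scorer with the (estimator, X, y) signature: X is whichever fold
    # is being scored, so its last column is that fold's own sample_value.
    sample_value = X[:, -1]
    y_pred = estimator.predict(X)
    # +sample_value for a correct prediction, -sample_value otherwise; the sum
    # equals the final cum_score value from your RF_metric.
    return np.where(y_pred == y, sample_value, -sample_value).sum()

clf = make_pipeline(FunctionTransformer(drop_sample_value), RandomForestClassifier())

X = df[['A', 'B', 'sample_value']].values  # features plus the sample_value column
Y = df['label'].values

# The forest now sits inside a pipeline, so its parameters need a prefix.
param_dict = {'randomforestclassifier__n_estimators': [100, 150, 200],
              'randomforestclassifier__max_depth': [5, 10, 15]}

gs = GridSearchCV(clf, param_dict, cv=cv_list, scoring=rf_score, n_jobs=4)
gs.fit(X, Y)

This way each fold, train or validation, is scored with its own sample_value, so a k-fold split or return_train_score=True would work too.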