I am trying to implement a manual f1_score function for evaluation, but the code never calls the manual_scoring function (no print output); instead it fails with the error:
"call() missing 1 required positional argument: 'y_true'"
If I remove feval, the code works fine.
def manual_scoring(y_hat, data):
    print("I am here")
    y_true = data.get_label()
    y_hat = np.argmax(y_hat, axis=1)  # multi-class classification problem
    return 'f1', f1_score(y_true, y_hat), True

model = lgb.train(
    params=lgb_params.copy(),
    train_set=lgb_model,
    valid_sets=[lgb_model, lgb_val],
    valid_names=['Train', 'Validation'],
    verbose_eval=100,
    feval=manual_scoring,
    num_boost_round=99999,
    early_stopping_rounds=100
)
The answer I found: the predictions need to be reshaped.
def manual_scoring(preds, dtrain):
    labels = dtrain.get_label()
    preds = preds.reshape(-1, 4)  # I should have reshaped preds (4 classes here)
    preds = preds.argmax(axis=1)
    f_score = f1_score(labels, preds, average='macro')
    return 'f1_score', f_score, True
From the LightGBM documentation for the feval parameter of lgb.train:

feval (callable or None, optional (default=None)) – Customized evaluation function. Should accept two parameters: preds, train_data. For multi-class task, the preds is group by class_id first, then group by row_id. If you want to get i-th row preds in j-th class, the access way is preds[j * num_data + i]. Note: should return (eval_name, eval_result, is_higher_better) or list of such tuples. To ignore the default metric corresponding to the used objective, set the metric parameter to the string "None" in params.
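Here is a minimal sketch of the indexing that excerpt describes, using made-up num_data and num_class values (not from the question's data): it rebuilds a (num_rows, num_classes) matrix from the flat, class-grouped preds array that older LightGBM releases pass to feval (newer releases may already pass a 2-D array for multi-class tasks).

import numpy as np

# Hypothetical sizes, just to illustrate the layout.
num_data, num_class = 3, 4

# Flat preds as described in the docs: grouped by class first,
# so flat_preds[j * num_data + i] is the prediction for row i, class j.
flat_preds = np.arange(num_data * num_class, dtype=float)

# Rebuild a (num_data, num_class) matrix from that class-major layout.
preds_2d = flat_preds.reshape(num_class, num_data).T

i, j = 2, 1
assert preds_2d[i, j] == flat_preds[j * num_data + i]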
Since this is a multi-class classification problem, the predictions have to be reshaped so the output has the same shape as model.predict_proba(), i.e. (num_rows, num_classes).
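For reference, here is a self-contained sketch of a custom F1 feval wired into lgb.train. The synthetic dataset, parameter values, and 4-class setup are made up for illustration; the feval handles both the 2-D preds that newer LightGBM releases pass for multi-class tasks and the flat 1-D preds of older releases, reshaped according to the class-grouped layout quoted above.

import numpy as np
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

NUM_CLASS = 4  # made-up number of classes for this sketch

def manual_scoring(preds, data):
    y_true = data.get_label().astype(int)
    if preds.ndim == 1:
        # Older LightGBM: flat preds grouped by class first (see the doc
        # excerpt above), so rebuild a (num_rows, num_classes) matrix.
        preds = preds.reshape(NUM_CLASS, -1).T
    y_pred = preds.argmax(axis=1)
    return 'f1_score', f1_score(y_true, y_pred, average='macro'), True

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=NUM_CLASS, random_state=42)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

dtrain = lgb.Dataset(X_tr, label=y_tr)
dval = lgb.Dataset(X_val, label=y_val, reference=dtrain)

params = {
    'objective': 'multiclass',
    'num_class': NUM_CLASS,
    'metric': 'None',  # ignore the default metric, as the docs suggest
}

model = lgb.train(
    params,
    train_set=dtrain,
    valid_sets=[dtrain, dval],
    valid_names=['Train', 'Validation'],
    feval=manual_scoring,
    num_boost_round=500,
    callbacks=[lgb.early_stopping(100), lgb.log_evaluation(100)],
)

Newer LightGBM versions moved verbose_eval and early_stopping_rounds into callbacks (lgb.log_evaluation and lgb.early_stopping), which is why this sketch passes them that way; on older versions the keyword arguments from the question still work.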