When I run LGBM with early stopping, it reports the scores for its best iteration, but when I try to reproduce those scores myself I get different numbers.
import lightgbm as lgb
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import KFold

data = load_breast_cancer()
X = pd.DataFrame(data.data)
y = pd.Series(data.target)

lgb_params = {'boosting_type': 'dart', 'random_state': 42}
folds = KFold(5)

for train_idx, val_idx in folds.split(X):
    X_train, X_valid = X.iloc[train_idx], X.iloc[val_idx]
    y_train, y_valid = y.iloc[train_idx], y.iloc[val_idx]

    # Train with early stopping against the validation fold
    model = lgb.LGBMRegressor(**lgb_params, n_estimators=10000, n_jobs=-1)
    model.fit(X_train, y_train,
              eval_set=[(X_valid, y_valid)],
              eval_metric='mae', verbose=-1, early_stopping_rounds=200)

    # Recompute the validation MAE myself
    y_pred_valid = model.predict(X_valid)
    print(mean_absolute_error(y_valid, y_pred_valid))
I was expecting that the reported valid_0's l1: 0.123608 would match my own calculation from mean_absolute_error, but it doesn't. Here is the top part of my output:
Training until validation scores don't improve for 200 rounds.
Early stopping, best iteration is:
[631] valid_0's l2: 0.0515033 valid_0's l1: 0.123608
0.16287265537021847
I'm using version '2.2.1' of lightgbm.
If you update your LGBM version, you will get

UserWarning: Early stopping is not available in dart mode

Please refer to this issue for the details. As far as I understand, dart drops and re-weights earlier trees on later iterations, so the model that scored 0.123608 at iteration 631 no longer exists by the end of training, and predict() cannot roll back to it. What you can do is retrain a model using the best number of boosting rounds:
# evals_result_ keeps the validation metric for every boosting round
results = model.evals_result_['valid_0']['l1']
best_perf = min(results)
# index() is 0-based, hence num_boost + 1 estimators below
num_boost = results.index(best_perf)
print('with boost', num_boost, 'perf', best_perf)

model = lgb.LGBMRegressor(**lgb_params, n_estimators=num_boost + 1, n_jobs=-1)
model.fit(X_train, y_train, verbose=-1)
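To check that this actually closes the gap, you can score the retrained model on the same validation fold. A minimal sketch, reusing X_valid, y_valid and mean_absolute_error from the question, and assuming the fixed random_state makes the dart run reproducible so the first num_boost + 1 iterations replay the original training:

# Sketch: with the same random_state, retraining on num_boost + 1 rounds
# should rebuild (approximately) the best-iteration model, so this MAE
# should now be close to the reported valid_0's l1
y_pred_valid = model.predict(X_valid)
print(mean_absolute_error(y_valid, y_pred_valid))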