Tags: python, machine-learning, xgboost

Setting Tol for XGBoost Early Stopping


I am using XGBoost with early stopping. After about 1000 epochs, the model is still improving, but the magnitude of the improvement is very small. My training call looks like this:

    # stops when the eval metric has not improved for 10 consecutive rounds
    clf = xgb.train(params, dtrain, num_boost_round=num_rounds, evals=watchlist, early_stopping_rounds=10)

Is it possible to set a "tol" for early stopping, i.e. a minimum level of improvement that is required to avoid triggering early stopping?

tol is a common parameter in scikit-learn models, such as MLPClassifier (illustrated below) and QuadraticDiscriminantAnalysis.
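For illustration, here is the scikit-learn behavior I mean (the values are arbitrary; MLPClassifier pairs tol with n_iter_no_change):

    from sklearn.neural_network import MLPClassifier

    # training stops once the loss improves by less than tol
    # for n_iter_no_change consecutive epochs
    clf = MLPClassifier(tol=1e-4, n_iter_no_change=10)

Thank you.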


Solution

  • I do not think that there is a tol parameter in xgboost, but you can set early_stopping_rounds higher. This parameter means that if the performance on the evaluation set does not improve for early_stopping_rounds consecutive rounds, training stops. If you know that after 1000 epochs your model is still improving, but only very slowly, set early_stopping_rounds to 50, for example, so that it is more tolerant of small changes in performance; see the sketch below for a stricter, tol-style alternative.
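If you need a true tolerance rather than just a longer patience window, one option is a custom callback. Below is a minimal sketch, assuming XGBoost 1.3+ (which provides the xgboost.callback.TrainingCallback base class) and the params, dtrain, num_rounds, and watchlist from the question; the class name ToleranceEarlyStopping and its tol argument are invented for this example, and data_name / metric_name must match one of your watchlist labels and your configured eval metric.

    import xgboost as xgb

    class ToleranceEarlyStopping(xgb.callback.TrainingCallback):
        # Stop when the eval metric fails to improve by more than `tol`
        # for `rounds` consecutive iterations (assumes lower is better).
        def __init__(self, rounds, tol, data_name, metric_name):
            super().__init__()
            self.rounds = rounds
            self.tol = tol
            self.data_name = data_name
            self.metric_name = metric_name
            self.best = float("inf")
            self.stagnant = 0

        def after_iteration(self, model, epoch, evals_log):
            score = evals_log[self.data_name][self.metric_name][-1]
            if self.best - score > self.tol:  # improved by more than tol
                self.best = score
                self.stagnant = 0
            else:
                self.stagnant += 1
            return self.stagnant >= self.rounds  # returning True stops training

    # "eval" and "rmse" are assumptions -- use the label from your own
    # watchlist (e.g. [(dtest, "eval")]) and your params["eval_metric"].
    clf = xgb.train(params, dtrain, num_boost_round=num_rounds,
                    evals=watchlist,
                    callbacks=[ToleranceEarlyStopping(rounds=10, tol=1e-4,
                                                      data_name="eval",
                                                      metric_name="rmse")])

Note that recent XGBoost releases (1.7+) also expose a min_delta argument on the built-in xgboost.callback.EarlyStopping callback, which serves exactly this purpose if your version has it.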