Basic Info

lgbm.train() with early_stopping calculates the objective function & feval scores after each boosting round, and we can make it print those scores every verbose_eval rounds, like so:
bst = lgbm.train(**params)
[10] valid_0's binary_logloss: 0.215654 valid_0's BinaryError: 0.00775126
[20] valid_0's binary_logloss: 0.303113 valid_0's BinaryError: 0.00790619
[30] valid_0's binary_logloss: 0.358056 valid_0's BinaryError: 0.0838744
[40] valid_0's binary_logloss: 0.386763 valid_0's BinaryError: 0.138462
[50] valid_0's binary_logloss: 0.411467 valid_0's BinaryError: 0.176986
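(For context, here is a minimal, self-contained sketch that produces per-round logs like the above. The data is synthetic and the keywords assume the pre-4.0 lgbm.train() signature:)

import numpy as np
import lightgbm as lgbm

# Synthetic stand-in data, purely illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] > 0).astype(int)
train_set = lgbm.Dataset(X[:800], label=y[:800])
valid_set = lgbm.Dataset(X[800:], label=y[800:], reference=train_set)

def binary_error(preds, data):
    # Custom feval: returns (eval_name, eval_result, is_higher_better).
    labels = data.get_label()
    return 'BinaryError', float(np.mean((preds > 0.5) != labels)), False

bst = lgbm.train(
    {'objective': 'binary', 'metric': 'binary_logloss'},
    train_set,
    num_boost_round=50,
    valid_sets=[valid_set],
    feval=binary_error,
    early_stopping_rounds=10,  # pre-4.0 keyword; 4.0+ uses callbacks instead
    verbose_eval=10,           # print the scores every 10 rounds
)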
Question:
Is there any way to access a list of these scores for each boosting round?
The closest thing I can find in the documentation & bst.__dict__ is bst.best_score:
defaultdict(collections.OrderedDict,
{'valid_0': OrderedDict([('binary_logloss', 0.4233895131745753),
('BinaryError', 0.194285077972568)])})
You can use the evals_result parameter as follows:

evals_result = {}
bst = lgbm.train(evals_result=evals_result, valid_sets=[valid_set, train_set],
                 valid_names=['valid', 'train'], **params)
evals_result
>>> {'train': {'logloss': ['0.36483', '0.32617', ...]}, 'valid': {'logloss': ['0.479168', '0.317850', ...]}}
You will have a dictionary for both train and valid set scores for each boosting round.
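Each metric list has one entry per boosting round, so for example evals_result['valid']['logloss'][9] is the round-10 validation score (the key names match the metric names printed in the log).

Note that in LightGBM 4.0+ the evals_result and verbose_eval keywords were removed from lgbm.train(); the equivalent there is the callback API. A sketch, reusing the hypothetical train_set/valid_set from above:

evals_result = {}
bst = lgbm.train(
    {'objective': 'binary', 'metric': 'binary_logloss'},
    train_set,
    num_boost_round=50,
    valid_sets=[valid_set, train_set],
    valid_names=['valid', 'train'],
    callbacks=[
        lgbm.record_evaluation(evals_result),    # fills evals_result each round
        lgbm.log_evaluation(period=10),          # replaces verbose_eval=10
        lgbm.early_stopping(stopping_rounds=10), # replaces early_stopping_rounds
    ],
)

# One float per boosting round:
print(evals_result['valid']['binary_logloss'])

If matplotlib is installed, lgbm.plot_metric(evals_result) can plot these per-round curves directly.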