Referring to lightgbm.cv, there are two parameters that confuse me: metrics and feval. Based on my limited knowledge of GBDT, evaluation metrics and evaluation functions both compute a score, such as AUC, from a vector of predictions and a vector of true labels. However, metrics and feval sound like they handle different tasks. In lightgbm.cv(params, metrics='auc', feval='ks'), will feval='ks' override metrics='auc'?
To start with, the general notions of a metric and a function are not at all different: from a mathematical viewpoint, a metric is a function (Wikipedia entry). And although the notion of a metric here is broader, the argument still holds.
More specifically regarding your question, from the documentation page you have linked:
- metrics (string, list of strings or None, optional (default=None)) – Evaluation metrics to be monitored while CV. If not None, the metric in params will be overridden.
- feval (callable or None, optional (default=None)) – Custom evaluation function.
Notice 1) the plural metrics, which can be a list of strings, and 2) the term custom in feval.
To make a long story short:
You can indeed use more than one of the available metrics in the metrics argument; your example should be:
lightgbm.cv(params, metrics=['auc', 'ks'])
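Here is a minimal runnable sketch of monitoring several built-in metrics at once. The toy data, the params dict, and the choice of 'binary_logloss' as the second metric are illustrative assumptions, not taken from your example; a name such as 'ks' would only work if it is among LightGBM's built-in metric names.

```python
import numpy as np
import lightgbm as lgb

# Toy binary-classification data (illustrative only).
X = np.random.rand(500, 10)
y = (X[:, 0] > 0.5).astype(int)
train_set = lgb.Dataset(X, label=y)

params = {'objective': 'binary', 'verbosity': -1}

cv_results = lgb.cv(
    params,
    train_set,
    num_boost_round=50,
    nfold=3,
    metrics=['auc', 'binary_logloss'],  # both metrics are tracked every round
)

# Keys such as 'auc-mean' / 'binary_logloss-mean' (exact naming varies by
# LightGBM version) show that every listed metric is reported.
print(cv_results.keys())
```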
feval should only be used if, in addition to whatever metrics you may use from the readily available ones, you also want a custom metric that you have defined yourself; see an example here, where metric='auc' and feval=my_err_rate are used simultaneously, after my_err_rate has been defined.
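Here is a minimal sketch in the spirit of that example, combining a built-in metric with a custom one. The name my_err_rate and the toy data are illustrative, not part of LightGBM itself; the custom function follows the feval contract of taking (preds, dataset) and returning (metric_name, metric_value, is_higher_better).

```python
import numpy as np
import lightgbm as lgb

# Toy binary-classification data (illustrative only).
X = np.random.rand(500, 10)
y = (X[:, 0] > 0.5).astype(int)
train_set = lgb.Dataset(X, label=y)

params = {'objective': 'binary', 'verbosity': -1}

def my_err_rate(preds, train_data):
    # Custom evaluation function: LightGBM calls it as feval(preds, dataset)
    # and expects (metric_name, metric_value, is_higher_better) back.
    # With the built-in binary objective, preds are predicted probabilities.
    labels = train_data.get_label()
    return 'err_rate', np.mean((preds > 0.5).astype(int) != labels), False

cv_results = lgb.cv(
    params,
    train_set,
    num_boost_round=50,
    nfold=3,
    metrics='auc',      # built-in metric
    feval=my_err_rate,  # custom metric, evaluated alongside 'auc'
)
```

The custom metric does not override the built-in one; both are computed on every boosting round.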