
Do Sklearn's SAG and lbfgs penalize the intercept?


We know that penalizing the intercept in sklearn's implementation is a "design mistake" that we have to deal with. One workaround is to set intercept_scaling to a very large number, per the documentation:

Note! the synthetic feature weight is subject to l1/l2 regularization as all other features. To lessen the effect of regularization on synthetic feature weight (and therefore on the intercept) intercept_scaling has to be increased.

However, the same documentation says that this parameter is useful only when solver='liblinear'.
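To make the workaround concrete, here is a small sketch on synthetic, imbalanced data (the data and variable names are my own, not from the question): with strong regularization and the default intercept_scaling=1, liblinear shrinks the intercept toward zero, while a very large intercept_scaling makes the synthetic feature's weight, and therefore its penalty, negligible, recovering an essentially unpenalized intercept.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (rng.random(1000) < 0.9).astype(int)  # ~90% class 1: true intercept ≈ log(0.9/0.1) ≈ 2.2

# Default scaling: the synthetic intercept feature is regularized like
# any other coefficient, so with tiny C the intercept is shrunk toward zero.
penalized = LogisticRegression(solver="liblinear", C=1e-4).fit(X, y)

# Huge intercept_scaling: the synthetic feature's weight becomes tiny, so the
# penalty it feels is negligible and the intercept is close to its unpenalized value.
workaround = LogisticRegression(
    solver="liblinear", C=1e-4, intercept_scaling=1e6
).fit(X, y)

print(penalized.intercept_[0], workaround.intercept_[0])
```

With this seed the first intercept comes out near zero and the second near 2.2, which is what the documentation's note about the synthetic feature weight predicts.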

My question:

Do other solvers penalize the intercept? I tried to look at the source and I think they don't, but I am not sure, and I couldn't find a clear answer anywhere.


Solution

  • The only solver of LogisticRegression that penalizes the intercept is "liblinear".

    See the description of intercept_scaling in the official LogisticRegression documentation: it is useful only when the solver 'liblinear' is used, precisely because liblinear is the solver that regularizes the (synthetic) intercept feature.
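You can also verify this empirically rather than by reading the source. A sketch under my own synthetic setup: with very strong regularization on imbalanced data, a penalized intercept is shrunk toward zero, so comparing liblinear against lbfgs makes the difference obvious.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (rng.random(1000) < 0.9).astype(int)  # ~90% class 1, so the unpenalized intercept is large

# Very strong L2 regularization (tiny C) exaggerates any penalty on the intercept.
lbfgs = LogisticRegression(solver="lbfgs", C=1e-4).fit(X, y)
liblinear = LogisticRegression(solver="liblinear", C=1e-4).fit(X, y)

# lbfgs leaves the intercept unpenalized: it stays near log(0.9/0.1) ≈ 2.2.
# liblinear penalizes it: it is shrunk close to zero.
print(lbfgs.intercept_[0], liblinear.intercept_[0])
```

The same check works for the other non-liblinear solvers ("newton-cg", "sag", "saga"): their fitted intercepts agree with lbfgs, not with liblinear.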