Tags: python-2.7, machine-learning, scikit-learn, logistic-regression, regularized

How to perform an unregularized logistic regression using scikit-learn?


From scikit-learn's documentation, the default penalty is "l2" and the default C (the inverse of the regularization strength) is 1.0. If I keep the settings penalty='l2' and C=1.0, does that mean the training algorithm is an unregularized logistic regression? And conversely, is it a regularized logistic regression classifier whenever C is anything other than 1.0?


Solution

  • No, it's not like that.

    Let's have a look at the definitions within sklearn's user-guide:

    The L2-penalized objective that LogisticRegression minimizes (from the user guide) is:

        min over w, c of:  (1/2) wᵀw  +  C · Σᵢ log(exp(−yᵢ(Xᵢᵀw + c)) + 1)

    We see:

    • C multiplies the loss term, while the left term (the regularization penalty) carries no coefficient at all

    This means:

    • Without modifying the code you can never switch off the regularization completely
    • But: you can approximately switch off the regularization by setting C to a huge number!
      • Since the optimizer minimizes the sum of the regularization penalty and C times the loss, increasing C shrinks the relative weight of the regularization penalty
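A small sketch of this effect (on a synthetic dataset, chosen only for illustration): with a small C the L2 penalty dominates and the fitted weights are pulled toward zero, while a huge C makes the penalty nearly irrelevant and approximates an unregularized fit.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data, just for illustration
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Strong regularization: small C
strong = LogisticRegression(C=0.01, max_iter=1000).fit(X, y)

# Approximately unregularized: huge C
almost_none = LogisticRegression(C=1e10, max_iter=1000).fit(X, y)

# The weaker the regularization, the larger the fitted weights can grow
print(np.linalg.norm(strong.coef_))       # smaller norm
print(np.linalg.norm(almost_none.coef_))  # larger norm
```

The coefficient norms differ noticeably between the two fits, which is exactly the penalty term at work; as C → ∞ the solution converges to the unregularized maximum-likelihood estimate.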