From scikit-learn's documentation, the default penalty is "l2", and C (inverse of regularization strength) is "1". If I keep this setting penalty='l2' and C=1.0, does it mean the training algorithm is an unregularized logistic regression? In contrast, when C is anything other than 1.0, then it's a regularized logistic regression classifier?
No, it's not like that.
Let's have a look at the definition in sklearn's user guide. With `penalty='l2'`, the solver minimizes

    min_{w, c}  (1/2) wᵀw + C · Σᵢ log(exp(-yᵢ(Xᵢᵀw + c)) + 1)

Here `C` is multiplied with the loss term, while the left term (the l2 regularization penalty) is untouched. So increasing `C` decreases the relevance of the regularization penalty relative to the loss, but no finite value of `C` disables it: `C=1.0` is still a regularized fit. To approximate an unregularized logistic regression, set `C` to a huge number!
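A minimal sketch of this behavior (the toy dataset and the specific `C` values are illustrative assumptions, not from the question): with the default `C=1.0` the l2 penalty shrinks the coefficients, while a huge `C` makes the penalty negligible and the coefficients grow toward the unregularized fit.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data; random_state fixed for reproducibility.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Default settings: penalty='l2', C=1.0 -> still regularized.
clf_default = LogisticRegression(C=1.0, max_iter=1000).fit(X, y)

# Huge C -> the l2 penalty becomes negligible relative to the loss,
# approximating an unregularized logistic regression.
clf_huge_c = LogisticRegression(C=1e10, max_iter=1000).fit(X, y)

norm_default = np.linalg.norm(clf_default.coef_)
norm_huge_c = np.linalg.norm(clf_huge_c.coef_)

# Shrinkage from the penalty makes the default coefficients smaller.
print(norm_default < norm_huge_c)
```

On this data the comparison prints `True`: the default fit's coefficient norm is smaller because the l2 penalty is still active at `C=1.0`.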