I was trying to compare a plain logistic regression model against ensemble models (bagging and boosting) that use logistic regression as their base estimator. But, surprisingly, I got exactly the same score for all three classifiers:
LogisticRegression()
BaggingClassifier(base_estimator=LogisticRegression())
AdaBoostClassifier(base_estimator=LogisticRegression())
Here is my code:
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier

# Baseline: a single logistic regression
lr = LogisticRegression()
print(lr.fit(x_train, y_train).score(x_test, y_test))
# Ensemble of 50 logistic regressions fit on bootstrap resamples
bagging_clf = BaggingClassifier(base_estimator=LogisticRegression(), n_estimators=50, bootstrap=True)
print(bagging_clf.fit(x_train, y_train).score(x_test, y_test))
# Ensemble of 50 logistic regressions fit on reweighted samples
adaboost_clf = AdaBoostClassifier(base_estimator=LogisticRegression(), learning_rate=1, n_estimators=50)
print(adaboost_clf.fit(x_train, y_train).score(x_test, y_test))
The score is 0.9063627039010026 for all classifiers.
Bagging and boosting help most with very overfit (high-variance) and very underfit (high-bias) base models, respectively. Logistic regression is a stable, low-variance learner, so applying either ensemble to it is unlikely to have a dramatic effect. You probably do get some changes, but you're only reporting the score, which by default is the accuracy; if your test set is smallish, accuracy can only take a limited number of distinct values (at most len(y_test) + 1), so small differences between the models may not show up at all.
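If you want to see those differences, compare something finer-grained than accuracy, such as the predicted probabilities or the log loss. Here is a minimal sketch; the make_classification dataset is just a stand-in for your own x_train/x_test split:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.metrics import log_loss

# Synthetic stand-in data; substitute your own train/test split.
X, y = make_classification(n_samples=2000, random_state=0)
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The first positional argument is the base estimator
# (named base_estimator in older scikit-learn, estimator in >= 1.2).
models = {
    "logistic": LogisticRegression(),
    "bagging": BaggingClassifier(LogisticRegression(), n_estimators=50),
    "adaboost": AdaBoostClassifier(LogisticRegression(), n_estimators=50),
}
for name, model in models.items():
    model.fit(x_train, y_train)
    # Accuracy can take at most len(y_test) + 1 distinct values, so it
    # often hides small differences; log loss uses the probabilities.
    print(name,
          "accuracy:", model.score(x_test, y_test),
          "log loss:", log_loss(y_test, model.predict_proba(x_test)))

The accuracies may well coincide while the log losses differ slightly, which is exactly the effect described above.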