Tags: python, scikit-learn, logistic-regression, statsmodels

Sklearn and StatsModels give very different logistic regression answers


I am doing logistic regression on boolean 0/1 data (predicting the probability that a given age comes with a salary over some threshold), and I am getting very different results from sklearn and StatsModels, where sklearn's answer is clearly wrong.

I have set sklearn's penalty to None (so both libraries fit an unpenalized maximum-likelihood model) and fit_intercept to False (the constant column from PolynomialFeatures already plays the role of the intercept), but I still can't see how to make sklearn give a sensible answer.

The grey lines are the original data points at 0 or 1; I just scaled the 1s down to 0.1 on the plot so they stay visible.

Variables:

# X and Y
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = df.age.values.reshape(-1, 1)   # df holds the age/wage data (not shown)
X_poly = PolynomialFeatures(degree=4).fit_transform(X)  # columns: 1, age, age^2, age^3, age^4
y_bool = np.array(df.wage.values > 250, dtype="int")

# Generate a sequence of ages for plotting the fitted curve
age_grid = np.arange(X.min(), X.max()).reshape(-1, 1)
age_grid_poly = PolynomialFeatures(degree=4).fit_transform(age_grid)

The code is the following:

# sklearn Model
from sklearn.linear_model import LogisticRegression
import matplotlib.pyplot as plt

clf = LogisticRegression(penalty=None, fit_intercept=False,
                         max_iter=300).fit(X=X_poly, y=y_bool)
preds = clf.predict_proba(age_grid_poly)

# Plot
fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(X, y_bool / 10, s=30, c='grey', marker='|', alpha=0.7)
plt.plot(age_grid, preds[:, 1], color='r', alpha=1)
plt.xlabel('Age')
plt.ylabel('Wage')
plt.show()

[sklearn result plot]
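One thing worth checking with the sklearn fit is whether the solver actually converged; lbfgs can stall on badly scaled features. A minimal check using sklearn's documented n_iter_ attribute and ConvergenceWarning:

import warnings
from sklearn.exceptions import ConvergenceWarning

# Refit while recording warnings, so a ConvergenceWarning emitted during fit is captured
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    clf = LogisticRegression(penalty=None, fit_intercept=False,
                             max_iter=300).fit(X_poly, y_bool)

print(clf.n_iter_)  # iterations used; hitting max_iter suggests non-convergence
if any(issubclass(w.category, ConvergenceWarning) for w in caught):
    print("lbfgs did not converge within max_iter=300")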

# StatsModels
import statsmodels.api as sm

log_reg = sm.Logit(y_bool, X_poly).fit()  # X_poly already contains the constant column
preds = log_reg.predict(age_grid_poly)

# Plot
fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(X, y_bool / 10, s=30, c='grey', marker='|', alpha=0.7)
plt.plot(age_grid, preds, color='r', alpha=1)
plt.xlabel('Age')
plt.ylabel('Wage')
plt.show()

[StatsModels result plot]
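For reference, the two fits can also be compared coefficient by coefficient (params is StatsModels' documented attribute, coef_ is sklearn's), which makes the disagreement obvious:

import numpy as np

# One row per polynomial term: [StatsModels estimate, sklearn estimate]
print(np.column_stack([log_reg.params, clf.coef_.ravel()]))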


Solution

  • This appears to be because sklearn's solver (lbfgs by default) is very scale-sensitive, and the degree-4 polynomial terms are huge (age^4 runs into the millions), so the optimizer struggles to converge. By scaling the data first, I get qualitatively the same result as StatsModels (see the sketch after the note below).

    # sklearn Model
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import Pipeline

    clf = Pipeline([
        ('scale', StandardScaler()),
        # penalty=None requires sklearn >= 1.2; on older versions use penalty='none'
        ('lr', LogisticRegression(penalty=None, fit_intercept=True, max_iter=1000)),
    ]).fit(X=X_poly, y=y_bool)
    preds = clf.predict_proba(age_grid_poly)

    # Plot
    fig, ax = plt.subplots(figsize=(8, 6))
    ax.scatter(X, y_bool / 10, s=30, c='grey', marker='|', alpha=0.7)
    plt.plot(age_grid, preds[:, 1], color='r', alpha=1)
    plt.xlabel('Age')
    plt.ylabel('Wage')
    plt.show()
    

    Note that we need to set fit_intercept=True here, because the StandardScaler wipes out the constant column coming from PolynomialFeatures: a column of ones has zero variance, so after centering it becomes all zeros and can no longer act as the intercept.
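
    To make both points concrete, here is a minimal sketch with synthetic ages (the real df isn't shown in the question) showing how wildly the degree-4 columns differ in scale, and how StandardScaler zeroes out the bias column:

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler

    ages = np.arange(18, 81).reshape(-1, 1)   # synthetic ages 18..80
    ages_poly = PolynomialFeatures(degree=4).fit_transform(ages)

    # Columns are [1, age, age^2, age^3, age^4]; scales span ~7 orders of magnitude
    print(ages_poly.max(axis=0))              # [1, 80, 6400, 512000, 40960000]

    scaled = StandardScaler().fit_transform(ages_poly)
    print(np.unique(scaled[:, 0]))            # [0.] -- the constant column is now all zeros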