python · scikit-learn · regression · mse

Multivariable linear regression doesn't get more accurate with higher polynomial degree?


I'm computing the MSE on the training set, so I expect the MSE to decrease when using higher-degree polynomials. However, from degree 4 to 5, the MSE increases significantly. What could be the cause?

import pandas as pd, numpy as np
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt

path = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/automobileEDA.csv"
df = pd.read_csv(path)
r=[]
max_degrees = 10

y = df['price'].astype('float')
x = df[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']].astype('float')

for i in range(1, max_degrees + 1):
    steps = [('scale', StandardScaler()),
             ('polynomial', PolynomialFeatures(degree=i)),
             ('model', LinearRegression())]
    pipe = Pipeline(steps)
    pipe.fit(x, y)
    yhat = pipe.predict(x)
    mse = mean_squared_error(y, yhat)  # y_true first, then y_pred
    r.append(mse)
    print(f"MSE for MLR of degree {i} = {round(mse / 1e6, 1)}")

plt.figure(figsize=(10, 3))
plt.plot(list(range(1, max_degrees + 1)), r)
plt.xlabel("polynomial degree")
plt.ylabel("training MSE")
plt.show()

Results:

[plot of training MSE against polynomial degree: the MSE decreases up to degree 4, spikes sharply at degree 5, then falls again for higher degrees]


Solution

  • Originally, you have roughly 200 observations in y and 4 features (columns) in x, which the pipeline scales and then expands into polynomial features.

    With n = 4 input features, PolynomialFeatures(degree=d) produces C(n + d, d) columns (including the bias term): 70 at degree 4, 126 at degree 5, and 210 at degree 6. The feature count therefore grows rapidly with the degree and soon rivals, then exceeds, the number of observations.

    When the number of features approaches or exceeds the number of observations, the design matrix becomes severely ill-conditioned (and rank-deficient once features outnumber rows), so ordinary least squares is no longer well defined and should not be relied on. This can explain the sudden deterioration in fitting the training set when advancing from degree 4 to degree 5. For still higher degrees, it appears the solver was nevertheless able to find a solution that overfits the training data, which is why the training MSE drops back down.
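A quick sketch of the feature-count argument, run on synthetic data of the same shape as the question's x (200 rows, 4 columns). It checks the closed-form count C(n + d, d) against what PolynomialFeatures actually produces and reports the rank of the resulting design matrix; once the column count exceeds the row count, the matrix cannot have full column rank:

```python
from math import comb

import numpy as np
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
n_obs, n_feat = 200, 4          # same shape as the question's x
X = rng.normal(size=(n_obs, n_feat))

for d in range(1, 7):
    Xp = PolynomialFeatures(degree=d).fit_transform(X)
    n_cols = Xp.shape[1]
    # Closed form: monomials of total degree <= d in n_feat variables,
    # including the constant (bias) column added by default.
    assert n_cols == comb(n_feat + d, d)
    rank = np.linalg.matrix_rank(Xp)
    print(f"degree {d}: {n_cols} features, design-matrix rank {rank}")
```

At degree 6 the transform yields 210 columns but only 200 rows, so the rank is capped at 200 and the normal equations are singular; well before that point, the columns are already highly collinear, which is enough to destabilize a plain least-squares fit.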