
OLS fit for python with coefficient error and transformed target


There seem to be two methods for OLS fits in Python: the sklearn one and the statsmodels one. I prefer the statsmodels one because it gives the error on the coefficients via the summary() function. However, I would like to use TransformedTargetRegressor from sklearn to log-transform my target. It seems I have to choose between getting the error on my fit coefficients in statsmodels and being able to transform my target in sklearn. Is there a good way to do both of these at the same time in either system?

In statsmodels it would be done like this:

import statsmodels.api as sm
X = sm.add_constant(X)
ols = sm.OLS(y, X)
ols_result = ols.fit()
print(ols_result.summary())

This returns the fit with the coefficients and the errors on them.

For sklearn you can use TransformedTargetRegressor:

import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression
regr = TransformedTargetRegressor(regressor=LinearRegression(), func=np.log1p, inverse_func=np.expm1)
regr.fit(X, y)
print('Coefficients: \n', regr.regressor_.coef_)

But there is no way to get the error on the coefficients without calculating them yourself. Is there a good way to get the best of both worlds?
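If you do want to calculate them yourself, one possible sketch: recover the standard errors from a fitted sklearn LinearRegression using the classical OLS formula Var(beta) = sigma^2 (X'X)^-1 on the design matrix with an intercept column. The data here is synthetic and the variable names are illustrative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data: y = 2.0 + 1.5*x1 - 0.7*x2 + noise
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = 2.0 + X @ np.array([1.5, -0.7]) + rng.normal(scale=0.2, size=200)

reg = LinearRegression().fit(X, y)

# Design matrix with an explicit intercept column
Xd = np.column_stack([np.ones(len(X)), X])
# Residual variance with degrees-of-freedom correction
resid = y - reg.predict(X)
dof = Xd.shape[0] - Xd.shape[1]
sigma2 = resid @ resid / dof
# Standard errors: sqrt of the diagonal of sigma^2 * (X'X)^-1
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(Xd.T @ Xd)))
print(se)  # standard errors for [intercept, coef_1, coef_2]
```

These should match the "std err" column that statsmodels' summary() prints for the same data.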

EDIT

I found a good example for the special case I care about here:

https://web.archive.org/web/20160322085813/http://www.ats.ucla.edu/stat/mult_pkg/faq/general/log_transformed_regression.htm


Solution

  • Just to add a lengthy comment here, I believe that TransformedTargetRegressor does not do what you think it does. As far as I can tell, the inverse transformation function is only applied when the predict method is called. It does not express the coefficients in units of the untransformed outcome.

    Example:
    import pandas as pd
    import statsmodels.api as sm
    
    from sklearn.compose import TransformedTargetRegressor
    from sklearn.linear_model import LinearRegression
    import numpy as np
    from sklearn import datasets
    
    # create some sample data:
    df = pd.DataFrame(datasets.load_iris().data)
    df.columns = datasets.load_iris().feature_names
    
    X = df.loc[:,['sepal length (cm)', 'sepal width (cm)']]
    y = df.loc[:, 'petal width (cm)']
    
    # Sklearn first:
    regr = TransformedTargetRegressor(regressor=LinearRegression(),func=np.log1p, inverse_func=np.expm1)
    regr.fit(X, y)
    
    print(regr.regressor_.intercept_)
    for coef in regr.regressor_.coef_:
        print(coef)
    #-0.45867804195769357
    # 0.3567583897503805
    # -0.2962942997303887
    
    # Statsmodels on the transformed outcome:
    X = sm.add_constant(X)
    ols_trans = sm.OLS(np.log1p(y), X).fit()
    print(ols_trans.params)
    
    #const               -0.458678
    #sepal length (cm)    0.356758
    #sepal width (cm)    -0.296294
    #dtype: float64
    

    You see that in both cases the coefficients are identical. That is, the regression with TransformedTargetRegressor yields the same coefficients as statsmodels.OLS on the transformed outcome. TransformedTargetRegressor does not back-translate the coefficients into the original untransformed space. Note that the coefficients would be non-linear in the original space unless the transformation itself is linear, in which case this is trivial (adding and multiplying by constants). This discussion points in a similar direction: back-transforming betas is infeasible in most cases.
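A small self-contained check of the claim above, on synthetic data: the fitted coefficients live on the log1p scale, and expm1 is only applied inside predict(). Reconstructing the prediction by hand from regressor_ coefficients and back-transforming gives the same values as predict():

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression

# Noiseless synthetic data so the log1p-scale model is exact
rng = np.random.default_rng(2)
X = rng.uniform(1, 5, size=(150, 2))
y = np.expm1(0.3 + X @ np.array([0.2, 0.1]))

regr = TransformedTargetRegressor(regressor=LinearRegression(),
                                  func=np.log1p, inverse_func=np.expm1)
regr.fit(X, y)

# Manual prediction: linear model on the log1p scale, then expm1 back
manual = np.expm1(regr.regressor_.intercept_ + X @ regr.regressor_.coef_)
print(np.allclose(manual, regr.predict(X)))
```

If the coefficients had been back-transformed into the original space, the manual reconstruction above would not need the expm1 wrapper.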

    What to do instead?

    If interpretation is your goal, I believe the closest you can get to what you wish to achieve is to use predicted values in which you vary the regressors or the coefficients. Let me give an example: if your goal is to state the effect on the untransformed outcome of a sepal length coefficient one standard error higher, you can create the predicted values as fitted, as well as the predicted values for the 1-sigma scenario (either by tampering with the coefficient, or by tampering with the corresponding column in X).

    Example:
    # Toy example to add one sigma to sepal length coefficient
    coeffs = ols_trans.params.copy()
    coeffs['sepal length (cm)'] +=  0.018 # this is one sigma
    
    
    # function to predict and translate predictions back:
    def get_predicted_backtransformed(coeffs, data, inv_func):
        return inv_func(data.dot(coeffs))
    
    # get standard predicted values, backtransformed:
    original = get_predicted_backtransformed(ols_trans.params, X, np.expm1)
    # get counterfactual predicted values, backtransformed:
    variant1 = get_predicted_backtransformed(coeffs, X, np.expm1)
    

    Then you can state, e.g., the mean shift in the untransformed outcome:

    variant1.mean()-original.mean()
    #0.2523083548367202