Tags: python, regression, least-squares

OLS regression in Python


I have some questions about multiple regression models in Python:

  1. Why is it necessary to prepend a “dummy intercept” vector of ones for the least-squares method (OLS)? (I am referring to the use of X = sm.add_constant(X).) I know that the least-squares method comes from setting a system of derivatives equal to zero. Is it computed with some iterative method that makes a “dummy intercept” necessary? Where can I find informative material about the details of the algorithm behind est = sm.OLS(y, X).fit()?

  2. As far as I understand, scale.fit_transform produces a normalization of the data. Usually a normalization does not produce values higher than 1. Why do I see values that exceed 1 once the data is scaled?

  3. Where can I find the official documentation for these Python functions?

Thanks in advance


Solution

    1. In OLS the function you are trying to fit is y = a1*x1 + a2*x2 + a3*x3 + c. If you don't include the c term, your fitted line will always pass through the origin. The constant c gives the line an extra degree of freedom: it can be offset from the origin by c.

    You can fit a line without the constant term and you will still get a set of coefficients. The dummy intercept is not required by any iterative computation; OLS has a closed-form solution (the normal equations), and add_constant simply lets the intercept be estimated as one more coefficient. Without it, the fit is forced through the origin, so it may not be the straight line that minimises the sum of squared residuals.
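    Here is a minimal sketch of that difference, using synthetic data (the variable names and numbers are illustrative, not from the question):

```python
# Sketch: the same regression with and without the intercept column.
# Data is synthetic; names (rng, X, y, etc.) are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # three explanatory variables
true_coefs = np.array([1.5, -0.7, 0.3])
y = 2.0 + X @ true_coefs + rng.normal(scale=0.5, size=100)  # true intercept 2.0

# With the "dummy intercept": add_constant prepends a column of ones,
# so the intercept c is estimated as the coefficient of that column.
est_with = sm.OLS(y, sm.add_constant(X)).fit()

# Without it, the fitted hyperplane is forced through the origin.
est_without = sm.OLS(y, X).fit()

print(est_with.params)      # first entry is the estimated intercept, close to 2.0
print(est_without.params)   # only three slopes, no offset is estimated
print(est_with.ssr, est_without.ssr)  # residual sum of squares
```

    Comparing est_with.ssr and est_without.ssr shows that the origin-constrained fit leaves a larger residual sum of squares whenever the true intercept is nonzero.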