Tags: python, machine-learning, linear-regression, data-science

Gradient Descent is not converging for very large values in a small dataset


I am trying to write a program to calculate the slope and intercept of a linear regression model, but when I run more than 10 iterations, the gradient descent function returns np.nan for both the intercept and the slope.

Below is my implementation:

def get_gradient_at_b(x, y, b, m):
    # Partial derivative of the mean squared error with respect to the intercept b
    N = len(x)
    diff = 0
    for i in range(N):
        x_val = x[i]
        y_val = y[i]
        diff += (y_val - ((m * x_val) + b))
    b_gradient = -(2 / N) * diff
    return b_gradient

def get_gradient_at_m(x, y, b, m):
    # Partial derivative of the mean squared error with respect to the slope m
    N = len(x)
    diff = 0
    for i in range(N):
        x_val = x[i]
        y_val = y[i]
        diff += x_val * (y_val - ((m * x_val) + b))
    m_gradient = -(2 / N) * diff
    return m_gradient

def step_gradient(b_current, m_current, x, y, learning_rate):
    # One gradient descent update for both parameters
    b_gradient = get_gradient_at_b(x, y, b_current, m_current)
    m_gradient = get_gradient_at_m(x, y, b_current, m_current)
    b = b_current - (learning_rate * b_gradient)
    m = m_current - (learning_rate * m_gradient)
    return [b, m]

def gradient_descent(x, y, learning_rate, num_iterations):
    # Run num_iterations update steps starting from b = m = 0
    b = 0
    m = 0
    for i in range(num_iterations):
        b, m = step_gradient(b, m, x, y, learning_rate)
    return [b, m]
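
For example, on a small made-up dataset (hypothetical numbers, chosen so that y = 2x + 1 exactly) it recovers the expected line:

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
print(gradient_descent(xs, ys, 0.01, 10000))
#result --> approximately [1.0, 2.0], i.e. b ≈ 1 and m ≈ 2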

I am running it on the following data:

a=[3.87656018e+11, 4.10320300e+11, 4.15730874e+11, 4.52699998e+11,
       4.62146799e+11, 4.78965491e+11, 5.08068952e+11, 5.99592902e+11,
       6.99688853e+11, 8.08901077e+11, 9.20316530e+11, 1.20111177e+12,
       1.18695276e+12, 1.32394030e+12, 1.65661707e+12, 1.82304993e+12,
       1.82763786e+12, 1.85672212e+12, 2.03912745e+12, 2.10239081e+12,
       2.27422971e+12, 2.60081824e+12]
b=[3.3469950e+10, 3.4784980e+10, 3.3218720e+10, 3.6822490e+10,
       4.4560290e+10, 4.3826720e+10, 5.2719430e+10, 6.3842550e+10,
       8.3535940e+10, 1.0309053e+11, 1.2641405e+11, 1.6313218e+11,
       1.8529536e+11, 1.7875143e+11, 2.4981555e+11, 3.0596392e+11,
       3.0040058e+11, 3.1440530e+11, 3.1033848e+11, 2.6229109e+11,
       2.7585243e+11, 3.0352616e+11]

print(gradient_descent(a, b, 0.01, 100))
#result --> [nan, nan]

When I run gradient_descent on a dataset with smaller values, it gives the correct answers. I was also able to obtain the intercept and slope for the above data with sklearn's LinearRegression (from sklearn.linear_model import LinearRegression).
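
For reference, that sklearn check looks roughly like this (LinearRegression expects a 2-D feature array, hence the reshape):

import numpy as np
from sklearn.linear_model import LinearRegression

X = np.asarray(a).reshape(-1, 1)  # shape (n_samples, 1)
reg = LinearRegression().fit(X, np.asarray(b))
print(reg.intercept_, reg.coef_[0])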

Any help in figuring out why the result is [nan, nan] instead of the correct intercept and slope would be appreciated.


Solution

  • You need to reduce the learning rate. Since the values in a and b are so large (>= 1e11), the gradients are enormous: the slope gradient involves sums of x * y terms, which are on the order of 1e23 here. With learning_rate = 0.01 every step overshoots the minimum by a growing margin, so the parameters blow up until they overflow to inf, and the subsequent inf - inf arithmetic yields nan. The learning rate needs to be approximately 1e-25 for gradient descent to converge on this data.

    intercept, slope = gradient_descent(a, b, 5e-25, 100)  # avoid reusing the name b, which holds the data
    print(intercept, slope)
    Out: -3.7387067636195266e-13 0.13854551291084335
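
    To see the overshoot concretely, here is a rough sketch using the functions from the question (the magnitudes are approximate for this data):

    g_m = get_gradient_at_m(a, b, 0, 0)  # slope gradient at the starting point
    print(g_m)         # roughly -5e23
    print(0.01 * g_m)  # ≈ -5e21, so m jumps to ~5e21 after one step

    With learning_rate = 0.01, each iteration amplifies the parameters by a factor of roughly 4e22, so within about a dozen iterations they overflow to inf and then become nan, matching the [nan, nan] output. A common alternative to such a tiny learning rate is to rescale x and y (for example, divide both by 1e11) before fitting, so that an ordinary learning rate converges.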