
Linear Regression with one variable


While implementing the gradient descent algorithm for linear regression, the predictions my algorithm makes and the resulting regression line come out wrong. Could anyone please have a look at my implementation and help me out? Also, how do I know what values of "learning rate" and "number of iterations" to choose for a specific regression problem?

import numpy as np
import matplotlib.pyplot as plt

# X and Y are NumPy arrays holding the training inputs and targets
theta0 = 0                                # first parameter
theta1 = 0                                # second parameter
alpha = 0.001                             # learning rate (denoted by alpha)
num_of_iterations = 100                   # total number of iterations performed by gradient descent
m = float(len(X))                         # total number of training examples

for i in range(num_of_iterations):
    y_predicted = theta0 + theta1 * X
    derivative_theta0 = (1/m) * np.sum(y_predicted - Y)
    derivative_theta1 = (1/m) * np.sum(X * (y_predicted - Y))
    temp0 = theta0 - alpha * derivative_theta0
    temp1 = theta1 - alpha * derivative_theta1
    theta0 = temp0                        # simultaneous update of both parameters
    theta1 = temp1
print(theta0, theta1)

y_predicted = theta0 + theta1 * X
plt.scatter(X, Y)
plt.plot(X, y_predicted, color='red')
plt.show()
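For reference, the loop above is meant to implement the standard batch gradient descent updates for the one-variable hypothesis $h_\theta(x) = \theta_0 + \theta_1 x$, i.e.

$$\theta_0 := \theta_0 - \alpha \cdot \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)$$

$$\theta_1 := \theta_1 - \alpha \cdot \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)}$$

with both parameters updated simultaneously, which the temp0/temp1 variables handle correctly.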

[Image: the resulting regression line, which clearly does not fit the data]


Solution

  • Your learning rate is too high; I got it working by reducing the learning rate to alpha = 0.0001. A sketch of how to diagnose this yourself follows below.
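To answer the second part of the question (how to choose the learning rate and number of iterations): track the cost J(theta0, theta1) after every update. If the cost ever increases, alpha is too high; once the cost stops changing between iterations, you have run enough of them. Below is a minimal sketch of this, assuming X and Y are the same NumPy arrays as in the question; the helper name fit is mine, not part of the original code.

import numpy as np

def fit(X, Y, alpha=0.0001, num_of_iterations=1000):
    # Batch gradient descent that records the cost J at every step
    m = float(len(X))
    theta0, theta1 = 0.0, 0.0
    costs = []
    for _ in range(num_of_iterations):
        error = (theta0 + theta1 * X) - Y                    # prediction error
        costs.append((1 / (2 * m)) * np.sum(error ** 2))     # J(theta0, theta1)
        theta0 -= alpha * (1 / m) * np.sum(error)            # simultaneous update:
        theta1 -= alpha * (1 / m) * np.sum(error * X)        # both use the same error
    return theta0, theta1, costs

Rule of thumb: if costs[-1] > costs[0], the run diverged and alpha should be reduced (e.g. halved); if the last few entries of costs are still falling noticeably, increase num_of_iterations.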