Tags: python, regression, gradient-descent

Error in Ridge Regression Gradient Descent (Python)


I am currently trying to implement Ridge Regression using Gradient Descent.

My code goes like this:

def gd_ridge(X, Y, beta, iter, learning_rate, lambda):
    m = X.shape[0]
    
    past_costs = []
    past_betas = [beta]
    
    for i in range(iter):
        pred = np.dot(X, beta)
        err = pred - Y
        cost = cost_reg(Yhat1, train_data_Y, lambda)
        past_costs.append(cost)
        beta = beta - learning_rate * 1/m * np.dot(f(X,beta)-Y,X) + lambda/m * beta
        past_betas.append(beta)

where lambda is the regularisation hyperparameter.

But I always end up with an error in my beta:

can't multiply sequence by non-int of type 'float'

Can anyone help me with this? I've tried different equations and I end up with the same error.


Solution

  • Very likely you're not passing in NumPy arrays, but plain Python lists. Multiplying a list by a float raises exactly this error (`[1, 2] * 0.5` fails with "can't multiply sequence by non-int of type 'float'", whereas `np.array([1, 2]) * 0.5` works element-wise). Pass in NumPy arrays, or do beta = np.array(beta) at the top of your function.
  • There are two further problems in the code as posted: `lambda` is a reserved keyword in Python, so using it as a parameter name is a SyntaxError — rename it to something like `lam`. And in the update step the regularisation term `lambda/m * beta` is added outside the parentheses, so it is neither scaled by the learning rate nor subtracted; it should be part of the gradient that gets multiplied by `learning_rate`.
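
Putting those fixes together, here is one way the function could look. This is a sketch, not the asker's exact intent: it assumes the standard ridge cost J(beta) = 1/(2m) * ||X·beta − Y||² + lam/(2m) * ||beta||², inlines the undefined helpers `cost_reg` and `f` from the question as plain NumPy expressions, and renames `lambda` to `lam` and `iter` to `n_iter` to avoid shadowing Python builtins/keywords:

```python
import numpy as np

def gd_ridge(X, Y, beta, n_iter, learning_rate, lam):
    # Coerce inputs to float arrays so that a list passed in
    # can no longer trigger "can't multiply sequence by non-int"
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    beta = np.asarray(beta, dtype=float)

    m = X.shape[0]
    past_costs = []
    past_betas = [beta]

    for _ in range(n_iter):
        pred = X.dot(beta)
        err = pred - Y
        # Ridge cost: mean-squared-error term plus L2 penalty
        cost = (err @ err) / (2 * m) + lam / (2 * m) * (beta @ beta)
        past_costs.append(cost)
        # Gradient of the cost; the regularisation term sits inside
        # the parentheses, so it is scaled by the learning rate too
        grad = (X.T @ err + lam * beta) / m
        beta = beta - learning_rate * grad
        past_betas.append(beta)

    return beta, past_costs, past_betas
```

On a toy problem such as X = [[1, 1], [1, 2], [1, 3]], Y = [1, 2, 3], the recorded costs should decrease monotonically toward the regularised least-squares solution, even when the inputs are passed as plain lists.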