Tags: deep-learning, pytorch, gradient, gradient-descent, fast-ai

What does a.sub_(lr*a.grad) actually do?


I am working through the fast.ai course, the SGD lesson, and I cannot understand this step.

This subtracts (learning rate * gradient) from the coefficients...

But why is it necessary to subtract?

Here is the code:

def update(): 
  y_hat = x@a                    # predictions from the current coefficients a
  loss = mse(y_hat, y)           # mean squared error between predictions and targets
  if t % 10 == 0: print (loss)   # t is the iteration counter of the surrounding training loop
  loss.backward()                # compute d(loss)/d(a) and store it in a.grad
  with torch.no_grad():          # do not track the parameter update in the autograd graph
    a.sub_(lr * a.grad)          # in-place update: a = a - lr * a.grad

Solution

  • (Figure: a convex loss J plotted against a single parameter W, with tangent lines showing the gradient on either side of the minimum.)

    Look at the image. It shows the loss function J as a function of the parameter W. This is a simplified picture with W as the only parameter. For a convex loss function, the curve looks as shown.

    Note that the learning rate is positive. On the left side of the minimum, the gradient (the slope of the line tangent to the curve at that point) is negative, so the product of the learning rate and the gradient is negative. Subtracting a negative quantity from W therefore increases W, which is exactly what we want here, because it moves W towards the minimum and the loss decreases.

    On the right side of the minimum, the gradient is positive, so the product of the learning rate and the gradient is positive. Subtracting it from W reduces W, which again moves W towards the minimum and decreases the loss.

    The same reasoning extends to any number of parameters (the graph becomes higher-dimensional and hard to visualize, which is why we started with a single parameter W) and to other loss functions, even non-convex ones, although then the updates are only guaranteed to head towards a nearby local minimum rather than the global minimum. A short numeric sketch of this update follows the note below.

    Note: A similar explanation can be found in Andrew Ng's deeplearning.ai courses, but I couldn't find a direct link, so I wrote this answer.
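
    For a concrete picture of this sign behaviour, here is a minimal sketch (not the course code; the toy loss J(W) = (W - 5)**2, the learning rate, and the starting points are arbitrary illustrative choices). It runs the same in-place update as a.sub_(lr * a.grad) from both sides of the minimum:

    import torch

    # Toy convex loss J(W) = (W - 5)**2, minimised at W = 5 (illustrative values).
    lr = 0.1

    for start in (-3.0, 12.0):              # one start left of the minimum, one right
        W = torch.tensor(start, requires_grad=True)
        for step in range(50):
            loss = (W - 5) ** 2
            loss.backward()                 # W.grad now holds dJ/dW = 2 * (W - 5)
            with torch.no_grad():
                W.sub_(lr * W.grad)         # same update as a.sub_(lr * a.grad)
                W.grad.zero_()              # reset the gradient before the next step
        print(f"started at {start:+.1f}, ended near W = {W.item():.4f}")

    Starting at -3 the gradient is negative, so subtracting lr * W.grad pushes W up towards 5; starting at 12 the gradient is positive, so the same subtraction pushes W down towards 5. One extra detail: in a full training loop you also need to reset the gradient after each update (for example with a.grad.zero_()), because PyTorch accumulates gradients across backward() calls.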