Tags: python, pytorch, recurrent-neural-network, backpropagation

Can I use pytorch .backward function without having created the input forward tensors first?


I have been trying to understand RNNs better and am creating an RNN from scratch using numpy. I am at the point where I have calculated a loss, but it was suggested to me that rather than doing the gradient descent and weight matrix updates myself, I use PyTorch's .backward() function. I started to read some of the documentation and posts here about how it works, and it seems like it will calculate the gradients for any torch tensor in the computation that was created with requires_grad=True.

So it seems that unless I create torch tensors, I am not able to use .backward(). When I try to call it on my loss scalar, I get a 'numpy.float64' object has no attribute 'backward' error. I just wanted to confirm. Thank you!
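
For concreteness, a minimal sketch of what I mean (the weight matrix and loss here are placeholders, not my actual RNN):

```python
import numpy as np
import torch

# A NumPy scalar has no autograd machinery:
loss_np = np.float64(0.5)
# loss_np.backward()  # AttributeError: 'numpy.float64' object has no attribute 'backward'

# Rebuilding the computation with torch tensors instead.
# W is a placeholder weight matrix; requires_grad=True tells autograd
# to track operations on it and accumulate gradients into W.grad.
W = torch.randn(3, 3, requires_grad=True)
x = torch.randn(3)                  # input; no gradient needed
target = torch.randn(3)

y = W @ x                           # forward pass built from torch ops
loss = ((y - target) ** 2).mean()   # loss is now a torch scalar

loss.backward()                     # populates W.grad
print(W.grad.shape)                 # torch.Size([3, 3])
```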


Solution

  • Yes, this will only work on PyTorch tensors. If the tensors are on CPU, they are essentially numpy arrays wrapped in the PyTorch tensor API: calling .numpy() on such a tensor returns the exact underlying data (not a copy), so modifications are visible on both sides.
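
    A small sketch illustrating that shared-memory behavior (variable names are arbitrary):

    ```python
    import numpy as np
    import torch

    a = np.zeros(3)
    t = torch.from_numpy(a)   # no copy: the tensor shares a's memory on CPU
    t[0] = 1.0                # modifying the tensor...
    print(a)                  # [1. 0. 0.] ...is visible in the numpy array

    b = t.numpy()             # likewise returns a view, not a copy
    b[1] = 2.0
    print(t)                  # tensor([1., 2., 0.], dtype=torch.float64)
    ```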