
Difference between Parameter vs. Tensor in PyTorch


What is the difference between a Parameter and a Tensor in PyTorch?

The existing answers cover older versions of PyTorch, where Variable was still used.


Solution

  • The whole idea of the Parameter class fits in one sentence: since it is sub-classed from Tensor, a Parameter is a Tensor.
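    A quick sketch to check this yourself (names are illustrative):

    ```python
    import torch
    import torch.nn as nn

    p = nn.Parameter(torch.randn(3))
    print(isinstance(p, torch.Tensor))  # True: Parameter subclasses Tensor
    print(p.requires_grad)              # True: Parameters require grad by default

    t = torch.randn(3)
    print(t.requires_grad)              # False: plain tensors do not
    ```

    Note the other practical difference visible here: a Parameter has requires_grad=True by default, while a freshly created tensor does not.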

    But there is one extra behaviour: Parameters assigned as attributes of a module are automatically registered in that module's parameter list. If m is your module, m.parameters() will yield them.

    Here is an example:

    import torch
    import torch.nn as nn

    class M(nn.Module):
        def __init__(self):
            super().__init__()
            self.weights = nn.Parameter(torch.randn(2, 2))
            self.bias = nn.Parameter(torch.zeros(2))

        def forward(self, x):
            return x @ self.weights + self.bias

    m = M()
    list(m.parameters())
    
    ---
    
    [Parameter containing:
     tensor([[ 0.5527,  0.7096],
             [-0.2345, -1.2346]], requires_grad=True), Parameter containing:
     tensor([0., 0.], requires_grad=True)]
    

    You can see that parameters() lists exactly what we defined. If we instead assign a plain tensor inside the class, e.g. self.t = torch.randn(2, 2), it will not show up in the parameters list. That is literally it. Nothing fancy.
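
    To see the contrast directly, here is a small sketch (the module name N and attribute names are made up for illustration) holding one Parameter and one plain tensor; only the Parameter is registered:

    ```python
    import torch
    import torch.nn as nn

    class N(nn.Module):
        def __init__(self):
            super().__init__()
            self.w = nn.Parameter(torch.randn(2, 2))  # registered automatically
            self.t = torch.randn(2, 2)                # plain tensor: NOT registered

    n = N()
    names = [name for name, _ in n.named_parameters()]
    print(names)  # ['w'] — the plain tensor 't' is absent
    ```

    This is why an optimizer built with optimizer = torch.optim.SGD(n.parameters(), lr=0.1) would update w but never t.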