
What is the correct way to calculate the norm, 1-norm, and 2-norm of vectors in PyTorch?


I have a matrix:

t = torch.rand(2,3)
print(t)
>>>tensor([[0.5164, 0.3651, 0.0882],
        [0.4488, 0.9824, 0.4067]])

I'm following this introduction to norms and want to try it in PyTorch.

It seems like the:

  • norm of a vector is "the size or length of a vector is a nonnegative number that describes the extent of the vector in space, and is sometimes referred to as the vector’s magnitude or the norm"
  • 1-Norm is "the sum of the absolute vector values, where the absolute value of a scalar uses the notation |a1|. In effect, the norm is a calculation of the Manhattan distance from the origin of the vector space."
  • 2-Norm is "the distance of the vector coordinate from the origin of the vector space. The L2 norm is calculated as the square root of the sum of the squared vector values."
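
For example, taking a simple vector like [1, -2, 2], I read those definitions as the following (plain Python, just as a sanity check on my understanding):

v = [1, -2, 2]
l1 = sum(abs(a) for a in v)          # 1-norm: sum of absolute values -> 5
l2 = sum(a**2 for a in v) ** 0.5     # 2-norm: sqrt of sum of squares -> 3.0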

I currently only know of this:

print(torch.linalg.norm(t, dim=1))
>>>tensor([0.6385, 1.1541])

But I can't figure out which of the three (norm, 1-norm, or 2-norm) this call is calculating, or how to calculate the others.


Solution

  • To compute the 0-, 1-, and 2-norm you can either use torch.linalg.norm with the ord argument (0, 1, and 2 respectively) or call Tensor.norm directly on the tensor with the p argument. When ord/p is omitted, as in torch.linalg.norm(t, dim=1) above, the 2-norm is what gets computed. Here are the three variants for each norm, using x for your tensor t: with Tensor.norm, with torch.linalg.norm, and computed manually.

    • 0-norm

      >>> x.norm(dim=1, p=0)
      >>> torch.linalg.norm(x, dim=1, ord=0)
      >>> x.ne(0).sum(dim=1)  # count of non-zero entries
      
    • 1-norm

      >>> x.norm(dim=1, p=1)
      >>> torch.linalg.norm(x, dim=1, ord=1)
      >>> x.abs().sum(dim=1)
      
    • 2-norm

      >>> x.norm(dim=1, p=2)
      >>> torch.linalg.norm(x, dim=1, ord=2)
      >>> x.pow(2).sum(dim=1).sqrt()
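
Putting it together, here is a minimal self-contained sketch using the same calls on a tensor built like the one in the question (torch.rand is random, so the exact values will differ per run). It also shows that the call from the question, with no ord given, returns the 2-norm:

import torch

t = torch.rand(2, 3)

# 0-norm: number of non-zero entries in each row
print(torch.linalg.norm(t, dim=1, ord=0))
print(t.norm(dim=1, p=0))
print(t.ne(0).sum(dim=1))            # same values, but as integer counts

# 1-norm: sum of absolute values in each row
print(torch.linalg.norm(t, dim=1, ord=1))
print(t.norm(dim=1, p=1))
print(t.abs().sum(dim=1))

# 2-norm: Euclidean length of each row (the default when ord/p is omitted)
print(torch.linalg.norm(t, dim=1))   # same as ord=2
print(t.norm(dim=1, p=2))
print(t.pow(2).sum(dim=1).sqrt())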