I would like to replace the torch.norm function with other PyTorch operations. I was able to replace torch.norm in the case where x is a 1-D tensor (not a matrix), as shown in the following code.
import torch
x = torch.randn(9)
out1 = torch.norm(x)
# L2 norm from its definition: square root of the sum of squared absolute values
out2 = sum(abs(x)**2)**(1./2)
out1 == out2
>> tensor(True)
But I don't know how to replace it when x has more than one dimension. In particular, I want to replace it in my case, which uses dim=1 and keepdim=True.
x = torch.randn([3, 136, 64, 64])
out1 = torch.norm(x, dim=1, keepdim=True)
out2 = ???
out1 == out2
Background:
I'm converting a PyTorch model to CoreML, but the _VF.frobenius_norm operator called inside the torch.norm function is not implemented in CoreMLTools. (The implementation inside torch.norm can be found here.) A few people have run into this problem, but it is still unsupported in CoreMLTools (you can check this issue). So I'd like to replace torch.norm with an expression that avoids that operator.
I have also tried torch.linalg.norm() and numpy.linalg.norm, but neither is supported.
I have created a simple Colaboratory notebook that reproduces this issue. Please test it using the following Colab: https://colab.research.google.com/drive/11o6rTxHzEgZ_Rc7nFZHd3TvPugybB88h?usp=sharing
You could try the following:
import torch
x = torch.randn([3, 136, 64, 64])
out1 = torch.norm(x, dim=1, keepdim=True)
# square element-wise, sum over dim=1, then take the square root
out2 = torch.square(x).sum(dim=1, keepdim=True).sqrt()
Note that out1 == out2 won't be True everywhere due to small floating-point errors; you can check that the errors are on the order of 1e-7 for float32.
Here, the norm is computed directly from its mathematical definition. See this reference from Wolfram MathWorld for more details.
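For completeness, written out for a 4-D input of shape (N, C, H, W) with the reduction over dim=1 (index names chosen here only for illustration), square → sum → sqrt computes

\lVert x_{n,:,h,w} \rVert_2 = \sqrt{\sum_{c} x_{n,c,h,w}^{2}}

which is exactly the 2-norm that torch.norm(x, dim=1, keepdim=True) returns for each (n, h, w) position.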