According to the documentation, Normalize is supposed to compute (tensor - mean)/std, but it doesn't. Why?
Docs:
Normalize a tensor image with mean and standard deviation. Given mean: (mean[1], ..., mean[n]) and std: (std[1], ..., std[n]) for n channels, this transform will normalize each channel of the input torch.*Tensor, i.e., output[channel] = (input[channel] - mean[channel]) / std[channel]
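That formula is easy to check by hand. Here is a minimal plain-Python sketch of the documented per-channel rule (illustrative names only, not torchvision internals), using the mean and std from the snippet below:

```python
# Plain-Python sketch of the documented rule (not torchvision code):
# output[channel] = (input[channel] - mean[channel]) / std[channel]
def normalize(channels, mean, std):
    return [
        [(x - m) / s for x in channel]
        for channel, m, s in zip(channels, mean, std)
    ]

# One channel, flattened: mean 5.0, sample std 2.7386
chan = [[1, 2, 3, 4, 5, 6, 7, 8, 9]]
out = normalize(chan, [5.0], [2.7386])
print(out[0])  # centre value maps to (5 - 5) / 2.7386 == 0.0
```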
a = T.Tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]]])
m = a.mean()
std = a.std()
print((m, std))
print(transforms.Normalize(mean, std)(T.unsqueeze(a, 0)))
print(transforms.Normalize(mean, std)(T.unsqueeze(a, 0)).mean())
print(transforms.Normalize(mean, std)(T.unsqueeze(a, 0)).std())
a = (a - m)/std
m = a.mean()
std = a.std()
print((m, std))
Output:
(tensor(5.), tensor(2.7386))
tensor([[[[1.0150, 1.3802, 1.7453],
[2.1105, 2.4756, 2.8408],
[3.2059, 3.5711, 3.9362]]]])
tensor(2.4756)
tensor(1.0000)
(tensor(0.), tensor(1.0000))
The std is correct, but the mean is something random. What gives?
The mean value of your tensor is stored in the variable m, not mean, so transforms.Normalize(mean, std) is reading whatever stale value mean happened to hold. After replacing m with mean:
import torch as T
from torchvision import transforms

a = T.Tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]]])
mean = a.mean()
std = a.std()
print((mean, std))
print(transforms.Normalize(mean, std)(T.unsqueeze(a, 0)))
print(transforms.Normalize(mean, std)(T.unsqueeze(a, 0)).mean())
print(transforms.Normalize(mean, std)(T.unsqueeze(a, 0)).std())
a = (a - mean)/std
mean = a.mean()
std = a.std()
print((mean, std))
Output:
(tensor(5.), tensor(2.7386))
tensor([[[[-1.4606, -1.0954, -0.7303],
[-0.3651, 0.0000, 0.3651],
[ 0.7303, 1.0954, 1.4606]]]])
tensor(0.)
tensor(1.0000)
(tensor(0.), tensor(1.0000))
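Incidentally, the "random" mean in the original output even pins down what the stray mean variable held: Normalize is affine, so output.mean() == (input.mean() - mean) / std. Solving with the printed numbers (a back-of-the-envelope check, not anything torchvision reports):

```python
# Invert the affine map to recover the stale value the stray `mean` held:
# output.mean() = (input.mean() - mean) / std  =>  mean = input.mean() - output.mean() * std
in_mean, out_mean, std = 5.0, 2.4756, 2.7386
stale_mean = in_mean - out_mean * std
print(stale_mean)  # roughly -1.7797: whatever `mean` was bound to before the snippet ran
```

This also explains why the std still came out as 1.0000: an incorrect mean only shifts the output, while dividing by the correct std rescales it regardless.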