When using np.fft.fft2 on images, the result has the same size as the input. For a real-valued image, the full FT satisfies the Hermitian symmetry ft[i,j] == ft[-i,-j].conj(), as explained in this answer. For this reason, real-to-complex transforms in frameworks such as PyTorch or scikit-cuda return a one-sided FT of shape (height, width // 2 + 1), dropping the redundant half.
Now, given a redundancy-free/one-sided FT, how can I use numpy index magic to map it to the full FT output by numpy?
Background: I need this for translating some numpy code.
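To make the setup concrete, here is a minimal sketch of the shapes and the symmetry involved, using numpy's own np.fft.rfft2 as a stand-in for the framework output:

import numpy as np

image = np.random.rand(10, 10)            # a real-valued image
full = np.fft.fft2(image)                 # full FT, shape (10, 10)
onesided = np.fft.rfft2(image)            # one-sided FT, shape (10, 6)

# Hermitian symmetry of the full FT of a real image:
print(np.allclose(full[3, 4], full[-3, -4].conj()))  # True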
I finally succeeded in using np.meshgrid properly to fill in the relevant data. We build index grids over the entire row range and over the missing part of the column range, and fill exactly those entries with the conjugated, mirrored values.
import numpy as np

np.random.seed(0)
N = 10
image = np.random.rand(N, N)
h, w = image.shape

# One-sided (real-to-complex) FT and the full reference FT.
ft = np.fft.rfft2(image)                   # shape (h, w // 2 + 1)
ft_reference = np.fft.fft2(image)          # shape (h, w)

# Copy the one-sided data into the left part of a full-size array.
ft_full = np.zeros_like(image, dtype=np.complex128)
ft_full[:ft.shape[0], :ft.shape[1]] = ft

# Index grids covering all rows and the missing (right-hand) columns.
X, Y = np.meshgrid(range(h), range(w // 2 + 1, w), indexing='ij')

# Hermitian symmetry: ft_full[i, j] == ft_full[-i, -j].conj().
# The negative indices wrap around and point into the already filled region.
ft_full[X, Y] = ft_full[-X, -Y].conj()

print(np.allclose(ft_full, ft_reference))  # True
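As an additional sanity check (a sketch reusing the arrays above), the reconstructed full spectrum should invert back to the original real image:

# The reconstructed spectrum should round-trip through the inverse FFT.
recovered = np.fft.ifft2(ft_full)
print(np.allclose(recovered.real, image))  # True
print(np.allclose(recovered.imag, 0))      # imaginary part is numerical noise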