I am trying to perform deconvolution on an image I, which is n x m. The kernel used to convolve it is K, which is also n x m. Now I want to recover the original image, O, by performing a deconvolution. I know that I can retrieve O by taking the Fourier transform of both I and K and dividing I / K element-wise, since in the Fourier domain the convolution becomes a product. (I got this information from here.)
I saw another post on how to use Eigen FFT to perform a forward transform, here.
My code for the forward Fourier transform is:
I = Input Image (time domain)
O = Output Image (frequency domain)
tempFreq = temporary matrix for calculations (frequency domain)
timevec1 = float vector
freqvec1, freqvec2, freqvec3 = complex vectors
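For reference, the declarations behind these names look roughly like this (I, O, tempFreq and tempTime are my own image/matrix classes with operator() access and uRowsCount/uColumnsCount members):

#include <unsupported/Eigen/FFT>
#include <complex>
#include <cstdint>
#include <vector>

using Int32 = std::int32_t;
Eigen::FFT<float> fft;                                         // Eigen's 1D FFT engine
std::vector<float> timevec1, timevec2;                         // real line buffers (time domain)
std::vector<std::complex<float>> freqvec1, freqvec2, freqvec3; // complex line buffers (frequency domain)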
// First pass: real-to-complex 1D FFT along each row of I.
for (Int32 i = 0; i < I->uRowsCount; ++i)
{
    for (Int32 j = 0; j < I->uColumnsCount; ++j)
    {
        timevec1.push_back((*I)(i, j));
    }
    fft.fwd(freqvec1, timevec1); // real in, complex out
    for (Int32 j = 0; j < I->uColumnsCount; ++j)
    {
        (tempFreq)(i, j) = freqvec1[j];
    }
    freqvec1.clear();
    timevec1.clear();
}
freqvec1.clear();
timevec1.clear();
// Second pass: complex-to-complex 1D FFT along each column of tempFreq.
for (Int32 j = 0; j < I->uColumnsCount; ++j)
{
    for (Int32 i = 0; i < I->uRowsCount; ++i)
    {
        freqvec2.push_back((tempFreq)(i, j));
    }
    fft.fwd(freqvec1, freqvec2); // complex in, complex out
    for (Int32 i = 0; i < I->uRowsCount; ++i)
    {
        (O)(i, j) = freqvec1[i];
    }
    freqvec2.clear();
    freqvec1.clear();
}
My code for the inverse Fourier transform is:
I = Input Image (frequency domain)
O = Output Image (time domain)
tempTime = temporary matrix for calculations (time domain)
timevec1, timevec2 = float vectors
freqvec1, freqvec2 = complex vectors
// First pass: 1D inverse FFT along each column of the frequency-domain image.
for (Int32 j = 0; j < O->uColumnsCount; ++j)
{
    for (Int32 i = 0; i < O->uRowsCount; ++i)
    {
        freqvec1.push_back((I)(i, j));
    }
    fft.inv(timevec1, freqvec1); // complex in, real out
    for (Int32 i = 0; i < O->uRowsCount; ++i)
    {
        (*tempTime)(i, j) = timevec1[i];
    }
    freqvec1.clear();
    timevec1.clear();
}
freqvec1.clear();
timevec1.clear();
// Second pass: 1D inverse FFT along each row of tempTime.
for (Int32 i = 0; i < O->uRowsCount; ++i)
{
    for (Int32 j = 0; j < O->uColumnsCount; ++j)
    {
        freqvec2.push_back((*tempTime)(i, j));
    }
    fft.inv(timevec2, freqvec2); // complex in, real out
    for (Int32 j = 0; j < O->uColumnsCount; ++j)
    {
        (*O)(i, j) = timevec2[j];
    }
    freqvec2.clear();
    timevec2.clear();
}
For deconvolution, I divide the input image in the frequency domain by the kernel in the frequency domain:
freqDomainOutputImage = freqDomainInputImage.cwiseQuotient(freqDomainKernel);
And to get the original image, I perform the inverse Fourier transform on the freqDomainOutputImage.
I believe the FFT is mirroring the top-left corner to the other sides, but I don't know why; I am not using halfSpectrum. Second, why is the image shifted? If I move the output image to the center by replacing the assignment in the last loop with this:
(*O)((i + O->uRowsCount/2)%O->uRowsCount, (j + O->uColumnsCount/2)%O->uColumnsCount) = timevec2[j];
then the image ends up centered, but you can see that it is still mirrored from the top-left quadrant.
And last, why does the result seem to contain noise, even though I applied the blur myself without adding any noise?
tempTime needs to be complex. You seem to be throwing away the imaginary component, and that is why you get the mirroring effect.
Starting from a complex frequency-domain image with complex-conjugate symmetry, you apply a 1D DFT (e.g. along the rows). The result of this transformation is a complex image with complex-conjugate symmetry only along the columns. The next 1D DFT takes these columns and creates real-valued signals out of them, yielding a real-valued image.
If after the first step you discard the imaginary component, then you remove the odd component of the spatial-domain image along that dimension, leaving an even (symmetric) image. This is why you see this symmetry in your result.
Think of it this way: the inverse DFT reverses the process of the DFT. If the forward DFT goes real -> complex -> complex, then the inverse must go complex -> complex -> real. The intermediate image has one spatial and one frequency dimension, and the frequency components always need to be complex.
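Here is a minimal sketch of what that could look like with a complex intermediate, assuming the same loop structure and matrix types as your code, but with tempTime holding std::complex<float> (the buffer names are illustrative):

std::vector<std::complex<float>> freqcol, freqrow, timecol; // complex buffers
std::vector<float> timerow;                                 // real buffer, only for the last pass

// First pass: complex-to-complex inverse DFT along each column.
for (Int32 j = 0; j < O->uColumnsCount; ++j)
{
    freqcol.clear();
    for (Int32 i = 0; i < O->uRowsCount; ++i)
        freqcol.push_back((I)(i, j));
    fft.inv(timecol, freqcol);          // complex in, complex out: keeps the imaginary part
    for (Int32 i = 0; i < O->uRowsCount; ++i)
        (*tempTime)(i, j) = timecol[i]; // tempTime must store std::complex<float>
}

// Second pass: complex-to-real inverse DFT along each row.
for (Int32 i = 0; i < O->uRowsCount; ++i)
{
    freqrow.clear();
    for (Int32 j = 0; j < O->uColumnsCount; ++j)
        freqrow.push_back((*tempTime)(i, j));
    fft.inv(timerow, freqrow);          // only this very last 1D transform drops to real values
    for (Int32 j = 0; j < O->uColumnsCount; ++j)
        (*O)(i, j) = timerow[j];
}

Only the final 1D transform goes back to real values; everything before it stays complex.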
You already figured out that you need to swap quadrants. This is because the DFT uses the top-left pixel as the origin, both in the spatial domain and in the frequency domain. Thus, your spatial-domain convolution kernel must have its origin in the top-left corner for all of this to work correctly.
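For example, a small centered kernel can be embedded into a full-size array so that its center lands on pixel (0,0), wrapping the remaining taps around the borders. A rough sketch, assuming the kernel and the padded result are plain row-major float buffers (the names are illustrative):

#include <vector>

// Zero-pad a kRows x kCols kernel to rows x cols and circularly shift it so that
// its center coefficient ends up at the top-left pixel (the DFT origin).
std::vector<float> padKernel(const std::vector<float>& k, int kRows, int kCols,
                             int rows, int cols)
{
    std::vector<float> padded(rows * cols, 0.0f);
    const int cr = kRows / 2, cc = kCols / 2; // kernel center
    for (int i = 0; i < kRows; ++i)
        for (int j = 0; j < kCols; ++j)
        {
            const int r = ((i - cr) % rows + rows) % rows; // wrap negative offsets
            const int c = ((j - cc) % cols + cols) % cols;
            padded[r * cols + c] = k[i * kCols + j];
        }
    return padded;
}

The same wrap-around indexing also works when the kernel is already n x m and centered in the image: it simply moves the center pixel to the origin.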
Finally, you wonder about the noise. This is caused by numerical instability: some frequency components of your image simply have very small values, and this is especially true for a low-pass filtered image. At those frequencies you divide very small values by other very small values, which yields nonsense.
Deconvolution always needs regularization in practice. The Wiener filter is the simplest way to add regularization to the deconvolution. See this other answer for details.
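As an illustration, a Wiener-style division could look like this (a sketch only, assuming the two spectra live in Eigen matrices of std::complex<float>; with your own matrix type, treat it as pseudocode for the per-pixel formula conj(H)*I / (|H|^2 + k)):

#include <Eigen/Dense>
#include <complex>

using CMat = Eigen::Matrix<std::complex<float>, Eigen::Dynamic, Eigen::Dynamic>;

// Regularized (Wiener-style) deconvolution in the frequency domain:
//   O(f) = conj(H(f)) * I(f) / (|H(f)|^2 + k)
// instead of the plain division I(f) / H(f).
CMat wienerDivide(const CMat& freqImage, const CMat& freqKernel, float k = 1e-3f)
{
    CMat denom = freqKernel.cwiseProduct(freqKernel.conjugate()); // |H|^2 (imaginary part ~ 0)
    denom.array() += std::complex<float>(k, 0.0f);                // regularization constant
    return freqKernel.conjugate().cwiseProduct(freqImage).cwiseQuotient(denom);
}

You would then compute something like freqDomainOutputImage = wienerDivide(freqDomainInputImage, freqDomainKernel) and apply the inverse transform as before. The constant k plays the role of the noise-to-signal power ratio: larger values suppress the amplified noise at the cost of some sharpness.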