
Real input to FFT then iFFT back to real data in 1-D Array


I'm trying to implement an FFT and iFFT using Intel MKL. The FFT on real input works fine and the output array I get back is correct. But when I feed that same output array into the iFFT, every value that comes back is multiplied by the size of the array, and this holds true for all sizes. I'm unable to understand why this is happening. The code is as follows:

#include <iostream>
#include <cstdlib>
#include "mkl.h"

int main() {
size_t size;
std::cout << "enter the size of input and output array: ";
std::cin >> size;
float *inp = NULL;
MKL_Complex8 *op = NULL;
MKL_LONG status;

// Allocate memory for the arrays.
inp = (float*) mkl_malloc(size*sizeof(float), 64);
op = (MKL_Complex8*) mkl_malloc((size/2+1)*sizeof(MKL_Complex8), 64);
//Initialize values of input
for (size_t i = 0; i < size; i++){
    if(i < 8)
        inp[i] = -std::rand()%10;
    else if(i ==8)
        inp[i] = 0;
    else
        inp[i] = std::rand()%10;
}

std::cout << "The input array is: " << std::endl;
for (size_t i = 0; i < size; i++){
    std::cout << inp[i] << ", ";
}
std::cout << std::endl;

//Specify descriptor values.

DFTI_DESCRIPTOR_HANDLE hand = NULL;

status = DftiCreateDescriptor(&hand, DFTI_SINGLE, DFTI_REAL, 1, (MKL_LONG) size);
status = DftiSetValue(hand, DFTI_PLACEMENT, DFTI_NOT_INPLACE);
status = DftiSetValue(hand, DFTI_CONJUGATE_EVEN_STORAGE, DFTI_COMPLEX_COMPLEX);
status = DftiSetValue(hand, DFTI_PACKED_FORMAT, DFTI_CCE_FORMAT);
status = DftiCommitDescriptor(hand);
status = DftiComputeForward(hand, inp, op);


status = DftiComputeBackward(hand, op, inp);

std::cout << "\nThe output of iFFT is: " << std::endl;
for (size_t i = 0; i < size; i++){
    std::cout << inp[i] << ", ";
}
std::cout << std::endl;

DftiFreeDescriptor(&hand);
// Memory obtained from mkl_malloc must be released with mkl_free, not free.
mkl_free(inp);
mkl_free(op);

return 0;
}

A crude workaround would be to divide each array element by the size after the iFFT, but that doesn't explain why the scaling happens in the first place. I'd much rather understand the actual cause and fix that. This is a trial implementation of these transforms before the actual code, and I would really like to understand what is going on. Thanks.


Solution

  • There are different conventions for defining the FFT/IFFT pair, each with a different normalisation factor: 1/N on the FFT side, 1/N on the IFFT side, 1/sqrt(N) on both sides, or no normalisation at all.

    With no normalisation factor anywhere, an FFT followed by an IFFT effectively multiplies every sample by the FFT size N, which is exactly what you are seeing.

    As Cris Luengo mentioned in a comment, MKL likely chose to provide unnormalised building blocks so that users who only need one direction are not penalised by normalising twice.

    In the context of Intel MKL, there are two descriptor parameters, DFTI_FORWARD_SCALE and DFTI_BACKWARD_SCALE, which can be set with DftiSetValue(). For a one-dimensional transform, you can leave the forward scale at 1 and set the backward scale to 1/n, where n is the size of the 1-D transform, before committing the descriptor (for more information refer to the Intel MKL documentation).
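    In your code that would mean adding something like DftiSetValue(hand, DFTI_BACKWARD_SCALE, 1.0f / size) before DftiCommitDescriptor(hand). To see why this works independently of MKL, here is a minimal self-contained sketch (a naive O(N^2) DFT, not MKL) showing that an unnormalised forward/backward round trip multiplies the input by N, while applying a 1/N scale on the backward transform recovers the original samples:

    ```cpp
    #include <cmath>
    #include <complex>
    #include <cstdio>
    #include <vector>

    // Naive DFT/IDFT pair illustrating the normalisation convention.
    // 'scale' plays the role of MKL's DFTI_FORWARD_SCALE / DFTI_BACKWARD_SCALE.
    std::vector<std::complex<double>> dft(const std::vector<std::complex<double>>& x,
                                          bool inverse, double scale)
    {
        const std::size_t n = x.size();
        const double pi = std::acos(-1.0);
        const double sign = inverse ? 1.0 : -1.0; // +i exponent for the inverse
        std::vector<std::complex<double>> y(n);
        for (std::size_t k = 0; k < n; ++k) {
            std::complex<double> sum(0.0, 0.0);
            for (std::size_t j = 0; j < n; ++j) {
                const double angle = sign * 2.0 * pi * static_cast<double>(k * j) / n;
                sum += x[j] * std::complex<double>(std::cos(angle), std::sin(angle));
            }
            y[k] = sum * scale; // apply the chosen normalisation factor
        }
        return y;
    }

    int main()
    {
        std::vector<std::complex<double>> inp = {3, -1, 4, -1, 5, -9, 2, 6};
        const double n = static_cast<double>(inp.size());

        // Scale 1 in both directions: round trip returns n * input.
        auto unscaled = dft(dft(inp, false, 1.0), true, 1.0);
        // Scale 1 forward, 1/n backward: round trip returns the input.
        auto scaled = dft(dft(inp, false, 1.0), true, 1.0 / n);

        std::printf("unscaled[0] = %.1f\n", unscaled[0].real()); // n * inp[0]
        std::printf("scaled[0] = %.1f\n", scaled[0].real());     // inp[0]
        return 0;
    }
    ```

    With N = 8 the unscaled round trip returns 24 for the first sample (8 times the original 3), while the 1/N backward scale gives back 3, which is exactly the behaviour you get from MKL once DFTI_BACKWARD_SCALE is set.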