Tags: c#, fft, spectrogram, alglib

Math.Net and alglib returning different FFT outputs by default


I am developing an application in C# with spectrogram drawing functionality.

For my first try I used MathNet.Numerics, and now I am continuing development with alglib. When I switched from one to the other, I noticed that their outputs differ: MathNet applies some kind of scaling by default, which alglib seems to omit. I am not really into signal processing, and also a newbie to programming, so I could not figure out exactly where the difference comes from.

MathNet's default output (raw magnitude) values range from ~0.1 to ~274 in my case, while with alglib I get values ranging from ~0.2 to ~6220.

I found that MathNet's Fourier.Forward uses a default scaling option. The docs say FourierOptions.Default means "Universal; Symmetric scaling and common exponent (used in Maple)": https://numerics.mathdotnet.com/api/MathNet.Numerics.IntegralTransforms/FourierOptions.htm If I use FourierOptions.NoScaling instead, the output matches what alglib produces.

In MathNet, I used the Fourier.Forward function: https://numerics.mathdotnet.com/api/MathNet.Numerics.IntegralTransforms/Fourier.htm#Forward In the case of alglib, I used the fftr1d function: https://www.alglib.net/translator/man/manual.csharp.html#sub_fftr1d
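For reference, here is a minimal sketch of how I call the two libraries side by side (assuming the MathNet.Numerics and ALGLIB packages are referenced; the test signal and FFT length are just placeholders I chose for illustration):

```csharp
using System;
using System.Numerics;
using MathNet.Numerics.IntegralTransforms;

class FftComparison
{
    static void Main()
    {
        // Hypothetical test signal: one sine cycle pattern, N = 512 samples.
        double[] samples = new double[512];
        for (int i = 0; i < samples.Length; i++)
            samples[i] = Math.Sin(2 * Math.PI * 10 * i / samples.Length);

        // MathNet: in-place transform; FourierOptions.Default applies
        // symmetric 1/sqrt(N) scaling on the forward transform.
        Complex[] mathnetBuffer = Array.ConvertAll(samples, s => new Complex(s, 0));
        Fourier.Forward(mathnetBuffer, FourierOptions.Default);
        double mathnetMag = mathnetBuffer[10].Magnitude;

        // alglib: fftr1d writes an unscaled spectrum to an output array.
        alglib.complex[] alglibSpectrum;
        alglib.fftr1d(samples, out alglibSpectrum);
        double alglibMag = Math.Sqrt(alglibSpectrum[10].x * alglibSpectrum[10].x
                                   + alglibSpectrum[10].y * alglibSpectrum[10].y);

        // If the only difference is the scaling, the ratio of the two
        // magnitudes should be sqrt(N), i.e. sqrt(512) ≈ 22.6 here.
        Console.WriteLine(alglibMag / mathnetMag);
    }
}
```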

  • What is that difference in their calculation?
  • What is the function that I could maybe use to convert alglib output magnitude to that of MathNet, or vice versa?
  • In what cases should I use these different "scalings"? What are they for exactly?

Please share your knowledge. Thanks in advance!


Solution

• I worked it out by myself, after reading a bunch of posts mentioning different methods of FFT output scaling. I still find this aspect of FFT processing heavily underdocumented everywhere. I have not yet found any reliable source that explains what these scalings are for, or which fields of science or which processing methods use them.

    So far I have found four different conventions for scaling the raw FFT output (the magnitudes of the complex bins). This means multiplying them by:

    1. 1/numSamples
    2. 2/numSamples
    3. 1/sqrt(numSamples)
    4. 1 (no scaling)

    By default, the MathNet.IntegralTransforms.Fourier.Forward function (and, according to various posts on the net, possibly also Matlab and Maple) uses the third one, 1/sqrt(numSamples). In my opinion this gives the most distinguishable graphical output when using logarithmic colouring.

    I would still be grateful if you could share anything you know about this, or reference a good paper explaining these conventions.
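    The four conventions, and the conversion between the two libraries' defaults, can be sketched as plain arithmetic (a minimal illustration; the FFT length and sample magnitude are hypothetical values I picked, and the interpretation comments reflect common usage rather than any official documentation):

    ```csharp
    using System;

    static class FftScalingDemo
    {
        // Convert an unscaled (alglib-style) magnitude to MathNet's default
        // symmetric convention, and back. n is the FFT length.
        public static double UnscaledToSymmetric(double mag, int n) => mag / Math.Sqrt(n);
        public static double SymmetricToUnscaled(double mag, int n) => mag * Math.Sqrt(n);

        static void Main()
        {
            int n = 512;          // hypothetical FFT length
            double raw = 6220.0;  // an unscaled magnitude, as alglib's fftr1d returns

            // The four conventions applied to the same raw magnitude:
            double byN       = raw / n;            // 1/N: recovers average (DC-style) amplitude
            double twoByN    = raw * 2.0 / n;      // 2/N: sinusoid amplitude in a one-sided spectrum
            double symmetric = raw / Math.Sqrt(n); // 1/sqrt(N): unitary transform, MathNet's default
            double unscaled  = raw;                // raw FFT output, alglib's default

            Console.WriteLine($"{byN:F2}  {twoByN:F2}  {symmetric:F2}  {unscaled:F2}");

            // Round-trip between the two libraries' conventions is lossless:
            Console.WriteLine(SymmetricToUnscaled(UnscaledToSymmetric(raw, n), n)); // 6220
        }
    }
    ```

    The 1/sqrt(N) convention has the nice property that the forward and inverse transforms use the same factor, so energy is preserved in both directions (Parseval's theorem holds without an extra correction), which may be why "universal/symmetric" tools default to it.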