
Numpy exp() function gives me wrong value for no reason


import matplotlib.pyplot as plt

#from numpy.fft import fft as numpyfft

#from scipy.fftpack import fft as scipyfft

import numpy as np

    
    print(np.exp([-2j*np.pi]))

    print(np.exp([-2.j*np.pi]))

    print(np.exp(-2j*np.pi))

    print(np.exp(-2.j*np.pi))

    [1.+2.4492936e-16j]
    [1.+2.4492936e-16j]
    (1+2.4492935982947064e-16j)
    (1+2.4492935982947064e-16j)

So I made a presentation about the topic of FFT the other day. For that I made a Jupyter notebook with Python 3. I took the code examples directly from here:

https://pythonnumericalmethods.berkeley.edu/notebooks/chapter24.04-FFT-in-Python.html

which shows a short implementation of the described FFT algorithm. What is important is that they use the numpy.exp function, as shown in my code.

I wanted to write some explanation of the functions used in the algorithm, so I added some prints comparing the direct numpy FFT, the scipy FFT, and the FFT implemented from the link. There were huge rounding-like errors. So I looked deeper into it and found out that in my Jupyter notebook, exp(-2j*pi) gives the wrong value. It should be exp(-2j*pi) = 1.

So it's not about the FFT algorithm but about the wrong values I get; see the code and prints above. I searched for a bit, but none of the threads seemed to help. Some suggested casting to float when dividing, but in -2j*pi there is no division, so there should be no need to cast.
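For what it's worth, the residue can be traced directly: np.pi is only the nearest 64-bit float to π, so the imaginary part of np.exp(-2j*np.pi) is -sin(2*np.pi) evaluated at that slightly-off float, a value on the order of machine epsilon. A minimal check (my own sketch, not from the notebook):

```python
import numpy as np

# np.pi is the closest double to pi, not pi itself, so
# exp(-2j*np.pi) = cos(2*np.pi) - 1j*sin(2*np.pi) picks up a tiny sin residue.
z = np.exp(-2j * np.pi)
print(z.imag)                # ~2.45e-16, not exactly 0
print(-np.sin(2 * np.pi))    # the same residue
print(np.finfo(float).eps)   # machine epsilon for doubles, ~2.22e-16
```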

So now I'm totally clueless. Sorry if it's a basic mistake of mine, but I've been stuck for hours and want to be able to explain it properly with correct results.

(The FFT algorithms of numpy and scipy produce correct results, but the FFT from the link obviously does not.)
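As a sanity check (a minimal sketch with a hypothetical naive_dft, not the notebook's exact code), a direct O(N²) DFT built from the same np.exp call agrees with np.fft.fft up to floating-point noise, so the tiny exp residue by itself does not break the algorithm:

```python
import numpy as np

def naive_dft(x):
    # Direct O(N^2) DFT using the same twiddle factors exp(-2j*pi*k*n/N).
    N = len(x)
    n = np.arange(N)
    k = n.reshape((N, 1))
    M = np.exp(-2j * np.pi * k * n / N)
    return M @ x

x = np.random.default_rng(42).random(16)
print(np.allclose(naive_dft(x), np.fft.fft(x)))  # True
```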


Solution

  • This is expected behavior, not a bug. NumPy, like all software doing binary floating-point arithmetic, carries a small rounding error: np.pi is only the closest 64-bit float to π, so the imaginary part comes out as ~2.45e-16 instead of exactly 0, which is on the order of machine epsilon (~2.22e-16 for 64-bit doubles). You can circumvent this by rounding to 14 decimal places:

    print(np.round(np.exp(-2j*np.pi),14))
    # (1+0j)
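Alternatively, instead of rounding, the residue can be treated explicitly as floating-point noise with tolerance-based helpers; np.isclose and np.real_if_close exist for exactly this (a small sketch):

```python
import numpy as np

z = np.exp(-2j * np.pi)
print(np.isclose(z, 1.0))   # True: equal within default tolerances
print(np.real_if_close(z))  # drops the negligible imaginary part, leaving 1.0
```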