Tags: python, math, fft, numerical-methods, numerical-integration

Efficiently compute inverse Fourier transform


The following code was suggested in a very nice answer to another question.

My question about it is: suppose I compute the derivative in Fourier space, as done in this code, by evaluating np.dot( B, yl ). How can I recover the actual derivative in real space by applying an inverse Fourier transform?

import numpy as np

## non-normalized gaussian with sigma=1
def gauss( x ):
    return np.exp( -x**2 / 2 )

## interval on which the gaussian is evaluated
L = 10
## number of sampling points
N = 21
## sample spacing
dl = L / N
## highest frequency detectable
kmax= 1 / ( 2 * dl )

## array of x values
xl = np.linspace( -L/2, L/2, N )
## array of k values
kl = np.linspace( -kmax, kmax, N )

## matrix of exponents
## the Fourier transform is defined via sum f * exp( -2 pi j k x)
## i.e. the 2 pi is in the exponent
## normalization is sqrt(N), where N is the number of sampling points
## this definition makes it forward-backward symmetric
## "outer" also exists in Matlab and basically does the same
exponent = np.outer( -1j * 2 * np.pi * kl, xl ) 
## linear operator for the standard Fourier transformation
A = np.exp( exponent ) / np.sqrt( N )

## the transform of the nth derivative is, via integration by parts, ( 2 pi j k )^n F(k)
## every row needs to be multiplied by the corresponding k
B = np.array( [ 1j * 2 * np.pi * kk * An for kk, An in zip( kl, A ) ] )

## for the part with the linear term, every column needs to be multiplied
## by the corresponding x, or -- as here -- every row is multiplied
## element-wise with the x-vector
C = np.array( [ xl * An for An in  A ] )

## that is the corresponding linear operator
D = B + C

## the gaussian
yl = gauss( xl )

## the transformation with the linear operator
print(  np.dot( D, yl ).round( decimals=9 ) ) 
## ...results in a zero-vector, as expected

Solution

  • One only needs to define the linear operator for the inverse transformation. Following the structure of the posted code, it would be

    expinv = np.outer( 1j * 2 * np.pi * xl, kl ) 
    AInv = np.exp( expinv ) / np.sqrt( N )
    
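    Since the code uses the symmetric 1/sqrt(N) normalization, AInv is just the conjugate transpose of A. As a quick sanity check (a small sketch on top of the arrays defined above):

    ## the symmetric normalization makes the inverse the conjugate transpose
    print( np.allclose( AInv, A.conj().T ) )
    ## expected: True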

    The Fourier transform of the derivative was

    dfF = np.dot( B, yl )
    

    such that the derivative in real space would be

    dfR = np.dot( AInv, dfF ) 
    
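    To check that this really is the derivative, one can compare dfR with the analytic derivative of the Gaussian, -x exp( -x^2 / 2 ). This is only a sketch using the arrays from above; dfR carries a tiny imaginary round-off residue, and the two agree only up to the discretization error of the stencil discussed below.

    ## analytic derivative of the non-normalized gaussian
    dfExact = -xl * gauss( xl )
    ## discard the small imaginary residue and compare
    print( np.round( dfR.real, 3 ) )
    print( np.round( dfExact, 3 ) )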

    In effect, this means that the "derivative" operator is

    np.dot( AInv, B )
    

    which, for small N, is (except at the edges) a tridiagonal matrix with entries (-1, 0, 1), i.e. a classical symmetric numerical derivative. Increasing N, it first changes to (1, -2, 0, 2, -1), i.e. a higher-order approximation of the derivative.

    Eventually, one gets an alternating weighted derivative of the type (..., d, -c, b, -a, 0, a, -b, c, -d, ...).
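
    One way to see this (again just a sketch on top of the code above) is to print a row of the operator away from the edges; the rounded real part shows the stencil, and rerunning with a different N shows how it changes.

    ## inspect the real-space "derivative" operator
    deriv = np.dot( AInv, B )
    ## the middle row, away from the edge effects; small imaginary parts are dropped
    print( np.round( deriv.real[ N//2 ], 2 ) )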