I want to use the FFT in TensorFlow, but I found that the results differ when I use the FFT functions in NumPy and TensorFlow respectively, especially when the input array is large.
import tensorflow as tf
import numpy as np

# Build a complex tensor whose real and imaginary parts are both 1..10000.
aa = tf.lin_space(1.0, 10000.0, 10000)
bb = tf.lin_space(1.0, 10000.0, 10000)
dd = tf.concat([[aa], [bb]], axis=0)
c_input = tf.complex(dd[0, :], dd[1, :])

# tf.fft computes the transform in single precision (complex64).
Spec = tf.fft(c_input)

sess = tf.Session()
uuu = sess.run(Spec)
print(uuu)
# The same signal in NumPy; np.fft.fft computes in double precision (complex128).
aaa = np.linspace(1.0, 10000.0, 10000)
bbb = aaa + 1j * aaa
ccc = np.fft.fft(bbb)
print(ccc)
The results are:
[ 11645833.000000+11645826.j -544529.875000 -6242453.5j
-913097.437500 -781089.0625j ..., 78607.218750 -108219.109375j
103245.156250 -182935.3125j 214871.765625 -790986.0625j ]
[ 50005000.00000000+50005000.j -15920493.78559075+15910493.78559076j
-7962746.10739718 +7952746.10739719j ...,
5300163.19893340 -5310163.19893345j
7952746.10739715 -7962746.10739723j
15910493.78559067-15920493.78559085j]
So, what can I do to get the same result from the FFT function in TensorFlow? Thank you for your answer.
I found that the output dtype of tf.fft is complex64, while the output of np.fft.fft is complex128. Is that the key to this question? How can I solve this problem?
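For example, checking the dtypes of the arrays computed above directly:

print(uuu.dtype)  # complex64  -- tf.fft always computes in single precision
print(ccc.dtype)  # complex128 -- np.fft.fft always returns double precision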
You're right, the difference is exactly in the dtype used in TensorFlow and NumPy. TensorFlow's tf.fft forces the input tensor to be tf.complex64, most probably due to GPU op compatibility. NumPy also hardcodes the array type for FFT: the source code is in native C, fftpack_litemodule.c, where the type is NPY_CDOUBLE, i.e. 128-bit np.complex128. See this issue for details.
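NumPy's hardcoded precision is easy to verify: even a complex64 input is upcast before the transform. A quick illustrative check:

x = np.arange(8, dtype=np.complex64)
print(np.fft.fft(x).dtype)  # complex128 -- NumPy upcasts regardless of input dtype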
So, I'm afraid there's no simple way to make them match. You can try to define a custom TensorFlow op that applies np.fft.fft, but this would require you to evaluate the gradient manually as well. Or avoid applying the FFT to large vectors, so that numerical inaccuracy won't be an issue.
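If you only need the forward transform and no gradient, here is a minimal sketch of that custom-op idea using TF 1.x's tf.py_func (the name np_fft is illustrative, not part of any API):

import tensorflow as tf
import numpy as np

def np_fft(x):
    # Run NumPy's double-precision FFT, then cast back to complex64
    # so the result fits into the rest of a complex64 graph.
    return np.fft.fft(x).astype(np.complex64)

c_input = tf.complex(tf.lin_space(1.0, 10000.0, 10000),
                     tf.lin_space(1.0, 10000.0, 10000))

# tf.py_func has no gradient registered for an arbitrary Python function,
# so you cannot backpropagate through this node without defining one.
spec = tf.py_func(np_fft, [c_input], tf.complex64)

with tf.Session() as sess:
    print(sess.run(spec))

Note that the transform itself still runs in double precision; only the stored result is rounded down to complex64, and tf.py_func always executes on the CPU.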