I have a list of precomputed and stored Python lambda functions, e.g.:
fn_list = [lambda x: x + 2 for x in some_data]
Now, for a Tensor:
X = tf.placeholder(..., shape=[None, 1])
I would like to apply each of the functions in fn_list to each of the values along the None dimension of X. Ideally, the output would have the same shape as X, using tf.reduce_sum to squash all the results from fn_list for each of the values in X.
Below is a sample of code that shows what I'd want from the problem above:
import numpy as np
import tensorflow as tf
def _get_gaussian(loc=0.0, scale=1.0):
    # Gaussian pdf: note the normalization is 1 / (scale * sqrt(2*pi))
    return lambda x: 1.0 / (scale * np.sqrt(2 * np.pi)) * np.exp(-0.5 * np.power((x - loc) / scale, 2))
data = np.random.normal(3, 1.25, 100)
fn_list = [_get_gaussian(loc=x, scale=0.2) for x in data]
X = tf.placeholder(tf.float32, shape=[None, 1])
pred = tf.map_fn(lambda x: tf.map_fn(lambda fn_: tf.reduce_sum(fn_(x)), fn_list), X)
The code above would basically perform some sort of Kernel Density Estimation by first computing a normal distribution around each of the points in data, storing the kernels in fn_list. Then, for a given Tensor X, it could compute the likelihood of each of the values in X. pred would then be a Tensor of shape X.shape.
Is something like this even possible?
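For concreteness, here is a sketch in plain NumPy of the computation I have in mind (the function name gaussian and the example values are just for illustration):

```python
import numpy as np

def gaussian(x, loc=0.0, scale=1.0):
    # Gaussian pdf: 1 / (scale * sqrt(2*pi)) * exp(-0.5 * ((x - loc) / scale)**2)
    return 1.0 / (scale * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - loc) / scale) ** 2)

data = np.random.normal(3, 1.25, 100)   # kernel centers
X = np.array([1.0, 2.0, 3.0])           # points at which to evaluate the density

# For each value in X, sum the kernel evaluated at every center in data.
pred = np.array([sum(gaussian(x, loc=c, scale=0.2) for c in data) for x in X])
print(pred.shape)  # same shape as X: (3,)
```

The question is how to express exactly this, but with X as a TensorFlow placeholder and the kernels stored as Python functions.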
My brain hurts a bit trying to follow this :) but basically, if you break out the last line like this:
pred = tf.map_fn(lambda x: tf.map_fn(lambda fn_: tf.reduce_sum(fn_(x)),
                                     fn_list),
                 X)
then your issue becomes clearer: the inner tf.map_fn is trying to map the function tf.reduce_sum(fn_(x)) onto fn_list, but the latter is a list of functions, not a tensor. Furthermore, the outer tf.map_fn is trying to map the result of the inner map onto X, but that result is not a function but a tensor. So this code needs some rethinking and restructuring: no mapping, with the Gaussian expressed directly as TensorFlow operations, maybe like this:
import numpy as np
import tensorflow as tf

data = np.random.normal(3, 1.25, 5)

#X = tf.placeholder(tf.float32, shape=[None, 1])
X = tf.constant([1, 2, 3], dtype=tf.float32)

scale = 0.2
coeff = 1.0 / (scale * np.sqrt(2 * np.pi))  # Gaussian normalization constant

X_fn = []
for x in data:
    X_fn.append(coeff * tf.exp(-0.5 * tf.pow((X - x) / scale, 2)))

X1 = tf.stack(X_fn, axis=0)     # shape (len(data), len(X)): one row per kernel
X2 = tf.reduce_sum(X1, axis=0)  # sum over the kernels, so X2 has the same shape as X

with tf.Session() as sess:
    print(sess.run(X2))
Output: a tensor with the same shape as X (here, three values); the exact numbers vary from run to run because data is drawn without a fixed seed.
The code above works, but I'm not completely sure it does exactly what you want. I still hope it helps.
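As a footnote: the Python loop over data can be avoided entirely with broadcasting, which works the same way in NumPy and TensorFlow. Here is a NumPy sketch of the same KDE computation (variable names are my own; it assumes the standard 1/(scale * sqrt(2*pi)) Gaussian normalization):

```python
import numpy as np

data = np.random.normal(3, 1.25, 5)         # kernel centers, shape (5,)
X = np.array([1.0, 2.0, 3.0])               # evaluation points, shape (3,)
scale = 0.2
coeff = 1.0 / (scale * np.sqrt(2 * np.pi))  # Gaussian normalization constant

# Broadcasting shape (3, 1) against shape (5,) yields a (3, 5) matrix of
# kernel values: row i holds every kernel evaluated at X[i].
K = coeff * np.exp(-0.5 * ((X[:, None] - data[None, :]) / scale) ** 2)

# Summing over the kernel axis gives one density estimate per point in X.
pred = K.sum(axis=1)
print(pred.shape)  # (3,), same shape as X
```

The same expression carries over to TensorFlow by swapping np.exp for tf.exp and K.sum(axis=1) for tf.reduce_sum(K, axis=1), with no Python-level loop over the kernels.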