Given the following code:
import tensorflow as tf

normal_dist = tf.contrib.distributions.Normal(.5, 1.3)
foo = normal_dist.sample()   # first sample from the distribution
bar = normal_dist.sample()   # second sample from the same distribution
baz = foo + bar

sess = tf.Session()
sess.run(tf.global_variables_initializer())
writer = tf.summary.FileWriter("./logs", graph=tf.get_default_graph())
in the TensorBoard graph, the normal distribution node appears to be duplicated twice, for a total of three nodes (Normal, Normal_1, Normal_2). This seems wasteful, since they all represent the same distribution (same mean and standard deviation).
Is there a way to avoid the duplication, or to otherwise optimize this? I'm looking for best practices.
The sample() method creates a new tensor that receives random values from the distribution. Under the hood, Normal uses the tf.random_normal op, which itself creates a new node in the graph each time it is called.
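You can see this directly by counting the graph's operations before and after a sample() call. This is a minimal sketch (not from the question's code), assuming the same TF 1.x / tf.contrib API:

import tensorflow as tf

dist = tf.contrib.distributions.Normal(.5, 1.3)

ops_before = len(tf.get_default_graph().get_operations())
_ = dist.sample()  # each call adds a new subgraph of ops
ops_after = len(tf.get_default_graph().get_operations())

print(ops_after - ops_before)  # positive: new nodes were added to the graph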
If you don't want to create new ops each time, you can simply evaluate the same random tensor multiple times:
...
with tf.Session() as sess:
    print(sess.run(foo))
    print(sess.run(foo))
    print(sess.run(foo))
This will output a different random value each time, without adding any new ops to the graph.
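One caveat: within a single sess.run call, each tensor is evaluated only once, so sess.run([foo, foo]) returns two identical values. If you need several independent draws in the same step without duplicating ops, one option (a sketch using the same contrib API) is to pass a sample shape to a single sample() call:

samples = normal_dist.sample(2)   # one sample op producing two independent draws
baz = samples[0] + samples[1]

with tf.Session() as sess:
    print(sess.run(baz))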
By the way, note that Normal_1 and Normal_2 in the TensorBoard picture are not objects, but name scopes that contain the ops used to compute the value (you can expand and zoom in to see that). The bottom Normal is also a scope, containing some tensors common to foo and bar.
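If the auto-generated scope names bother you, sample() also accepts a name argument, so the scopes in TensorBoard become easier to tell apart. A small sketch (the names foo_sample and bar_sample are just illustrative):

foo = normal_dist.sample(name="foo_sample")  # shows up as the foo_sample scope in TensorBoard
bar = normal_dist.sample(name="bar_sample")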