I'm confused about how to get my TensorBoard graph visualization to capture the fact that I'm feeding computed values to some of my placeholders.
I have defined placeholders
with tf.name_scope('params'):
    keep_prob_later = tf.placeholder(tf.float32, name='keep_prob_later')
    keep_prob_early = tf.placeholder(tf.float32, name='keep_prob_early')
    keep_prob_input = tf.placeholder(tf.float32, name='keep_prob_input')
and corresponding tensors for computing their values
with tf.name_scope('param_vals'):
    with tf.name_scope('keep_prob_later_val'):
        keep_prob_later_val = tf.sub(1.0, tf.train.exponential_decay(
            1 - FLAGS.keep_prob_later, global_step, FLAGS.decay_steps,
            FLAGS.dropout_decay_rate, staircase=False))
    with tf.name_scope('keep_prob_early_val'):
        keep_prob_early_val = tf.sub(1.0, tf.train.exponential_decay(
            1 - FLAGS.keep_prob_early, global_step, FLAGS.decay_steps,
            FLAGS.dropout_decay_rate, staircase=False))
    with tf.name_scope('keep_prob_input_val'):
        keep_prob_input_val = tf.sub(1.0, tf.train.exponential_decay(
            1 - FLAGS.keep_prob_input, global_step, FLAGS.decay_steps,
            FLAGS.dropout_decay_rate, staircase=False))
which I then feed when I train my model
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys,
                                keep_prob_later: sess.run(keep_prob_later_val),
                                keep_prob_early: sess.run(keep_prob_early_val),
                                keep_prob_input: sess.run(keep_prob_input_val)})
but my TensorBoard graph visualization does not show these "hooked up".
I see the placeholders connected correctly to the rest of my graph, and I see all of the corresponding computed values too, but the latter don't connect to the former.
Is this the expected behavior? Is there a way to capture in the TensorBoard visualization of my graph the fact that the computed values are used to fill the corresponding placeholders?
If there's no way to connect the computed values to the graph, why show them at all? And why do other computed values appear correctly connected? For example, my computed momentum values, which are defined just like the fed dropout values above
with tf.name_scope('param_vals'):
    with tf.name_scope('momentum_val'):
        momentum_val = tf.sub(1.0, tf.train.exponential_decay(
            1 - FLAGS.initial_momentum, global_step, FLAGS.decay_steps,
            FLAGS.momentum_decay_rate, staircase=False))
do show up connected to all the parts of the graph they influence.
I see the placeholders connected correctly to the rest of my graph, and I see all of the corresponding computed values too, but the latter don't connect to the former.
Is this the expected behavior?
Indeed, this is the expected behavior. Your graph is decomposed into two parts:

1. the ops that compute the keep_prob_***_val values;
2. the keep_prob_*** placeholders and everything downstream of them.

Parts 1 and 2 are not connected in the graph. When you call sess.run(keep_prob_***_val), you get back a plain Python object (a NumPy scalar). That object is then fed to the second part of the graph through feed_dict, but the graph has no way of knowing that it came from the first part.
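To make this concrete, here is a small hypothetical snippet (reusing the names from the question) that shows what the fetch actually returns:

# sess.run() pulls the value out of the graph as ordinary Python data.
val = sess.run(keep_prob_later_val)
print(type(val))   # numpy.float32 -- just a number, with no graph identity
# feed_dict={keep_prob_later: val} then pushes this raw number back in, so
# TensorBoard only ever sees the placeholder, not where the value came from.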
Is there a way to capture in the TensorBoard visualization of my graph the fact that the computed values are used to fill the corresponding placeholders?
You can use tf.cond() (doc) to choose between the values computed in the first part of the graph and the test-time values (like 1. for keep_prob):
# Boolean flag fed at run time: True for training, False for evaluation.
is_train = tf.placeholder(tf.bool, [])

def when_train():
    return keep_prob_later_val          # decayed keep probability from the graph

def when_not_train():
    return tf.constant(1.0)             # no dropout at test time

keep_prob_later = tf.cond(is_train, when_train, when_not_train)
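With this in place, the dropout ops consume keep_prob_later directly and only the boolean flag is fed. A rough usage sketch, assuming the rest of the model (x, y_, train_step, accuracy) is built as in the question and h is some hidden-layer tensor:

h_drop = tf.nn.dropout(h, keep_prob_later)   # uses the tensor, so the edge shows up

# Training: the decayed value from keep_prob_later_val flows through tf.cond.
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys, is_train: True})

# Evaluation: the constant 1.0 branch is taken instead, i.e. no dropout.
sess.run(accuracy, feed_dict={x: test_xs, y_: test_ys, is_train: False})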
And why do other computed values appear correctly connected? For example, my computed momentum values, which are defined just like the fed dropout values above, do show up connected to all the parts of the graph they influence.
In this case, you do not go through an intermediate placeholder, so the graph is fully connected!
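For example (a hypothetical sketch; FLAGS.learning_rate and cross_entropy stand in for whatever your model actually uses), passing the momentum tensor straight to the optimizer records the dependency in the graph, which is why TensorBoard can draw the edge:

# momentum_val is consumed directly by the optimizer op, so the graph knows
# about the dependency -- no placeholder, no feed_dict, no missing edge.
train_step = tf.train.MomentumOptimizer(FLAGS.learning_rate, momentum_val).minimize(
    cross_entropy, global_step=global_step)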