In my TensorFlow code I have connected several parameters to some logic in my graph, but the corresponding TensorBoard visualization does not show these connections directly; instead it only shows connections between the containing scopes.
Specifically, I have
with tf.name_scope('params_structure'):
    is_train = tf.placeholder(tf.bool, [], name='is_train')
    keep_prob_later_param = tf.identity(FLAGS.keep_prob_later, name='keep_prob_later')
    keep_prob_early_param = tf.identity(FLAGS.keep_prob_early, name='keep_prob_early')
    keep_prob_input_param = tf.identity(FLAGS.keep_prob_input, name='keep_prob_input')

with tf.name_scope('structure_logic'):
    # Note that the summaries for these variables are the values used in training, not for computing stats
    with tf.name_scope('keep_prob_later_decay'):
        keep_prob_later_decay = tf.sub(1.0, tf.train.exponential_decay(1 - keep_prob_later_param, global_step,
                                                                       FLAGS.decay_steps,
                                                                       FLAGS.dropout_decay_rate, staircase=False))
    with tf.name_scope('keep_prob_early_decay'):
        keep_prob_early_decay = tf.sub(1.0, tf.train.exponential_decay(1 - keep_prob_early_param, global_step,
                                                                       FLAGS.decay_steps,
                                                                       FLAGS.dropout_decay_rate, staircase=False))
    with tf.name_scope('keep_prob_input_decay'):
        keep_prob_input_decay = tf.sub(1.0, tf.train.exponential_decay(1 - keep_prob_input_param, global_step,
                                                                       FLAGS.decay_steps,
                                                                       FLAGS.dropout_decay_rate, staircase=False))
    with tf.name_scope('keep_prob_all'):
        keep_prob_all = tf.identity(1.0)

    keep_prob_later = tf.cond(is_train, lambda: keep_prob_later_decay, lambda: keep_prob_all)
    keep_prob_early = tf.cond(is_train, lambda: keep_prob_early_decay, lambda: keep_prob_all)
    keep_prob_input = tf.cond(is_train, lambda: keep_prob_input_decay, lambda: keep_prob_all)
In my TensorBoard visualization I see all of these elements as expected, but the connections between the keep_prob_..._params and the corresponding keep_prob_..._decay operations are not established. Instead I get only the connections between the containing scopes as a group (e.g. from params_structure, as highlighted below, to all of the keep_prob_..._decay operations):

The same is true of the connection from is_train into the conditional operations: only the entire containing scope (highlighted above) is connected.
How do I ensure that the connections among my graph elements, and not just their enclosing scopes, are represented in TensorBoard?
Note that this isn't just an issue of compulsive completeness: as it stands, the TensorBoard representation completely fails to establish which of the params_structure elements connect to which of the structure_logic elements: it could be any, all, or even none of them!
TensorBoard has to make choices about how to represent the graph, because displaying every real connection would be unreadable. That is why name scopes are so useful: you get a view of the whole graph, and can then zoom in on the elements of interest.
However, as you say, with name scopes TensorBoard will display one big connection between the two boxes params_structure and structure_logic (9 tensors are in this connection).
You write that "the TensorBoard representation completely fails to establish which of the params_structure elements connect to which of the structure_logic elements: it could be any, all, or even none of them!"

This is not quite right: all of the information in the graph is still represented.
Although it is not displayed graphically, the connection between params_structure/keep_prob_later and structure_logic/keep_prob_later_decay is shown when you click on the node params_structure/keep_prob_later and look at the info box in the top right: under the category "Outputs", you can see the node structure_logic/keep_prob_later_decay.
If you really want to see the connection drawn in the graph, you should put the node keep_prob_later inside the name scope structure_logic/keep_prob_later_decay.
PS: "Note that this isn't just an issue of compulsive completeness." That one made me laugh :)