Tags: machine-learning, neural-network, tensorflow, conv-neural-network, tensorboard

Why does TensorFlow create extra name spaces for my variables in the TensorBoard visualization?


I create variables as follows:

x = tf.placeholder(tf.float32, shape=[None, D], name='x-input') # M x D
# Variables Layer1
#std = 1.5*np.pi
std = 0.1
W1 = tf.Variable( tf.truncated_normal([D,D1], mean=0.0, stddev=std, name='W1') ) # (D x D1)
S1 = tf.Variable(tf.constant(100.0, shape=[1], name='S1')) # (1 x 1)
C1 = tf.Variable( tf.truncated_normal([D1,1], mean=0.0, stddev=0.1, name='C1') ) # (D1 x 1)

but for some reason TensorFlow adds extra variable blocks to my visualization:

[Screenshot: TensorBoard graph showing the extra variable blocks]

Why is it doing this and how do I stop it?


Solution

  • You are passing the name to the wrong op in TF

    W1 = tf.Variable( tf.truncated_normal([D,D1], mean=0.0, stddev=std, name='W1') )
                      \----------------------------------------------------------/
                                               initializer 
         \-------------------------------------------------------------------------/
                                     actual variable
    

    Thus your code creates an unnamed variable and names the initializer op W1. This is why the node you see in the graph named W1 is not your W1 but the renamed initializer, and what should be your W1 is called Variable (the default name TF assigns to unnamed ops). It should be

    W1 = tf.Variable( tf.truncated_normal([D,D1], mean=0.0, stddev=std), name='W1' )
    

    This will create a node named W1 for the actual variable, and it will have a small initialization node attached (which is used to seed it with random values).
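
    For completeness, here is a minimal sketch (TF 1.x API, as in the question) of how all three layer-1 variables could be declared so that each shows up under its own name in TensorBoard. The concrete values of D and D1 and the './logs' directory are illustrative assumptions, not taken from the question:

    import tensorflow as tf

    D, D1 = 784, 64   # assumed example dimensions
    std = 0.1

    x = tf.placeholder(tf.float32, shape=[None, D], name='x-input')  # M x D

    # name= goes on tf.Variable, not on the initializer op
    W1 = tf.Variable(tf.truncated_normal([D, D1], mean=0.0, stddev=std), name='W1')  # (D x D1)
    S1 = tf.Variable(tf.constant(100.0, shape=[1]), name='S1')                       # (1 x 1)
    C1 = tf.Variable(tf.truncated_normal([D1, 1], mean=0.0, stddev=0.1), name='C1')  # (D1 x 1)

    print(W1.name, S1.name, C1.name)  # -> W1:0 S1:0 C1:0, no stray "Variable" nodes

    # Write the graph so it can be inspected in TensorBoard (assumed log directory)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        tf.summary.FileWriter('./logs', sess.graph)

    With the names attached to the variables themselves, the TensorBoard graph shows W1, S1 and C1 as their own nodes, each with only a small init subgraph attached.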