I have a problem with TensorFlow:
The following code produces a correct(ish) graph for a convolutional block:
def conv_layer(self, inputs, filter_size=3, num_filters=256, name=None):
    scope_name = name
    if name is None:
        scope_name = "conv_layer"
    with tf.name_scope(scope_name):
        # convolution with no built-in activation; batch norm and leaky ReLU follow
        conv = tf.contrib.layers.conv2d(inputs, num_filters, filter_size, activation_fn=None)
        batch_norm = tf.contrib.layers.batch_norm(conv)
        act = tf.nn.leaky_relu(batch_norm)
        return act
The problem is that the tf.layers API creates some ugly variables that do not actually stay within the name_scope. Here is the TensorBoard view so you can see what I mean.
Is there any way to get those variables to go into the scope? This is a big problem when it comes to visualizing the graph, because I plan for this network to be much larger. (As you can see to the right, this is already a big problem; I have to remove those nodes from the main graph manually every time I boot up TensorBoard.)
You can try using tf.variable_scope instead. tf.name_scope is ignored by variables created via tf.get_variable(), which is what tf.layers functions usually use. This is in contrast to variables created via tf.Variable.
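For example, here is a minimal sketch (assuming TensorFlow 1.x) showing how the two variable-creation mechanisms interact with the two kinds of scopes:

import tensorflow as tf

with tf.name_scope("outer"):
    v1 = tf.Variable(0.0, name="v1")      # tf.Variable picks up the name_scope
    v2 = tf.get_variable("v2", shape=[])  # tf.get_variable ignores the name_scope

print(v1.name)  # outer/v1:0
print(v2.name)  # v2:0

with tf.variable_scope("scoped"):
    v3 = tf.get_variable("v3", shape=[])  # tf.get_variable respects variable_scope

print(v3.name)  # scoped/v3:0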
See this question for an (albeit somewhat outdated) explanation of the differences.
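Applied to the code in the question, a minimal rewrite might look like the sketch below. Since tf.variable_scope also opens a corresponding name_scope by default, the ops should still group together in TensorBoard, and the layer variables should now land under the same scope:

def conv_layer(self, inputs, filter_size=3, num_filters=256, name=None):
    scope_name = name if name is not None else "conv_layer"
    # variable_scope is respected by tf.get_variable(), so the variables
    # created inside tf.contrib.layers should end up under this scope too
    with tf.variable_scope(scope_name):
        conv = tf.contrib.layers.conv2d(inputs, num_filters, filter_size, activation_fn=None)
        batch_norm = tf.contrib.layers.batch_norm(conv)
        return tf.nn.leaky_relu(batch_norm)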