I'm using the tensorflow-deeplab-resnet model, which converts the ResNet model implemented in Caffe to TensorFlow using caffe-tensorflow.
I'd like to know how I can access individual variables of the model that was imported from Caffe, so I can check what is going wrong.
I tried
allTrainVars = tf.trainable_variables()
for v in allTrainVars:
    print(v.name)
which outputs
[...]
res5c_branch2c/weights:0
bn5c_branch2c/scale:0
bn5c_branch2c/offset:0
bn5c_branch2c/mean:0
bn5c_branch2c/variance:0
fc1_voc12_c0/weights:0
fc1_voc12_c0/biases:0
fc1_voc12_c1/weights:0
fc1_voc12_c1/biases:0
fc1_voc12_c2/weights:0
fc1_voc12_c2/biases:0
fc1_voc12_c3/weights:0
fc1_voc12_c3/biases:0
The fc1_voc12_c* layers are the ones I'm interested in: they need to be reinitialized randomly. But when I try to access one of them and attach a summary to the variable like this
var = [v for v in tf.trainable_variables() if v.name == "fc1_voc12_c0/weights:0"][0]
tf.summary.histogram("fc1_voc12_c0/weights_0", var)
the variable never shows up in TensorBoard. The only thing displayed in TensorBoard is the graph itself.
How can I access these variables so that I can monitor them in TensorBoard?
Can I infer the correct names of the variables I'd like to monitor just by looking at the graph (see picture)?
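For what it's worth, the name-based lookup I have in mind can be sketched on plain name strings, without needing a TensorFlow graph at all, using glob-style matching. The helper function and the sample list are my own illustration, not part of the model's code:

```python
import fnmatch

def select_by_pattern(names, pattern):
    """Return the names matching a glob-style pattern, e.g. 'fc1_voc12_c*'."""
    return [n for n in names if fnmatch.fnmatch(n, pattern)]

# A few of the variable names printed by tf.trainable_variables() above.
variable_names = [
    "res5c_branch2c/weights:0",
    "bn5c_branch2c/scale:0",
    "fc1_voc12_c0/weights:0",
    "fc1_voc12_c0/biases:0",
    "fc1_voc12_c1/weights:0",
]

for name in select_by_pattern(variable_names, "fc1_voc12_c*"):
    print(name)
```

In the real graph the same pattern could be applied to `[v.name for v in tf.trainable_variables()]` to pick out every fc1_voc12_c* variable at once instead of hard-coding each name.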
Edit
I narrowed the focus of my question a little, since the original problem was caused by a bug that has since been fixed by the author of the code.
It turns out the approach described in the question actually works; I just needed to shut down TensorBoard completely and restart it for every new log file I created.