Tags: python, tensorflow, tensorboard

Unconnected series in TensorFlow graph


My sparse autoencoder model consists mainly of 10 convolution and 10 transpose-convolution layers. After training completes, I get the graph below in TensorBoard.

TensorBoard graph

My understanding is that this graph is not connected, because Conv1 and Conv2 appear unconnected. This is my first TensorFlow model, so I am confused. Please suggest what I am doing wrong. This code was developed based on the CIFAR-10 multi-GPU code.

Model Snippet

def inference(images, labels, keep_prob, batch_size):
  """Build the CNN model.
  Args:
    images: Images returned from distorted_inputs() or inputs().
    labels: Labels corresponding to the images.
    keep_prob: Dropout probability.
    batch_size: Number of images per batch.
  Returns:
    Logits.
  """

  # conv1
  with tf.variable_scope('conv1') as scope:
    kernel1 = _variable_with_weight_decay('weights', shape=[5, 5, model_params.org_image['channels'], 100], stddev=1e-4, wd=0.0)
    conv1 = tf.nn.conv2d(images, kernel1, [1, 1, 1, 1], padding='SAME')
    biases1 = _variable_on_cpu('biases', [100], tf.constant_initializer(0.0))
    bias1 = tf.nn.bias_add(conv1, biases1)
    conv1 = tf.nn.relu(bias1, name=scope.name)
    # NOTE: tf.abs() adds an Abs op to the graph that nothing consumes;
    # print() only shows the Tensor's metadata, not its values.
    print(tf.abs(conv1))
    _activation_summary(conv1)

  # norm1 (offset/scale must be None, not False, when unused)
  norm1 = tf.nn.batch_normalization(conv1, mean=0.6151888371, variance=0.2506813109, offset=None, scale=None, variance_epsilon=0.001, name='norm1')

  # conv2
  with tf.variable_scope('conv2') as scope:
    kernel2 = _variable_with_weight_decay('weights', shape=[5, 5, 100, 120], stddev=1e-4, wd=0.0)
    conv2 = tf.nn.conv2d(norm1, kernel2, [1, 1, 1, 1], padding='SAME')
    biases2 = _variable_on_cpu('biases', [120], tf.constant_initializer(0.1))
    bias2 = tf.nn.bias_add(conv2, biases2)
    conv2 = tf.nn.relu(bias2, name=scope.name)
    # Same caveat: this adds another unconnected Abs op.
    print(tf.abs(conv2))
    _activation_summary(conv2)

  # norm2
  norm2 = tf.nn.batch_normalization(conv2, mean=0.6151888371, variance=0.2506813109, offset=None, scale=None, variance_epsilon=0.001, name='norm2')
  # pool2

....
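Side note: each print(tf.abs(...)) call above creates an Abs op that nothing consumes, and such ops show up as stray nodes in TensorBoard. Below is a minimal sketch, assuming the model has already been built into the default graph, for listing ops whose outputs are never consumed (terminal ops such as summaries and the final logits will appear in this list too):

import tensorflow as tf

# Build the model into the default graph first, e.g.:
# logits = inference(images, labels, keep_prob, batch_size)

graph = tf.get_default_graph()

# Every tensor that some op consumes as an input.
consumed = {t.name for op in graph.get_operations() for t in op.inputs}

# Ops whose outputs nobody consumes: the stray Abs ops from
# print(tf.abs(...)) will be among them.
dangling = [op.name for op in graph.get_operations()
            if op.outputs and all(t.name not in consumed for t in op.outputs)]
print(dangling)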

I also do not understand why "IsVariable" is showing up in my graph. Any help will be highly appreciated.
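For what it's worth, "IsVariable" is most likely TensorBoard's truncated label for an IsVariableInitialized op, which TensorFlow adds whenever something checks variable readiness (tf.train.Supervisor and similar utilities do this internally). A minimal sketch that reproduces such nodes, assuming TF 1.x:

import tensorflow as tf

v = tf.Variable(tf.zeros([3]), name='v')

# Either of these adds IsVariableInitialized ops to the graph, which
# TensorBoard then renders alongside the variable:
ready = tf.is_variable_initialized(v)
unready = tf.report_uninitialized_variables()

with tf.Session() as sess:
    print(sess.run(ready))    # False before the initializer runs
    sess.run(tf.global_variables_initializer())
    print(sess.run(ready))    # True afterwards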

Update

I found this solution, which says that "the multi-GPU graph looks like that because the name scoping in the multi-GPU version creates tower_N namespaces that have incoming edges (tensors) above a certain threshold, at which point we extract those nodes on the side, since usually they end up being auxiliary and not part of the main net architecture." Still, I am not sure whether my graph is correct.
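For context, the tower scoping that answer refers to follows roughly this pattern in the CIFAR-10 multi-GPU trainer. This is a simplified sketch; NUM_GPUS and the tower_loss helper are made up for illustration:

import tensorflow as tf

NUM_GPUS = 2  # assumption for illustration

def tower_loss(scope):
  # Stand-in for the per-tower model + loss (hypothetical helper).
  x = tf.random_normal([8, 4])
  w = tf.get_variable('w', [4, 1])
  return tf.reduce_mean(tf.square(tf.matmul(x, w)))

opt = tf.train.GradientDescentOptimizer(0.1)
tower_grads = []
with tf.variable_scope(tf.get_variable_scope()):
  for i in range(NUM_GPUS):
    with tf.device('/gpu:%d' % i):
      with tf.name_scope('tower_%d' % i) as scope:
        # All towers share variables, but each gets its own tower_N
        # name scope. TensorBoard extracts nodes with many incoming
        # edges from such scopes and draws them off to the side, which
        # can make the rendered graph look disconnected.
        loss = tower_loss(scope)
        tf.get_variable_scope().reuse_variables()
        tower_grads.append(opt.compute_gradients(loss))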


Solution

  • I ran the original CIFAR-10 multi-GPU code and checked its TensorBoard outcome, which looks similar to my graph. So my conclusion is that my graph is fine; a sketch for exporting a graph to TensorBoard for this kind of comparison follows below.

    CIFAR-10 TensorBoard outcome
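To reproduce that comparison, the graph can be dumped for TensorBoard with a tf.summary.FileWriter. A minimal sketch, assuming TF 1.x (the log directory name is arbitrary):

import tensorflow as tf

# Build the model into the default graph first, e.g.:
# logits = inference(images, labels, keep_prob, batch_size)

with tf.Session() as sess:
    # Writing the session's graph makes it appear under TensorBoard's
    # "Graphs" tab, where it can be compared against the CIFAR-10 run.
    writer = tf.summary.FileWriter('/tmp/my_model_logs', sess.graph)
    writer.close()

# Then: tensorboard --logdir=/tmp/my_model_logs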