tensorflow, deep-learning, tf-slim, batch-normalization, tensorflow-slim

How to add slim.batch_norm after a custom op/layer in TensorFlow?


I have a function that builds an architecture like this:

Input(x) -> o=My_ops(x,128) -> o=slim.batch_norm(o)

So, my function is:

def _build_block(self, x, name, is_training=True):
  with tf.variable_scope(name) as scope:
    o = my_ops(x, 256)
    batch_norm_params = {
      'decay': 0.9997,
      'epsilon': 1e-5,
      'scale': True,
      'updates_collections': tf.GraphKeys.UPDATE_OPS,
      'fused': None,  # Use fused batch norm if possible.
      'is_training': is_training
    }
    with slim.arg_scope([slim.batch_norm], **batch_norm_params) as bn:
      return slim.batch_norm(o)

Am I right? Can I set is_training like that in the function above? If not, could you help me fix it?


Solution

  • Your function is OK. And yes, you can pass is_training to slim.batch_norm like that.

    But your code looks unnecessarily complicated to me. Here's an equivalent version:

    def _build_block(self, x, name, is_training=True):
      with tf.variable_scope(name):
        o = my_ops(x, 256)
        return slim.batch_norm(o, decay=0.9997, epsilon=1e-5, scale=True, is_training=is_training)
    

    Note that I dropped arg_scope (its main use case is to repeat the same arguments across multiple layers, and you have only one), omitted updates_collections=tf.GraphKeys.UPDATE_OPS and fused=None (because these are the default values), and dropped as scope (because it's unused).
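
    Two small follow-ups, in case they help.

    If you ever do stack several of these blocks, that's where arg_scope pays off. A sketch (my_ops and the sizes are just carried over from your question):

    with slim.arg_scope([slim.batch_norm], decay=0.9997, epsilon=1e-5,
                        scale=True, is_training=is_training):
      o = slim.batch_norm(my_ops(x, 256))   # same batch-norm settings
      o = slim.batch_norm(my_ops(o, 256))   # reused without repeating them

    Also, since the update ops for the moving mean and variance go into tf.GraphKeys.UPDATE_OPS, make sure your training op depends on them, otherwise the moving statistics are never updated. The usual pattern looks roughly like this (loss and the optimizer choice are placeholders, not taken from your code):

    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
      # run the moving-average updates before each training step
      train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)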