I'm trying to convert a network from tf-slim's conv2d to tf.layers.conv2d, since tf.layers looks like the more supported and future-proof option. The function signatures are fairly similar, but is there something algorithmically different between the two? I'm getting different output tensor dimensions than expected with this call:
x = tf.layers.conv2d(inputs=x,
                     filters=256,
                     kernel_size=[3, 3],
                     trainable=True)
As opposed to this:
x = slim.conv2d(x, 256, 3)
The difference in output dimensions comes from padding: by default, slim.conv2d uses 'SAME' padding, whereas tf.layers.conv2d uses 'VALID' padding. With 'VALID' padding and stride 1, a 3x3 kernel shrinks each spatial dimension by kernel_size - 1 = 2, while 'SAME' padding preserves the input's spatial dimensions.
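You can verify this by checking the static output shapes of both variants. A minimal sketch, assuming TF 1.x (where tf.layers is available) and a hypothetical 32x32x3 input:

import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, 32, 32, 3])

# Default padding is 'valid': each spatial dim shrinks by kernel_size - 1
valid = tf.layers.conv2d(inputs, filters=256, kernel_size=[3, 3])
print(valid.shape)  # (?, 30, 30, 256)

# With 'same' padding (and stride 1), spatial dims are preserved
same = tf.layers.conv2d(inputs, filters=256, kernel_size=[3, 3], padding='same')
print(same.shape)   # (?, 32, 32, 256)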
If you want to reproduce slim's output dimensions, pass padding='same' explicitly:
x = tf.layers.conv2d(x, 256, 3, padding='same')
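One more caveat if you're after matching behavior rather than just matching shapes: slim.conv2d also applies a ReLU by default (its activation_fn parameter defaults to tf.nn.relu), whereas tf.layers.conv2d applies no activation by default. Assuming those defaults, a closer equivalent would be:

# slim.conv2d's defaults include activation_fn=tf.nn.relu, so add it explicitly
x = tf.layers.conv2d(x, 256, 3, padding='same', activation=tf.nn.relu)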