Tags: conv-neural-network, tensorflow2.0, keras-layer, deconvolution

Upsampling convolution has no parameters


I have read many papers where convolutional neural networks are used for super-resolution, image segmentation, autoencoders, and so on. They use different kinds of upsampling, aka deconvolution (there is also a discussion of this in a different question). TensorFlow provides a function for this, and Keras provides several layers.

I implemented the Keras one:

 x = tf.keras.layers.UpSampling1D(size=2)(x)
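As a quick sanity check (a minimal sketch, not from the question itself), you can verify that `UpSampling1D` merely repeats each timestep and holds no trainable weights:

```python
import numpy as np
import tensorflow as tf

# UpSampling1D repeats each temporal step `size` times -- no weights involved.
x = np.array([[[1.0], [2.0], [3.0]]])  # shape (batch=1, steps=3, channels=1)
layer = tf.keras.layers.UpSampling1D(size=2)
y = layer(x)

print(y.numpy().flatten().tolist())  # [1.0, 1.0, 2.0, 2.0, 3.0, 3.0]
print(len(layer.trainable_weights))  # 0 -- nothing to learn
```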

and I used this one, borrowed from a super-resolution repo:

 class SubPixel1D(tf.keras.layers.Layer):
     """Sub-pixel (pixel-shuffle) upsampling for 1-D signals."""

     def __init__(self, r):
         super(SubPixel1D, self).__init__()
         self.r = r

     def call(self, inputs):
         with tf.name_scope('subpixel'):
             # Interleave the channel axis into the temporal axis.
             X = tf.transpose(inputs, [2, 1, 0])                        # (r, w, b)
             X = tf.compat.v1.batch_to_space_nd(X, [self.r], [[0, 0]])  # (1, r*w, b)
             X = tf.transpose(X, [2, 1, 0])                             # (b, r*w, 1)
         return X
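To see what this layer computes, here is a minimal sketch of the same shuffle using raw ops (with `r=2`, an assumed example value): the `r` channels are interleaved into the temporal axis, so the shape goes from `(b, w, r)` to `(b, r*w, 1)`, purely by rearranging values:

```python
import numpy as np
import tensorflow as tf

r = 2
# channel 0 holds [0, 2, 4, 6], channel 1 holds [1, 3, 5, 7]
x = np.arange(8, dtype=np.float32).reshape(1, 4, 2)  # (batch, width, channels=r)

X = tf.transpose(x, [2, 1, 0])           # (r, w, b)
X = tf.batch_to_space(X, [r], [[0, 0]])  # (1, r*w, b) -- interleaves the r "batches"
y = tf.transpose(X, [2, 1, 0])           # (b, r*w, 1)

print(tuple(y.shape))                # (1, 8, 1)
print(y.numpy().flatten().tolist())  # [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
```

Since the shuffle is a pure reindexing of the input, there is nothing for it to learn by itself; in the sub-pixel architecture the learning happens in the convolution layers that produce those `r` channels.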

But I realized that neither of them has any parameters in my model summary. Isn't it necessary for these layers to have parameters so that they can learn the upsampling?


Solution

  • In Keras, upsampling simply repeats your input up to the size provided (you can find the documentation here), so there is no need for these layers to have parameters.

    I think you have confused upsampling with transposed convolution (deconvolution).
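    For comparison, a minimal sketch (not part of the original answer) using Keras' `Conv1DTranspose`, which does learn its upsampling kernel and therefore shows trainable parameters in the model summary:

```python
import tensorflow as tf

# Conv1DTranspose learns its upsampling kernel, unlike UpSampling1D.
inputs = tf.keras.Input(shape=(8, 4))
up = tf.keras.layers.Conv1DTranspose(
    filters=4, kernel_size=3, strides=2, padding='same')(inputs)
model = tf.keras.Model(inputs, up)

model.summary()            # kernel (3*4*4) + bias (4) = 52 trainable parameters
print(model.output_shape)  # (None, 16, 4) -- temporal axis doubled by strides=2
```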