I am trying to set up a simple convolutional autoencoder. Here is my current model summary:
Layer (type)                 Output Shape              Param #
=================================================================
input (InputLayer)           (None, 64, 64, 1)         0
encoder_conv_1 (Conv2D)      (None, 64, 64, 32)        320
max_pooling2d_1 (MaxPooling2 (None, 32, 32, 32)        0
decoder_conv_1 (Conv2D)      (None, 30, 30, 32)        9248
up_sampling2d_1 (UpSampling2 (None, 60, 60, 32)        0
=================================================================
Why isn't my last layer going back to (64, 64, 1)? Or rather, why does the decoder_conv_1 layer output (30, 30, 32)?
You're missing padding='same'. Conv2D defaults to padding='valid', so a 3x3 kernel over a 32x32 input produces 32 - 3 + 1 = 30, which is exactly why decoder_conv_1 outputs (30, 30, 32). With padding='same' the spatial dimensions are preserved. Try it this way:
from tensorflow.keras.layers import Input, Conv2D, MaxPool2D, UpSampling2D
from tensorflow.keras.models import Model

inp = Input((64, 64, 1))
c = Conv2D(32, 3, padding='same')(inp)  # (64, 64, 32)
c = MaxPool2D()(c)                      # (32, 32, 32)
c = Conv2D(32, 3, padding='same')(c)    # <=== padding='same' keeps (32, 32, 32)
c = UpSampling2D()(c)                   # (64, 64, 32)
out = Conv2D(1, 3, padding='same')(c)   # (64, 64, 1)
m = Model(inp, out)
m.summary()
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_5 (InputLayer) [(None, 64, 64, 1)] 0
_________________________________________________________________
conv2d_8 (Conv2D) (None, 64, 64, 32) 320
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 32, 32, 32) 0
_________________________________________________________________
conv2d_9 (Conv2D) (None, 32, 32, 32) 9248
_________________________________________________________________
up_sampling2d_2 (UpSampling2 (None, 64, 64, 32) 0
_________________________________________________________________
conv2d_10 (Conv2D) (None, 64, 64, 1) 289
=================================================================
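For reference, here is the shape arithmetic behind the two padding modes, as a minimal sketch (the helper name conv_out_size is my own, not a Keras function; the formulas are the standard ones Keras applies per spatial dimension):

import math

def conv_out_size(in_size, kernel, stride=1, padding='valid'):
    """Spatial output size of a conv layer along one dimension."""
    if padding == 'valid':
        # no padding: the kernel must fit entirely inside the input
        return math.floor((in_size - kernel) / stride) + 1
    else:  # 'same': input is zero-padded so every position is covered
        return math.ceil(in_size / stride)

print(conv_out_size(32, 3, padding='valid'))  # 30 <- your decoder_conv_1
print(conv_out_size(32, 3, padding='same'))   # 32 <- with padding='same'

This also explains the (60, 60, 32) in your original summary: UpSampling2D doubles whatever it receives, so 30 became 60 instead of 64.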