I have implemented a variational autoencoder with CNN layers for the encoder and decoder; the code is shown below. My training data (train_X) consists of 40,000 images of size 64 x 78 x 1 and my validation data (valid_X) consists of 4,500 images of the same size.
When I use square images (e.g. 64 x 64) everything works well, but with the images mentioned above (64 x 78) I get the following error:
File "C:\Users\user\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\engine\training.py", line 1039, in fit
validation_steps=validation_steps)
File "C:\Users\user\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\engine\training_arrays.py", line 199, in fit_loop
outs = f(ins_batch)
File "C:\Users\user\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py", line 2715, in __call__
return self._call(inputs)
File "C:\Users\user\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py", line 2675, in _call
fetched = self._callable_fn(*array_vals)
File "C:\Users\user\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1458, in __call__
run_metadata_ptr)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [655360] vs. [638976]
[[{{node training/Adam/gradients/loss/decoder_loss/sub_grad/BroadcastGradientArgs}}]]
What do I have to change in my code so that it also works with non-square images? I think the problem is in the decoder part.
import keras
from keras import backend as K
from keras.layers import (Dense, Input, Flatten, Lambda, Conv2D,
                          Reshape, Conv2DTranspose)
from keras.models import Model
from keras.losses import mse
def sampling(args):
    # reparameterization trick: z = mu + sigma * epsilon
    z_mean, z_log_var = args
    batch = K.shape(z_mean)[0]
    dim = K.int_shape(z_mean)[1]
    epsilon = K.random_normal(shape=(batch, dim))
    return z_mean + K.exp(0.5 * z_log_var) * epsilon
inner_dim = 16
latent_dim = 6
image_size = (64,78,1)
inputs = Input(shape=image_size, name='encoder_input')
x = inputs
x = Conv2D(32, 3, strides=2, activation='relu', padding='same')(x)
x = Conv2D(64, 3, strides=2, activation='relu', padding='same')(x)
# shape info needed to build decoder model
shape = K.int_shape(x)
# generate latent vector Q(z|X)
x = Flatten()(x)
x = Dense(inner_dim, activation='relu')(x)
z_mean = Dense(latent_dim, name='z_mean')(x)
z_log_var = Dense(latent_dim, name='z_log_var')(x)
z = Lambda(sampling, output_shape=(latent_dim,), name='z')([z_mean, z_log_var])
# instantiate encoder model
encoder = Model(inputs, [z_mean, z_log_var, z], name='encoder')
# build decoder model
latent_inputs = Input(shape=(latent_dim,), name='z_sampling')
x = Dense(inner_dim, activation='relu')(latent_inputs)
x = Dense(shape[1] * shape[2] * shape[3], activation='relu')(x)
x = Reshape((shape[1], shape[2], shape[3]))(x)
x = Conv2DTranspose(64, 3, strides=2, activation='relu', padding='same')(x)
x = Conv2DTranspose(32, 3, strides=2, activation='relu', padding='same')(x)
outputs = Conv2DTranspose(filters=1, kernel_size=3, activation='sigmoid', padding='same', name='decoder_output')(x)
# instantiate decoder model
decoder = Model(latent_inputs, outputs, name='decoder')
# instantiate VAE model
outputs = decoder(encoder(inputs)[2])
vae = Model(inputs, outputs, name='vae')
def vae_loss(x, x_decoded_mean):
    reconstruction_loss = mse(K.flatten(x), K.flatten(x_decoded_mean))
    reconstruction_loss *= image_size[0] * image_size[1]
    kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
    kl_loss = K.sum(kl_loss, axis=-1)
    kl_loss *= -0.5
    return K.mean(reconstruction_loss + kl_loss)
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.000)
vae.compile(loss=vae_loss, optimizer=optimizer)
vae.fit(train_X, train_X,
        epochs=500,
        batch_size=128,
        verbose=1,
        shuffle=True,
        validation_data=(valid_X, valid_X))
Yes, there is a mismatch between the decoder output size and the y fed to the fit() method. With the input size changed to 64 x 78 x 1, the decoder output size becomes 64 x 80 x 1, while the y fed to fit() is still 64 x 78 x 1 (the shape of train_X, ignoring the batch dimension): with padding='same', each stride-2 Conv2D rounds the width up (78 -> 39 -> 20), while each stride-2 Conv2DTranspose exactly doubles it (20 -> 40 -> 80). So when the decoder loss is calculated, y_true is 64 x 78 x 1 while y_pred (the decoder output) is 64 x 80 x 1, causing the error.
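You can check this directly against the models defined in the question, using the shape variable it already records:
print(shape)                 # (None, 16, 20, 64): encoder feature map (64 -> 32 -> 16, 78 -> 39 -> 20)
print(decoder.output_shape)  # (None, 64, 80, 1): the width comes back as 80, not 78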
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [655360] vs. [638976]
655360 / (64 * 80) = 128 (the batch size)
638976 / 128 = 4992 = 64 * 78
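For completeness, here is the 'same'-padding shape rule worked through for the width (a minimal sketch; the helper names are mine, not Keras API):
import math
def conv_out(size, stride=2):
    # Conv2D with padding='same': output = ceil(input / stride)
    return math.ceil(size / stride)
def deconv_out(size, stride=2):
    # Conv2DTranspose with padding='same': output = input * stride
    return size * stride
w = deconv_out(deconv_out(conv_out(conv_out(78))))  # 78 -> 39 -> 20 -> 40 -> 80
print(w)  # 80, while the height 64 maps back to exactly 64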
One way to fix the problem is to feed the input as 64 x 80 x 1 (i.e. pad the images from 64 x 78 to 64 x 80), if that is acceptable.
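A minimal sketch of that fix, assuming zero-padding the width by one pixel on each side is acceptable for your data (the _padded names are mine, for illustration):
import numpy as np
# pad width 78 -> 80; axes are (batch, height, width, channels)
train_X_padded = np.pad(train_X, ((0, 0), (0, 0), (1, 1), (0, 0)), mode='constant')
valid_X_padded = np.pad(valid_X, ((0, 0), (0, 0), (1, 1), (0, 0)), mode='constant')
# rebuild encoder/decoder/vae with image_size = (64, 80, 1) before fitting
vae.fit(train_X_padded, train_X_padded,
        epochs=500,
        batch_size=128,
        verbose=1,
        shuffle=True,
        validation_data=(valid_X_padded, valid_X_padded))
With this shape, every stride-2 downsampling divides evenly (80 -> 40 -> 20), so the decoder reconstructs exactly the input size.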