Tags: autoencoder, tf.keras

UnknownError in Making an AutoEncoder using Keras


from tensorflow.keras import metrics
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Input, Dense, Flatten, Reshape
import numpy as np

↑ importing packages

from tensorflow.keras.datasets import mnist  # tf.keras, consistent with the imports above

(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape(60000,28,28,-1)
x_test = x_test.reshape(10000,28,28,-1)

↑ loading the MNIST data.
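
A quick sanity check (printing shapes) should show the following at this point:

print(x_train.shape, x_train.dtype, x_train.max())  # (60000, 28, 28, 1) float32 1.0
print(x_test.shape)                                  # (10000, 28, 28, 1)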

# note: the data was already scaled to [0, 1] and reshaped above, so the
# original second round of `/ 255.` (and the `x_test = x_train` overwrite)
# were bugs; re-asserting the final shape is harmless and kept for clarity
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1))
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1))

input_img = Input(shape=(28, 28, 1))  

↑ preprocessing the data and making the input layer.

# encoder
x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Flatten()(x)
x = Dense(64, activation='relu')(x)
x = Dense(10, activation='relu')(x)
encoded = Dense(1, activation='sigmoid')(x)  # sigmoid: softmax over a single unit is always 1.0

encoder = Model(input_img, encoded, name = "encoder")

↑ the encoder part. I'm trying to compress each MNIST image down to a single value.
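
Worth flagging, independent of the error below: softmax across a single unit is a constant 1.0, so a 1-unit softmax bottleneck can never carry any information (which is why the bottleneck above uses sigmoid). A minimal demonstration:

import tensorflow as tf

# softmax normalizes across the last axis; with only one unit there is
# nothing to compete against, so every output is exactly 1.0
logits = tf.constant([[-3.0], [0.0], [5.0]])   # three samples, one unit each
print(tf.nn.softmax(logits, axis=-1).numpy())  # [[1.] [1.] [1.]]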

# decoder
decoder_input = Input(shape=(1,))
x = Dense(64, activation='relu')(decoder_input)
x = Dense(64, activation='relu')(x)
x = Dense(98, activation='relu')(x)
x = Dense(196, activation='relu')(x)
x = Dense(392, activation='relu')(x)
# sigmoid (not relu) on the last layer, to match the [0, 1] pixel range
# and the binary_crossentropy loss used below
x = Dense(784, activation='sigmoid')(x)
decoded = Reshape([28, 28, 1])(x)

decoder = Model(decoder_input, decoded, name='decoder')

↑ and the decoder part, reconstructing an MNIST image from that single value.
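
As a quick standalone check of the decoder (a sketch reusing the model defined above), feeding it one scalar code should give back an image-shaped array:

code = np.array([[0.5]], dtype='float32')  # one sample, one latent value
print(decoder.predict(code).shape)         # (1, 28, 28, 1)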

auto_input = Input(shape=(28,28,1))
encoded = encoder(auto_input)
decoded = decoder(encoded)

auto_encoder = Model(auto_input, decoded)
auto_encoder.compile(optimizer='adam', loss='binary_crossentropy')

↑ connecting the encoder & decoder.
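
Before training, a cheap wiring check (a sketch, again reusing the models above) is to push a dummy batch through and confirm the output shape matches the input:

dummy = np.zeros((2, 28, 28, 1), dtype='float32')
print(encoder.predict(dummy).shape)       # (2, 1), the one-value code
print(auto_encoder.predict(dummy).shape)  # (2, 28, 28, 1), the reconstruction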

auto_encoder.fit(
    x_train, 
    x_train,
    epochs=64,
    batch_size=128,
    shuffle=True,
    validation_data=(x_test, x_test)              
) 

↑ and trying to train my AutoEncoder, but it fails.

The error message is below.

UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.

I searched Google many times, but I still can't find a clue. The data and the outputs have the right shapes, yet the error still appears.

What is the cause of the problem?


Solution

  • RTX 2070 GPUs require memory growth to be enabled in recent versions of CUDA and cuDNN.

    Add these lines to the top of the file you run:

    import tensorflow as tf
    physical_devices = tf.config.experimental.list_physical_devices('GPU')
    tf.config.experimental.set_memory_growth(physical_devices[0], True)
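
    If the machine may have several GPUs (or none), a slightly more defensive variant of the same fix (a sketch, assuming the standard TF 2.x API) enables growth on every visible device:

    import tensorflow as tf

    # enable memory growth on every visible GPU; this must run before the
    # first tensor is placed on the device, so keep it at the top of the script
    for gpu in tf.config.experimental.list_physical_devices('GPU'):
        try:
            tf.config.experimental.set_memory_growth(gpu, True)
        except RuntimeError as e:
            print(e)  # raised if the GPU was already initialized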