I have to train a GAN with a Generator and a Discriminator. My Generator network is as below:
def Generator(image_shape=(512,512,3)):
    inputs = Input(image_shape)
    # 5 convolution layers
    # 5 deconvolution layers along with concatenation
    # output shape is (512,512,3)
    model = Model(inputs=inputs, outputs=outputs, name='Generator')
    return model, output
My Discriminator network is as below. The first step in the Discriminator is to concatenate the discriminator's input with the output of the Generator.
def Discriminator(Generator_output, image_shape=(512,512,3)):
    inputs = Input(image_shape)
    concatenated_input = concatenate([Generator_output, inputs], axis=-1)
    # Now start applying convolution layers on concatenated_input
    # Deconvolution layers
    return Model(inputs=inputs, outputs=outputs, name='Discriminator')
Instantiating the architectures:
G, Generator_output = Generator(image_shape=(512,512,3))
G.summary()
D = Discriminator(Generator_output, image_shape=(512,512,3))
D.summary()
My problem is that when I pass concatenated_input to the convolution layers, I get the following error:
Graph disconnected: cannot obtain value for tensor Tensor("input_1:0", shape=(?, 512, 512, 3), dtype=float32) at layer "input_1". The following previous layers were accessed without issue: []
If I remove the concatenation layer it works perfectly, so why does it fail after the concatenation layer, even though inputs and Generator_output have the same shape, i.e. (512,512,3)?
The key insight here is that Models in Keras are just like layers, but self-contained. So to connect the output of one model to another, you need to give the second model an Input of matching shape rather than passing the tensor directly:
def Discriminator(gen_output_shape, image_shape=(512,512,3)):
    inputs = Input(image_shape)
    gen_output = Input(gen_output_shape)
    concatenated_input = concatenate([gen_output, inputs], axis=-1)
    # Now start applying convolution layers on concatenated_input
    # Deconvolution layers
    return Model(inputs=[inputs, gen_output], outputs=outputs, name='Discriminator')
And then you can use it like a layer:
G, _ = Generator(image_shape=(512,512,3))  # only the model is needed here, not the output tensor
D = Discriminator((512,512,3), image_shape=(512,512,3))
gen_image_input = Input((512,512,3))         # image fed into the generator
some_other_image_input = Input((512,512,3))  # image fed straight into the discriminator
generated_image = G(gen_image_input)  # a model is used like a layer
discriminator_output = D([some_other_image_input, generated_image])  # so the output of G is connected to the gen_output input of D
D.summary()
gan = Model(inputs=[gen_image_input, some_other_image_input], outputs=[discriminator_output])
# you can still use G and D like separate models, save them, train them etc.
To train them together you can create another Model that takes all the required inputs and calls the generator / discriminator, as the gan model above does. Think of it as a lock-and-key idea: every model has certain inputs, and you can use models like layers inside another Model as long as you provide the correct inputs.
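As a rough sketch, the training setup could then look like the following. This assumes D ends in a single real/fake probability and is trained with binary cross-entropy; the optimizer settings, label shapes and batch variables such as gen_images are only illustrative, not part of your original code.

import numpy as np
from keras.optimizers import Adam

# compile D on its own so it can be trained directly on real / generated pairs
D.compile(optimizer=Adam(lr=2e-4), loss='binary_crossentropy')

# freeze D inside the combined model so that gan.train_on_batch only updates G's weights
D.trainable = False
gan.compile(optimizer=Adam(lr=2e-4), loss='binary_crossentropy')

batch_size = 4
real_labels = np.ones((batch_size, 1))   # assumes D outputs one probability per sample
fake_labels = np.zeros((batch_size, 1))

# one simplified training step, where gen_images / other_images / real_images
# are batches coming from your own data pipeline:
# D.train_on_batch([other_images, real_images], real_labels)
# D.train_on_batch([other_images, G.predict(gen_images)], fake_labels)
# gan.train_on_batch([gen_images, other_images], real_labels)

The point is the same lock-and-key idea: gan wires G's output into D's second input, and because D is frozen inside gan, training through the combined model only updates G.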