I would like to combine two pretrained models (DenseNet169 and InceptionV3), though it could be any two. I followed the steps from the link below, but it did not work. I tried both concatenate and Concatenate and still get an error, so I might have made a mistake somewhere. This is my first Stack Overflow question and help would be greatly appreciated. https://datascience.stackexchange.com/questions/39407/how-to-make-two-parallel-convolutional-neural-networks-in-keras

First case: I tried with no pooling:
from keras.applications import DenseNet169, InceptionV3
from keras.layers import concatenate, Dense

model1 = DenseNet169(weights='imagenet', include_top=False, input_shape=(300,300,3))
out1 = model1.output
model2 = InceptionV3(weights='imagenet', include_top=False, input_shape=(300,300,3))
out2 = model2.output
x = concatenate([out1, out2])  # merge the outputs of the two models
out = Dense(10, activation='softmax')(x)  # final layer of the network
I got this error:
ValueError: A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 9, 9, 1664), (None, 8, 8, 2048)]
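To see why this fails: Concatenate joins tensors along one axis and requires every other axis to match, which NumPy's concatenate enforces in exactly the same way. A minimal sketch using the two shapes from the error message (dummy zero arrays, not real model outputs):

```python
import numpy as np

# Dummy feature maps with the shapes reported in the error message
a = np.zeros((1, 9, 9, 1664), dtype="float32")   # DenseNet169-style output
b = np.zeros((1, 8, 8, 2048), dtype="float32")   # InceptionV3-style output

try:
    np.concatenate([a, b], axis=-1)  # same matching rule as Keras Concatenate
    failed = False
except ValueError:
    failed = True  # the 9x9 vs 8x8 spatial grids do not match
```

Only the channel axis (the concat axis) is allowed to differ; the 9x9 and 8x8 spatial grids are what trigger the error.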
Second case: I tried with average pooling. The concatenation worked, but I got an error during training:
from keras.models import Model
from keras.optimizers import Adam

model1 = DenseNet169(weights='imagenet', include_top=False, pooling='avg', input_shape=(300,300,3))
out1 = model1.output
model2 = InceptionV3(weights='imagenet', include_top=False, pooling='avg', input_shape=(300,300,3))
out2 = model2.output
x = concatenate([out1, out2])  # merge the outputs of the two models
out = Dense(10, activation='softmax')(x)  # final layer of the network
model = Model(inputs=[model1.input, model2.input], outputs=[out])
model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit_generator(generator=data_generator_train,
                              validation_data=data_generator_val,
                              epochs=20,
                              verbose=1)
Error in second case: ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[[[0.17074525, 0.10469133, 0.08226486], [0.19852941, 0.13124999, 0.11642157], [0.36528033, 0.3213197 , 0.3085095 ], ..., [0.19082414, 0.17801011, 0.15840226...
Second case: Since your model expects two inputs, your data_generator_train and data_generator_val should return/yield a list of two inputs for the corresponding models, along with the output. You can achieve that by updating the return value of the __data_generation method:
def __data_generation(...):
    ...
    # consider X as the input image and y as the label of your model
    return [X, X], keras.utils.to_categorical(y, num_classes=self.n_classes)
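If you would rather not edit the generator class, you can instead wrap the existing single-input generator so that each batch's input is duplicated. A minimal sketch (duplicate_inputs is a hypothetical helper, and toy_generator stands in for your real data_generator_train):

```python
import numpy as np

def duplicate_inputs(gen):
    """Yield ([X, X], y) for every (X, y) a single-input generator yields."""
    for X, y in gen:
        yield [X, X], y

def toy_generator(batches=2):
    # Stand-in for data_generator_train: small random batches of images
    # (4 samples, 300x300 RGB) with one-hot labels over 10 classes.
    for _ in range(batches):
        X = np.random.rand(4, 300, 300, 3).astype("float32")
        y = np.eye(10, dtype="float32")[np.random.randint(0, 10, size=4)]
        yield X, y

batch_inputs, batch_labels = next(duplicate_inputs(toy_generator()))
```

You could then pass duplicate_inputs(data_generator_train) to fit_generator instead of modifying the class itself.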
First case: Since the spatial size of model2's output (8x8) is smaller than that of model1's output (9x9), you can apply zero padding to model2's output before concatenation:
from keras.layers import ZeroPadding2D

out1 = model1.output
out2 = model2.output
out2 = ZeroPadding2D(((0, 1), (0, 1)))(out2)  # pad 8x8 -> 9x9 (one row at the bottom, one column on the right)
x = concatenate([out1, out2])
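The padding tuple ((0, 1), (0, 1)) means 0 rows on top, 1 row on the bottom, 0 columns on the left, and 1 column on the right. A NumPy sketch of the equivalent operation, just to verify the resulting shape (zero_pad_2d is an illustrative helper, not a Keras function):

```python
import numpy as np

def zero_pad_2d(x, pad):
    # pad = ((top, bottom), (left, right)), applied to the H and W axes
    # of an NHWC tensor; batch and channel axes are left untouched.
    (top, bottom), (left, right) = pad
    return np.pad(x, ((0, 0), (top, bottom), (left, right), (0, 0)))

out2 = np.ones((1, 8, 8, 2048), dtype="float32")  # InceptionV3-style feature map
padded = zero_pad_2d(out2, ((0, 1), (0, 1)))      # now 9x9, matching model1
```

After padding, both feature maps share the same 9x9 spatial grid, so Concatenate along the channel axis succeeds.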
For the first case, too, you need to modify your data generator as in the second case.