python, deep-learning, concatenation

Python CNN program: the list of Numpy arrays that you are passing to your model is not the size the model expected


I have the following code, where I am trying to combine a set of 23 3D images of size (96, 96, 96) with their corresponding test values (fed in as input_tmtA), e.g. 50. In other words, one image should have a test value of 50 while another has a test value of 80. I can run the images through the CNN without the second input, but when I try to concatenate the second input, the model apparently cannot find an array for it.

I have tried changing the input values and changing Model.inputs. I am simply confused about what the issue could be and could not think of anything else to try; I am unsure how to feed my second array into the CNN along with the first. The error I received was Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays, where the single array is from the images. The error points to the model.fit line. Thank you.

import numpy as np
from keras.layers import Input, Conv3D, MaxPooling3D, Dropout, Flatten, Dense, concatenate
from keras.models import Model
from keras import optimizers

tmtA = np.array([50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80])
batch_size = 3

input_image = Input((x_train.shape[1]*x_train.shape[2]*x_train.shape[3], 1))

# image input and 3D convolution blocks
inputs = Input((x_train.shape[1], x_train.shape[2], x_train.shape[3], 1))
conv1 = Conv3D(32, [3, 3, 3], padding='same', activation='relu')(inputs)
conv1 = Conv3D(32, [3, 3, 3], padding='same', activation='relu')(conv1)
pool1 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv1)
drop1 = Dropout(0.5)(pool1)

conv2 = Conv3D(64, [3, 3, 3], padding='same', activation='relu')(drop1)
conv2 = Conv3D(64, [3, 3, 3], padding='same', activation='relu')(conv2)
pool2 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv2)
drop2 = Dropout(0.5)(pool2)

conv3 = Conv3D(128, [3, 3, 3], padding='same', activation='relu')(drop2)
conv3 = Conv3D(128, [3, 3, 3], padding='same', activation='relu')(conv3)
pool3 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv3)
drop3 = Dropout(0.5)(pool3)

conv4 = Conv3D(64, [3, 3, 3], padding='same', activation='relu')(drop3)
conv4 = Conv3D(64, [3, 3, 3], padding='same', activation='relu')(conv4)
pool4 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv4)
drop4 = Dropout(0.5)(pool4)

conv5 = Conv3D(32, [3, 3, 3], padding='same', activation='relu')(drop4)
conv5 = Conv3D(32, [3, 3, 3], padding='same', activation='relu')(conv5)
pool5 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv5)
drop5 = Dropout(0.5)(pool5)

flat1 = Flatten()(drop5)
dense1 = Dense(128, activation='relu')(flat1)
dense2 = Dense(64, activation='relu')(dense1)
dense3 = Dense(32, activation='relu')(dense2)
drop6 = Dropout(0.5)(dense3)
dense4 = Dense(num_classes, activation='softmax')(drop6)

# second input for the tmtA test values and the attempted concatenation
input_tmtA = Input((len(tmtA), 1))
dense_tmtA1 = Dense(1, activation='softmax')(input_tmtA)
combine1 = concatenate([input_image, input_tmtA], axis=1)

model = Model(inputs=[input_image, input_tmtA], outputs=[combine1])

opt = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.02, amsgrad=False)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=15, shuffle=True)
score = model.evaluate(x_test, y_test, batch_size=batch_size)
print(score)
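
The message itself describes the mismatch: the Model is built with two Input layers (input_image and input_tmtA), but model.fit only receives x_train, so Keras sees one array where it expects two. Below is a minimal, self-contained sketch of the calling convention for a two-input model; all names and shapes in it are illustrative toy values, not taken from the code above.

import numpy as np
from keras.layers import Input, Dense, concatenate
from keras.models import Model

# Toy two-input model: an 8-feature input and a single scalar input per sample.
input_a = Input((8,))
input_b = Input((1,))
merged = concatenate([input_a, input_b])
out = Dense(2, activation='softmax')(merged)
toy_model = Model(inputs=[input_a, input_b], outputs=out)
toy_model.compile(loss='categorical_crossentropy', optimizer='adam')

# fit() must receive one array per declared Input, with matching sample counts.
x_a = np.random.rand(23, 8)
x_b = np.random.rand(23, 1)
y = np.eye(2)[np.random.randint(0, 2, 23)]
toy_model.fit([x_a, x_b], y, batch_size=3, epochs=1)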

Solution

  • I solved it using the following format:

    # image model, built from the convolutional stack above (ends at dense4)
    model_image = Model(inputs=inputs, outputs=dense4)
    
    
    # tmtA model
    input_tmtA = Input((1, 1))
    flat_tmtA1 = Flatten()(input_tmtA)
    dense_tmtA1 = Dense(num_subjects, activation='relu')(flat_tmtA1)
    dense_tmtA2 = Dense(num_classes, activation='softmax')(dense_tmtA1)
    
    # note: model_tmtA outputs dense_tmtA1, so dense_tmtA2 above is not used in the final graph
    model_tmtA = Model(inputs=input_tmtA, outputs=dense_tmtA1)
    
    combine1 = concatenate([model_image.output, model_tmtA.output])
    dense_combine1 = Dense(num_subjects, activation='relu')(combine1)
    dense_combine2 = Dense(num_classes, activation='softmax')(dense_combine1)
    
    
    # final model
    model = Model(inputs=[model_image.input, model_tmtA.input], outputs=[dense_combine2])
    
    opt = optimizers.Adam(lr=1e-6)
    model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
    model.fit([x_train, x_train_tmtA], y_train, batch_size=batch_size, epochs=15, shuffle=True)
    score = model.evaluate([x_test, x_test_tmtA], y_test, batch_size=batch_size)
    print(score)
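
  • The arrays x_train_tmtA and x_test_tmtA are not shown above. A plausible sketch of how they could be built, assuming the 23 tmtA values are reshaped to match Input((1, 1)) and split with the same head/tail indices as the images (the split index is an assumption):

    # Hypothetical preparation of the tmtA inputs (not shown in the original post).
    # Each sample carries one scalar, reshaped to (n_samples, 1, 1) to match Input((1, 1)).
    tmtA_input = tmtA.reshape(-1, 1, 1).astype('float32')

    # Assumes x_train / x_test came from a simple head/tail split of the 23 samples;
    # reuse whatever split actually produced them.
    split_at = len(x_train)
    x_train_tmtA = tmtA_input[:split_at]
    x_test_tmtA = tmtA_input[split_at:]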