I am trying to do transfer learning in Keras. I have trained a model for one task and now want to reuse it for a similar task, but the input and output shapes are different. I loaded the trained model with load_model, along these lines (the filename here is just a placeholder):
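from keras.models import load_model

model = load_model('trained_model.h5')  # hypothetical path to the saved model

For reference, the original model was defined as: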
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, BatchNormalization, Dropout, Flatten, Dense, Reshape
from keras import optimizers

model = Sequential()
model.add(Conv2D(32, (5, 5), input_shape=(28, 28, 1), padding='same', activation='relu'))
model.add(Conv2D(32, (5, 5), padding='same', activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(MaxPool2D(padding='same', strides=2))
model.add(Conv2D(128, (5, 5), padding='same', activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(MaxPool2D(padding='same', strides=2))
model.add(Conv2D(64, (4, 4), padding='same', activation='relu'))
model.add(Conv2D(64, (4, 4), padding='same', activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(MaxPool2D(padding='same', strides=2))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(26, activation='softmax'))

rmsdrp = optimizers.RMSprop(lr=0.001, epsilon=1e-08)
model.compile(loss='categorical_crossentropy',
              optimizer=rmsdrp,
              metrics=['accuracy'])
Then, to change the output, I did the following:
model.pop()
model.add(Dense(3 * 168, activation='softmax'))
model.add(Reshape((3, 168)))
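A quick sanity check on the new head (the shape below is what I expect):

print(model.output_shape)  # should now be (None, 3, 168)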
This works. For the input, I did this:
model.layers[0] = Input(shape=(137,236))
But when I print out the model summary, it still shows the previous input shape. What am I doing wrong? How else should I change the input shape? This is the model summary in the end:
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 28, 28, 32) 832
_________________________________________________________________
conv2d_2 (Conv2D) (None, 28, 28, 32) 25632
_________________________________________________________________
batch_normalization_1 (Batch (None, 28, 28, 32) 128
_________________________________________________________________
dropout_1 (Dropout) (None, 28, 28, 32) 0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 14, 14, 32) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 14, 14, 128) 102528
_________________________________________________________________
batch_normalization_2 (Batch (None, 14, 14, 128) 512
_________________________________________________________________
dropout_2 (Dropout) (None, 14, 14, 128) 0
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 7, 7, 128) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 7, 7, 64) 131136
_________________________________________________________________
conv2d_5 (Conv2D) (None, 7, 7, 64) 65600
_________________________________________________________________
batch_normalization_3 (Batch (None, 7, 7, 64) 256
_________________________________________________________________
dropout_3 (Dropout) (None, 7, 7, 64) 0
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 4, 4, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 1024) 0
_________________________________________________________________
dense_1 (Dense) (None, 256) 262400
_________________________________________________________________
dropout_4 (Dropout) (None, 256) 0
_________________________________________________________________
dense_2 (Dense) (None, 504) 129528
_________________________________________________________________
reshape_1 (Reshape) (None, 3, 168) 0
=================================================================
Total params: 595,706
Trainable params: 595,258
Non-trainable params: 448
_________________________________________________________________
It looks like the problem is your use of Input(shape=(137,236)), which belongs to the functional API, not the Sequential API. Input() returns a tensor, not a layer, so assigning it to model.layers[0] never rewires the model, which is why the summary still shows the old input shape. You can change the input by essentially rebuilding the model on a new input tensor. Two details to watch: Conv2D expects a channels dimension, so the shape should be (137, 236, 1), and the Dense(256) after Flatten was built for a 1024-dimensional input, so it and the layers after it must be re-created, because the larger input changes the flattened size:
from keras.models import Model
from keras.layers import Input, Flatten, Dense, Dropout, Reshape

inp = Input(shape=(137, 236, 1))  # note the channels dimension; `inp` avoids shadowing the built-in input()
x = inp
for layer in model.layers[:-5]:  # reuse the convolutional base; stop before Flatten
    x = layer(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)  # fresh head sized for the new flatten length
x = Dropout(0.5)(x)
x = Dense(3 * 168, activation='softmax')(x)
x = Reshape((3, 168))(x)
model = Model(inputs=inp, outputs=x)
model.compile(*args, **kwargs)  # compile with the same settings as before
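Printing the summary at this point is a quick way to confirm the change took:

model.summary()  # the first layer should now show (None, 137, 236, 1)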
To make sure the pretrained weights do not get updated during training, add a for loop that marks the reused layers as non-trainable:
for layer in model.layers[1:-5]:  # freeze the reused conv base; leave the new dense head trainable
    layer.trainable = False
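Note that changes to layer.trainable only take effect when the model is compiled, so re-compile afterwards and check the summary:

model.compile(*args, **kwargs)  # same compile arguments as before
model.summary()  # "Non-trainable params" should now count the frozen base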