I want to do transfer learning with simple MLP models. First I train a feed-forward network with one hidden layer on a large dataset:
from keras.models import Sequential
from keras.layers import Dense

net = Sequential()
net.add(Dense(500, input_dim=2048, kernel_initializer='normal', activation='relu'))
net.add(Dense(1, kernel_initializer='normal'))
net.compile(loss='mean_absolute_error', optimizer='adam')
net.fit(x_transf, y_transf, epochs=1000, batch_size=8, verbose=0)
Then I want to reuse that single hidden layer as the first layer of a new network and add a second hidden layer on top of it. The re-used layer should not be trainable.
idx = 1  # index of desired layer
input_shape = net.layers[idx].get_input_shape_at(0)  # get the input shape of desired layer
input_layer = net.layers[idx]
input_layer.trainable = False

transf_model = Sequential()
transf_model.add(input_layer)
transf_model.add(Dense(input_shape[1], activation='relu'))
transf_model.compile(loss='mean_absolute_error', optimizer='adam')
transf_model.fit(x, y, epochs=10, batch_size=8, verbose=0)
EDIT: The above code returns:
ValueError: Error when checking target: expected dense_9 to have shape (None, 500) but got array with shape (436, 1)
What's the trick to make this work?
The error happens because with idx = 1 you grab the one-unit output layer, not the 500-unit hidden layer, so your new model ends with a 500-unit Dense layer while the targets have shape (436, 1). I would simply use the Functional API to build such a model:
from keras.models import Model
from keras.layers import Input, Dense

shared_layer = net.layers[0]  # you want the first (hidden) layer, so index = 0
shared_layer.trainable = False  # freeze it before compiling the new model

inp = Input(shape=(2048,))  # the shape of one input sample
x = shared_layer(inp)
x = Dense(800, activation='relu')(x)  # the new, trainable hidden layer
out = Dense(1)(x)  # one-unit output layer to match the targets

model = Model(inp, out)
# the rest is the same...
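For completeness, a minimal sketch of that "rest", assuming the same loss and optimizer as your original model and your (hypothetical) fine-tuning arrays x and y:

model.compile(loss='mean_absolute_error', optimizer='adam')
model.summary()  # the frozen layer's weights are listed under "Non-trainable params"
model.fit(x, y, epochs=10, batch_size=8, verbose=0)

Note that trainable must be set before compile is called; if you flip it afterwards, you need to compile the model again for the change to take effect.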