So I am trying to implement a model based on the code found here:
All is well until I get to this point:
for l1, l2 in zip(full_model.layers[:19], autoencoder.layers[0:19]):
    l1.set_weights(l2.get_weights())
And I get this error, even though the full model has the same weights as the autoencoder:
ValueError: You called `set_weights(weights)` on layer "flatten_10" with a weight list of length 2, but the layer was expecting 0 weights. Provided weights: [array([[[[-3.37540284e-02, 7.36398697e-02, -1.45...
The weights for the autoencoder and the full model are as follows:
autoencoder.get_weights()[0][1]
array([[[-0.07640466, 0.08604569, 0.10095344, 0.08002567,
-0.05795965, 0.12277777, 0.04575707, 0.1368851 ,
0.08104218, 0.04106109, 0.01466343, -0.00301184,
-0.02941842, -0.06449406, -0.11245678, 0.03759771,
-0.04456315, 0.05147151, -0.05671669, 0.03154052,
0.08367646, -0.03407011, 0.03081554, -0.07092344,
-0.0342903 , -0.12681712, -0.11921115, -0.00943625,
0.07913507, -0.11182833, 0.06839333, -0.10381861]],
[[ 0.12951587, -0.10705423, 0.14214374, 0.10236198,
0.04869333, -0.07741497, 0.04825569, 0.140887 ,
-0.04529881, 0.10183885, 0.09898531, 0.0463811 ,
-0.0497799 , -0.03215659, -0.1106519 , 0.0191465 ,
-0.03108089, 0.11891119, 0.13607842, -0.06900101,
0.02550365, -0.07291926, 0.0408677 , -0.13281997,
-0.10269159, 0.12453358, -0.06403439, 0.03591786,
0.09293085, 0.04930058, -0.07233981, -0.11631108]],
[[ 0.09462225, -0.13031363, -0.07633019, 0.07383946,
-0.08967619, 0.03298028, 0.05059863, -0.07996925,
-0.0285711 , -0.02666069, -0.02046945, -0.02898544,
-0.0632349 , 0.01124811, -0.06102825, -0.02444353,
-0.02901937, 0.07315389, 0.04660689, -0.03481405,
0.03801505, -0.02921393, 0.03578328, 0.00787276,
-0.13757674, -0.01068925, -0.10495549, -0.04071948,
-0.01119018, 0.02144167, 0.09804168, -0.05260663]]],
dtype=float32)
full_model.get_weights()[0][1]
array([[[-0.07640466, 0.08604569, 0.10095344, 0.08002567,
-0.05795965, 0.12277777, 0.04575707, 0.1368851 ,
0.08104218, 0.04106109, 0.01466343, -0.00301184,
-0.02941842, -0.06449406, -0.11245678, 0.03759771,
-0.04456315, 0.05147151, -0.05671669, 0.03154052,
0.08367646, -0.03407011, 0.03081554, -0.07092344,
-0.0342903 , -0.12681712, -0.11921115, -0.00943625,
0.07913507, -0.11182833, 0.06839333, -0.10381861]],
[[ 0.12951587, -0.10705423, 0.14214374, 0.10236198,
0.04869333, -0.07741497, 0.04825569, 0.140887 ,
-0.04529881, 0.10183885, 0.09898531, 0.0463811 ,
-0.0497799 , -0.03215659, -0.1106519 , 0.0191465 ,
-0.03108089, 0.11891119, 0.13607842, -0.06900101,
0.02550365, -0.07291926, 0.0408677 , -0.13281997,
-0.10269159, 0.12453358, -0.06403439, 0.03591786,
0.09293085, 0.04930058, -0.07233981, -0.11631108]],
[[ 0.09462225, -0.13031363, -0.07633019, 0.07383946,
-0.08967619, 0.03298028, 0.05059863, -0.07996925,
-0.0285711 , -0.02666069, -0.02046945, -0.02898544,
-0.0632349 , 0.01124811, -0.06102825, -0.02444353,
-0.02901937, 0.07315389, 0.04660689, -0.03481405,
0.03801505, -0.02921393, 0.03578328, 0.00787276,
-0.13757674, -0.01068925, -0.10495549, -0.04071948,
-0.01119018, 0.02144167, 0.09804168, -0.05260663]]],
dtype=float32)
I am tuning it on my own dataset, and the changes I made to the code are:
# creating the encoder function
def encoder(input_img):
    x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
    x = MaxPooling2D((2, 2))(x)
    x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D((2, 2))(x)
    return x
# creating the decoder function
def decoder(x):
    x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
    x = UpSampling2D((2, 2))(x)
    x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
    x = UpSampling2D((2, 2))(x)
    decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
    return decoded
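In this encoder, each 'same'-padded Conv2D preserves the spatial dimensions, and each (2, 2) MaxPooling2D halves them, so the bottleneck ends up a quarter of the input size in each dimension. A minimal sketch of that arithmetic (pure Python; the 28x28 input size is just an illustrative assumption):

```python
def encoder_output_shape(height, width, pools=2):
    # 'same'-padded Conv2D layers preserve height/width;
    # each (2, 2) MaxPooling2D halves them (floor division for odd sizes).
    for _ in range(pools):
        height, width = height // 2, width // 2
    return height, width

# e.g. a hypothetical 28x28 input is reduced to 7x7 at the bottleneck
print(encoder_output_shape(28, 28))  # (7, 7)
```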
and
autoencoder_train = autoencoder.fit(X_train, X_train,
                                    batch_size=256,
                                    epochs=10,
                                    verbose=1,
                                    validation_data=(X_val, X_val))
where, instead of training on the dataset with its labels, I trained with the same training set as both input and target. How can I make the full model accept the weights of the autoencoder? I tried training the autoencoder with the input labels, but then it does not learn to reconstruct the input.
Never mind, I found the problem. I just needed to change the range of layers in this line,

for l1, l2 in zip(full_model.layers[:19], autoencoder.layers[0:19]):
    l1.set_weights(l2.get_weights())

to match how many layers my encoder has. In this case it should be full_model.layers[0:5] instead of full_model.layers[0:19], since my encoder only spans the first 5 layers (the input layer, two Conv2D layers, and two MaxPooling2D layers). With the original range, the loop walked past the encoder and tried to push convolutional weights into layers like Flatten, which hold no weights at all, hence the "expecting 0 weights" error.
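A more robust alternative is to skip weight-less layers (Flatten, MaxPooling2D, etc.) when copying, so the loop never has to be hand-tuned to the layer count. A minimal sketch with stand-in layer objects rather than real Keras layers (the real loop over model.layers would look the same, since Keras layers expose the same get_weights/set_weights interface):

```python
class StubLayer:
    """Stand-in for a Keras layer; holds a list of weight arrays."""
    def __init__(self, weights):
        self._weights = list(weights)

    def get_weights(self):
        return list(self._weights)

    def set_weights(self, weights):
        # Mimic Keras: reject a weight list of the wrong length.
        if len(weights) != len(self._weights):
            raise ValueError("weight list length mismatch")
        self._weights = list(weights)

# Conv-like layers carry kernel + bias; pooling/flatten layers carry nothing.
full_model_layers = [StubLayer([[0.0], [0.0]]), StubLayer([]), StubLayer([[0.0], [0.0]])]
autoencoder_layers = [StubLayer([[1.0], [2.0]]), StubLayer([]), StubLayer([[3.0], [4.0]])]

for l1, l2 in zip(full_model_layers, autoencoder_layers):
    if l2.get_weights():  # skip weight-less layers like Flatten
        l1.set_weights(l2.get_weights())

print(full_model_layers[0].get_weights())  # [[1.0], [2.0]]
```

This assumes the two models' layers line up one-to-one over the copied range, which holds here because the full model reuses the encoder architecture verbatim.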