Using the Keras functional API, I create a model based on VGG16 with some custom layers on top:
from keras import layers, models
from keras.applications import VGG16

input_layer = layers.Input(shape=(150, 150, 3), name="model_input")
conv_base = VGG16(weights="imagenet", include_top=False, input_tensor=input_layer)
cust_model = conv_base(input_layer)  # wraps VGG16 as a single nested "vgg16" layer
cust_model = layers.Flatten()(cust_model)
cust_model = layers.Dense(256, activation="relu")(cust_model)
cust_model = layers.Dense(1, activation="sigmoid")(cust_model)
final_model = models.Model(inputs=input_layer, outputs=cust_model)
...  # model training etc. (works fine)
final_model.save("models/custom_vgg16.h5")
In another script I want to load that model and build a second model from it that exposes the intermediate activations:
from keras import models
from keras.models import load_model

model_vgg16 = load_model("models/custom_vgg16.h5")
layer_input = model_vgg16.get_layer("model_input").input
layer_outputs = [layer.output for layer in model_vgg16.get_layer("vgg16").layers[1:]]  # [1:] skips the input layer
activation_model = models.Model(inputs=layer_input, outputs=layer_outputs)
But the last line leads to the following error:
ValueError: Graph disconnected: cannot obtain value for tensor Tensor("model_input_1:0", shape=(?, 150, 150, 3), dtype=float32) at layer "model_input". The following previous layers were accessed without issue: []
I have found some related issues here on SO and on other sites, but none of them seems to match the problem I am facing. Do you have any ideas?
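For completeness, the nesting can be confirmed by inspecting the loaded model (a quick diagnostic sketch, assuming the saved file from above):

model_vgg16 = load_model("models/custom_vgg16.h5")
model_vgg16.summary()                     # lists "vgg16" as a single layer of the outer model
model_vgg16.get_layer("vgg16").summary()  # lists the internal VGG16 layers, including its own input layer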
PS: The contents of layer_outputs are:
Tensor("block1_conv1/Relu:0", shape=(?, 150, 150, 64), dtype=float32)
Tensor("block1_conv2/Relu:0", shape=(?, 150, 150, 64), dtype=float32)
Tensor("block1_pool/MaxPool:0", shape=(?, 75, 75, 64), dtype=float32)
Tensor("block2_conv1/Relu:0", shape=(?, 75, 75, 128), dtype=float32)
Tensor("block2_conv2/Relu:0", shape=(?, 75, 75, 128), dtype=float32)
Tensor("block2_pool/MaxPool:0", shape=(?, 37, 37, 128), dtype=float32)
Tensor("block3_conv1/Relu:0", shape=(?, 37, 37, 256), dtype=float32)
Tensor("block3_conv2/Relu:0", shape=(?, 37, 37, 256), dtype=float32)
Tensor("block3_conv3/Relu:0", shape=(?, 37, 37, 256), dtype=float32)
Tensor("block3_pool/MaxPool:0", shape=(?, 18, 18, 256), dtype=float32)
Tensor("block4_conv1/Relu:0", shape=(?, 18, 18, 512), dtype=float32)
Tensor("block4_conv2/Relu:0", shape=(?, 18, 18, 512), dtype=float32)
Tensor("block4_conv3/Relu:0", shape=(?, 18, 18, 512), dtype=float32)
Tensor("block4_pool/MaxPool:0", shape=(?, 9, 9, 512), dtype=float32)
Tensor("block5_conv1/Relu:0", shape=(?, 9, 9, 512), dtype=float32)
Tensor("block5_conv2/Relu:0", shape=(?, 9, 9, 512), dtype=float32)
Tensor("block5_conv3/Relu:0", shape=(?, 9, 9, 512), dtype=float32)
Tensor("block5_pool/MaxPool:0", shape=(?, 4, 4, 512), dtype=float32)
I found the solution: I had to use the input of the nested vgg16 layer/model directly. The reloaded model contains VGG16 as a nested model with its own input layer, so the outer model_input tensor is not part of the graph that produces the vgg16 layers' outputs, which is exactly what the "Graph disconnected" error complains about. So, for the record:
layer_input = model_vgg16.get_layer("vgg16").get_layer("model_input").input