python, keras, deep-learning, transfer-learning

VGG16 Transfer Learning varying output


I observed strange behavior when using VGG16 for transfer learning.

from keras.applications.vgg16 import VGG16
from keras.layers import Dense
from keras.models import Model

# Load VGG16 with its fully connected head, then drop the last two layers
model = VGG16(weights='imagenet', include_top=True)
model.layers.pop()
model.layers.pop()

# Freeze the remaining VGG16 layers
for layer in model.layers:
    layer.trainable = False

# Attach a new 2-class softmax classifier on top
new_layer = Dense(2, activation='softmax')
inp = model.input
out = new_layer(model.layers[-1].output)

model = Model(inp, out)

However, when model.predict(image) is used, the classification varies: on one run the image is classified as Class 1, and on the next run the same image is classified as Class 2.


Solution

  • It is because you didn't set a seed. The new Dense layer is initialized with random weights, so each time the script runs the untrained classifier head starts from different values and produces different predictions for the same image. Seed the initializer to make it reproducible. Try this:

    import numpy as np
    import keras
    from keras.applications.vgg16 import VGG16
    from keras.layers import Dense
    from keras.models import Model

    seed_value = 0
    np.random.seed(seed_value)

    model = VGG16(weights='imagenet', include_top=True)
    model.layers.pop()
    model.layers.pop()

    for layer in model.layers:
        layer.trainable = False

    # Seed the kernel initializer so the new head gets the same
    # initial weights on every run
    new_layer = Dense(2, activation='softmax',
                      kernel_initializer=keras.initializers.glorot_normal(seed=seed_value),
                      bias_initializer=keras.initializers.Zeros())
    inp = model.input
    out = new_layer(model.layers[-1].output)

    model = Model(inp, out)
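
    To check the fix, you can preprocess one image and predict it twice; a minimal sketch (the image path and preprocessing steps below are illustrative assumptions, not from the original post):

    from keras.preprocessing import image
    from keras.applications.vgg16 import preprocess_input

    # Hypothetical image path, used only for illustration
    img = image.load_img('example.jpg', target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)   # add batch dimension -> (1, 224, 224, 3)
    x = preprocess_input(x)         # standard VGG16 preprocessing

    preds = model.predict(x)        # shape (1, 2); identical across runs now
    print(preds)

    Note that seeding only fixes the initial weights; the new Dense layer is still untrained, so the predictions won't be meaningful until you fit the model on your two-class data.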