Tags: python, tensorflow, keras, deep-learning, vgg-net

VGG perceptual loss in Keras


I'm wondering if it's possible to use a custom model inside a loss function in Keras. For example:

from keras.layers import Input, Dense, Flatten
from keras.models import Model
from keras import backend as K

def model_loss(y_true, y_pred):
    # build a small model inside the loss (a stand-in for the VGG net)
    inp = Input(shape=(128, 128, 1))
    x = Dense(2)(inp)
    x = Flatten()(x)

    model = Model(inputs=[inp], outputs=[x])

    # run both the prediction and the target through it
    a = model(y_pred)
    b = model(y_true)

    # calculate MSE between the two outputs
    mse = K.mean(K.square(a - b))
    return mse

This is a simplified example. I'll actually be using a VGG net in the loss, so I'm just trying to understand the mechanics of Keras.


Solution

  • The usual way of doing that is to append your VGG to the end of your model, making sure all of its layers have trainable=False before compiling.

    Then you recalculate your Y_train by running it through the loss model.

    Suppose you have these models:

    mainModel - the one you want to apply a loss function to
    lossModel - the one that is part of the loss function you want (e.g. a truncated VGG, sketched below)
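
    For the VGG case in the question, lossModel could be a pretrained network cut off at an intermediate feature map. A minimal sketch, assuming VGG16 and its block3_conv3 layer (both arbitrary choices here, not prescribed by the answer), and keeping in mind that VGG expects 3-channel input:

    from keras.applications.vgg16 import VGG16
    from keras.models import Model

    # VGG16 expects 3-channel images; (128, 128, 3) is just an assumed size
    vgg = VGG16(include_top=False, weights='imagenet', input_shape=(128, 128, 3))

    # cut the network at an intermediate layer to get "perceptual" features
    lossModel = Model(vgg.input, vgg.get_layer('block3_conv3').output)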
    

    Create a new model appending one to another:

    from keras.models import Model
    
    lossOut = lossModel(mainModel.output)  # pass the output of one model into the other

    fullModel = Model(mainModel.input, lossOut)  # create a model for training that follows this path in the graph
    

    This model will have the exact same weights as mainModel and lossModel, and training it will update those underlying models as well.

    Make sure lossModel is not trainable before compiling:

    lossModel.trainable = False
    for l in lossModel.layers:
        l.trainable = False
    
    fullModel.compile(loss='mse', optimizer=....)
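
    If you want to confirm that only mainModel's weights will be updated, you can inspect the combined model before fitting; the "Non-trainable params" count at the bottom of the summary should account for all of lossModel's weights:

    fullModel.summary()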
    

    Now adjust your data for training:

    fullYTrain = lossModel.predict(originalYTrain)
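
    Note that fullYTrain now lives in lossModel's feature space, so its shape matches lossModel's output rather than the original images. A quick sanity check (the exact shape depends on where you truncated; the value below corresponds to the hypothetical block3_conv3 choice above with 128x128 inputs):

    print(fullYTrain.shape)  # e.g. (numSamples, 32, 32, 256)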
    

    And finally do the training:

    fullModel.fit(xTrain, fullYTrain, ....)
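
    Because fullModel shares its weights with mainModel, once training is done you can use mainModel directly for inference, without the lossModel head. A minimal sketch, assuming xTest is your own test data (it is not defined in the original answer):

    predictions = mainModel.predict(xTest)  # mainModel already holds the trained weights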