
Minimizing and maximizing the loss


I would like to train an autoencoder in such a way that the reconstruction error is low on some observations and high on others.

from keras.models import Sequential
from keras.layers import Dense
import keras.backend as K

def l1Loss(y_true, y_pred):
    return K.mean(K.abs(y_true - y_pred))

model = Sequential()
model.add(Dense(5, input_dim=10, activation='relu'))
model.add(Dense(10, activation='sigmoid'))
model.compile(optimizer='adam', loss=l1Loss)

for i in range(1000):
    model.train_on_batch(x_good, x_good) # minimize on low
    model.train_on_batch(x_bad, x_bad, ???) # need to maximize this part, so that the reconstruction error on x_bad is high

I saw a suggestion to replace ??? with sample_weight=-np.ones(batch_size), but I have no idea whether this fits my goal.


Solution

  • Yes. If you set the sample weights to negative numbers, the loss for that batch is negated, so minimizing the weighted loss actually maximizes the unweighted reconstruction error on those samples.
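
Below is a minimal sketch of that training loop, assuming x_good and x_bad are NumPy arrays with 10 columns so they match the model defined in the question; the array names and batch counts are illustrative, not prescriptive.

import numpy as np

# Reuses the `model` compiled above with loss=l1Loss.
for i in range(1000):
    # Ordinary step: minimize reconstruction error on the "good" samples.
    model.train_on_batch(x_good, x_good)

    # Negative sample weights flip the sign of the per-sample loss, so this
    # gradient step pushes the reconstruction error on x_bad upward.
    model.train_on_batch(x_bad, x_bad, sample_weight=-np.ones(len(x_bad)))

One caveat worth noting: the L1 reconstruction error is unbounded above, so the maximization step can dominate training or diverge; in practice you may want to weight the two steps differently or cap how far the "bad" error is pushed.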