
How to mask the inputs in an LSTM autoencoder having a RepeatVector() layer?


I have been trying to obtain a vector representation of a sequence of vectors using an LSTM autoencoder, so that I can classify the sequence with an SVM or other such supervised algorithms. The amount of data prevents me from using a fully connected dense layer for classification.

My shortest input sequence is 7 timesteps and the longest is 356 timesteps. Accordingly, I have padded the shorter sequences with zeros to obtain a final x_train of shape (1326, 356, 8), where 1326 is the number of training samples and 8 is the dimension of one timestep. I am trying to encode these sequences into a single vector using the LSTM autoencoder below.
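For reference, the zero padding itself can be done with Keras' built-in utility. A minimal sketch, assuming the raw data is a list called sequences of arrays with shapes (t_i, 8) (a name introduced here purely for illustration):

from tensorflow.keras.preprocessing.sequence import pad_sequences

# `sequences` is a hypothetical list of 1326 arrays of shape (t_i, 8), 7 <= t_i <= 356
x_train = pad_sequences(sequences, maxlen=356, dtype='float32',
                        padding='post', value=0.0)
# x_train.shape == (1326, 356, 8)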

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Masking, LSTM, RepeatVector

model = Sequential()
model.add(Masking(mask_value=0.0, input_shape=(max_len, 8)))
model.add(LSTM(100, activation='relu'))
model.add(RepeatVector(max_len))
model.add(LSTM(8, activation='relu', return_sequences=True))
model.compile(optimizer='adam', loss='mse')
# chk is a checkpoint callback defined elsewhere
model.fit(x_train, x_train, batch_size=32, callbacks=[chk], epochs=1000,
          validation_split=0.05, shuffle=True)

I am trying to mask the zero-padded timesteps, but the RepeatVector() layer may be hindering the process; after some time, the mean squared error loss becomes NaN. Can anyone help me out as to how I can include only the relevant timesteps in the loss function and ignore the others?


Solution

  • Each layer in Keras has an input_mask and an output_mask; in your example, the mask is already lost right after the first LSTM layer (which has return_sequences=False). Let me explain this with the following example and show two solutions for achieving masking in an LSTM autoencoder.

    import numpy as np
    import tensorflow as tf
    tfk = tf.keras
    tfkl = tf.keras.layers
    
    time_steps = 3
    n_features = 2
    input_layer = tfkl.Input(shape=(time_steps, n_features))
    # I want to mask the timestep where all the feature values are 1 (usually we pad by 0)
    x = tfk.layers.Masking(mask_value=1)(input_layer)
    x = tfkl.LSTM(2, return_sequences=True)(x)
    x = tfkl.LSTM(2, return_sequences=False)(x)
    x = tfkl.RepeatVector(time_steps)(x)
    x = tfkl.LSTM(2, return_sequences=True)(x)
    x = tfkl.LSTM(2, return_sequences=True)(x)
    x = tfk.layers.Dense(n_features)(x)
    lstm_ae = tfk.models.Model(inputs=input_layer, outputs=x)
    lstm_ae.compile(optimizer='adam', loss='mse')
    lstm_ae.summary()
    
    Model: "model_2"
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    input_3 (InputLayer)         [(None, 3, 2)]            0         
    _________________________________________________________________
    masking_2 (Masking)          (None, 3, 2)              0         
    _________________________________________________________________
    lstm_8 (LSTM)                (None, 3, 2)              40        
    _________________________________________________________________
    lstm_9 (LSTM)                (None, 2)                 40        
    _________________________________________________________________
    repeat_vector_2 (RepeatVecto (None, 3, 2)              0         
    _________________________________________________________________
    lstm_10 (LSTM)               (None, 3, 2)              40        
    _________________________________________________________________
    lstm_11 (LSTM)               (None, 3, 2)              40        
    _________________________________________________________________
    dense_2 (Dense)              (None, 3, 2)              6         
    =================================================================
    Total params: 166
    Trainable params: 166
    Non-trainable params: 0
    _________________________________________________________________
    
    
    for i, l in enumerate(lstm_ae.layers):
        print(f'layer {i}: {l}')
        print(f'has input mask: {l.input_mask}')
        print(f'has output mask: {l.output_mask}')
    
    layer 0: <tensorflow.python.keras.engine.input_layer.InputLayer object at 0x645b49cf8>
    has input mask: None
    has output mask: None
    layer 1: <tensorflow.python.keras.layers.core.Masking object at 0x645b49c88>
    has input mask: None
    has output mask: Tensor("masking_2/Identity_1:0", shape=(None, 3), dtype=bool)
    layer 2: <tensorflow.python.keras.layers.recurrent_v2.LSTM object at 0x645b4d0b8>
    has input mask: Tensor("masking_2/Identity_1:0", shape=(None, 3), dtype=bool)
    has output mask: Tensor("masking_2/Identity_1:0", shape=(None, 3), dtype=bool)
    layer 3: <tensorflow.python.keras.layers.recurrent_v2.LSTM object at 0x645b4dba8>
    has input mask: Tensor("masking_2/Identity_1:0", shape=(None, 3), dtype=bool)
    has output mask: None
    layer 4: <tensorflow.python.keras.layers.core.RepeatVector object at 0x645db0390>
    has input mask: None
    has output mask: None
    layer 5: <tensorflow.python.keras.layers.recurrent_v2.LSTM object at 0x6470b5da0>
    has input mask: None
    has output mask: None
    layer 6: <tensorflow.python.keras.layers.recurrent_v2.LSTM object at 0x6471410f0>
    has input mask: None
    has output mask: None
    layer 7: <tensorflow.python.keras.layers.core.Dense object at 0x647dfdf60>
    has input mask: None
    has output mask: None
    

    As you can see above, the second LSTM layer (return_sequences=False) returns None for its output mask. This makes sense, because the timesteps are lost (the shape has changed) and the layer doesn't know how to pass the mask on; if you check the source code, you will see that an LSTM layer returns its input_mask if return_sequences=True, and None otherwise. The RepeatVector layer is another problem: it doesn't support masking at all, again because the shape has changed. Apart from this bottleneck part (the second LSTM plus RepeatVector), all other parts of the model are able to pass the mask along, so we only have to deal with the bottleneck.
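
    To make the propagation rule concrete, this is roughly what the recurrent layers do in the Keras source (a paraphrased sketch, not the verbatim implementation):

    # paraphrased from the Keras RNN base layer: the mask survives only while
    # the output keeps its timestep dimension
    def compute_mask(self, inputs, mask):
        if self.return_sequences:
            return mask  # (batch, timesteps) mask still lines up with the output
        return None      # timesteps collapsed into one vector; mask is dropped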

    Here are two possible solutions; I will validate each one by computing the loss.

    First solution: ignore the masked timesteps explicitly by passing sample_weight

    # last timestep should be masked because all feature values are 1
    x = np.array([1, 2, 1, 2, 1, 1], dtype='float32').reshape(1, 3, 2)
    print(x)
    array([[[1., 2.],
            [1., 2.],
            [1., 1.]]], dtype=float32)
    
    y = lstm_ae.predict(x)
    print(y)
    array([[[0.00020542, 0.00011909],
            [0.0007361 , 0.00047323],
            [0.00158514, 0.00107504]]], dtype=float32)
    
    # the expected loss should be the sum of squared errors over the first 2
    # timesteps (2 features per timestep) divided by 6. You might expect the
    # division to be by 4, but in the source code it is actually by 6, which
    # doesn't matter much because only the gradient of the loss matters, not
    # the loss value itself.
    
    expected_loss = np.square(x[:, :2, :] - y[:, :2, :]).sum()/6
    print(expected_loss)
    1.665958086649577
    
    actual_loss_with_masking = lstm_ae.evaluate(x=x, y=x)
    print(actual_loss_with_masking)
    1.9984053373336792
    
    # the actual loss still includes the last timestep, which means the mask
    # is not effectively passed to the output layer for calculating the loss
    print(np.square(x-y).sum()/6)
    1.9984052975972493
    
    
    # if we provide a sample_weight of 0 for each timestep that we want to
    # mask, its contribution to the loss is correctly ignored
    lstm_ae.compile(optimizer='adam', loss='mse', sample_weight_mode='temporal')
    sample_weight_array = np.array([1, 1, 0]).reshape(1, 3)  # it means to ignore the last timestep
    actual_loss_with_sample_weight = lstm_ae.evaluate(x=x, y=x, sample_weight=sample_weight_array)
    # the actual loss now is correct
    print(actual_loss_with_sample_weight)
    1.665958046913147
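
    For the original question, the temporal sample weights can be derived directly from the zero-padded data. A minimal sketch, assuming every all-zero timestep in x_train is padding:

    # weight 1 for real timesteps, 0 for zero-padded ones; shape (1326, 356)
    sample_weight_array = (x_train != 0).any(axis=-1).astype('float32')
    model.compile(optimizer='adam', loss='mse', sample_weight_mode='temporal')
    model.fit(x_train, x_train, batch_size=32, epochs=1000,
              validation_split=0.05, sample_weight=sample_weight_array)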
    
    

    Second solution: make a custom bottleneck layer that passes the mask on manually

    class lstm_bottleneck(tf.keras.layers.Layer):
        def __init__(self, lstm_units, time_steps, **kwargs):
            # call super().__init__() first so that Keras' attribute tracking
            # is set up before the sub-layers are assigned
            super(lstm_bottleneck, self).__init__(**kwargs)
            self.lstm_units = lstm_units
            self.time_steps = time_steps
            self.lstm_layer = tfkl.LSTM(lstm_units, return_sequences=False)
            self.repeat_layer = tfkl.RepeatVector(time_steps)
        
        def call(self, inputs):
            # just call the two initialized layers
            return self.repeat_layer(self.lstm_layer(inputs))
        
        def compute_mask(self, inputs, mask=None):
            # RepeatVector restores the timestep dimension, so the incoming
            # (batch, time_steps) mask lines up with the output again and can
            # simply be forwarded
            return mask
    
    time_steps = 3
    n_features = 2
    input_layer = tfkl.Input(shape=(time_steps, n_features))
    # I want to mask the timestep where all the feature values are 1 (usually we pad by 0)
    x = tfk.layers.Masking(mask_value=1)(input_layer)
    x = tfkl.LSTM(2, return_sequences=True)(x)
    x = lstm_bottleneck(lstm_units=2, time_steps=3)(x)
    # x = tfkl.LSTM(2, return_sequences=False)(x)
    # x = tfkl.RepeatVector(time_steps)(x)
    x = tfkl.LSTM(2, return_sequences=True)(x)
    x = tfkl.LSTM(2, return_sequences=True)(x)
    x = tfk.layers.Dense(n_features)(x)
    lstm_ae = tfk.models.Model(inputs=input_layer, outputs=x)
    lstm_ae.compile(optimizer='adam', loss='mse')
    lstm_ae.summary()
    
    Model: "model_2"
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    input_3 (InputLayer)         [(None, 3, 2)]            0         
    _________________________________________________________________
    masking_2 (Masking)          (None, 3, 2)              0         
    _________________________________________________________________
    lstm_10 (LSTM)               (None, 3, 2)              40        
    _________________________________________________________________
    lstm_bottleneck_3 (lstm_bott (None, 3, 2)              40        
    _________________________________________________________________
    lstm_12 (LSTM)               (None, 3, 2)              40        
    _________________________________________________________________
    lstm_13 (LSTM)               (None, 3, 2)              40        
    _________________________________________________________________
    dense_2 (Dense)              (None, 3, 2)              6         
    =================================================================
    Total params: 166
    Trainable params: 166
    Non-trainable params: 0
    _________________________________________________________________
    
    
    for i, l in enumerate(lstm_ae.layers):
        print(f'layer {i}: {l}')
        print(f'has input mask: {l.input_mask}')
        print(f'has output mask: {l.output_mask}')
    
    layer 0: <tensorflow.python.keras.engine.input_layer.InputLayer object at 0x64dbf98d0>
    has input mask: None
    has output mask: None
    layer 1: <tensorflow.python.keras.layers.core.Masking object at 0x64dbf9f60>
    has input mask: None
    has output mask: Tensor("masking_2/Identity_1:0", shape=(None, 3), dtype=bool)
    layer 2: <tensorflow.python.keras.layers.recurrent_v2.LSTM object at 0x64dbf9550>
    has input mask: Tensor("masking_2/Identity_1:0", shape=(None, 3), dtype=bool)
    has output mask: Tensor("masking_2/Identity_1:0", shape=(None, 3), dtype=bool)
    layer 3: <__main__.lstm_bottleneck object at 0x64dbf91d0>
    has input mask: Tensor("masking_2/Identity_1:0", shape=(None, 3), dtype=bool)
    has output mask: Tensor("masking_2/Identity_1:0", shape=(None, 3), dtype=bool)
    layer 4: <tensorflow.python.keras.layers.recurrent_v2.LSTM object at 0x64e04ca20>
    has input mask: Tensor("masking_2/Identity_1:0", shape=(None, 3), dtype=bool)
    has output mask: Tensor("masking_2/Identity_1:0", shape=(None, 3), dtype=bool)
    layer 5: <tensorflow.python.keras.layers.recurrent_v2.LSTM object at 0x64eeb8b00>
    has input mask: Tensor("masking_2/Identity_1:0", shape=(None, 3), dtype=bool)
    has output mask: Tensor("masking_2/Identity_1:0", shape=(None, 3), dtype=bool)
    layer 6: <tensorflow.python.keras.layers.core.Dense object at 0x64ef43208>
    has input mask: Tensor("masking_2/Identity_1:0", shape=(None, 3), dtype=bool)
    has output mask: Tensor("masking_2/Identity_1:0", shape=(None, 3), dtype=bool)
    
    

    As we can see above, the masks are now passed successfully through to the output layer. We will also validate that the loss does not include the masked timesteps.

    # last timestep should be masked because all feature values are 1
    x = np.array([1, 2, 1, 2, 1, 1], dtype='float32').reshape(1, 3, 2)
    print(x)
    array([[[1., 2.],
            [1., 2.],
            [1., 1.]]], dtype=float32)
    
    y = lstm_ae.predict(x)
    print(y)
    array([[[ 0.00065455, -0.00294413],
            [ 0.00166675, -0.00742249],
            [ 0.00166675, -0.00742249]]], dtype=float32)
    
    # the expected loss is the sum of squared errors over the first 2 timesteps divided by 6
    expected_loss = np.square(x[:, :2, :] - y[:, :2, :]).sum()/6
    print(expected_loss)
    1.672815163930257
    
    # now the loss is correct with a custom layer
    actual_loss_with_masking = lstm_ae.evaluate(x=x, y=x)
    print(actual_loss_with_masking)
    1.672815203666687
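
    Applied back to the model in the question, the bottleneck layer replaces the LSTM(100) + RepeatVector(max_len) pair. A sketch under the question's assumptions (zero padding, inputs of shape (max_len, 8)); note that this sketch class does not expose the relu activation used in the original encoder:

    input_layer = tfkl.Input(shape=(max_len, 8))
    x = tfkl.Masking(mask_value=0.0)(input_layer)
    x = lstm_bottleneck(lstm_units=100, time_steps=max_len)(x)
    x = tfkl.LSTM(8, activation='relu', return_sequences=True)(x)
    model = tfk.models.Model(inputs=input_layer, outputs=x)
    model.compile(optimizer='adam', loss='mse')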