python, machine-learning, keras, layer, seq2seq

Add more layers in seq2seq model


In the sample seq2seq code given by fchollet, how can I add more LSTM layers to the encoder and decoder? I'm having some trouble with the shapes and I'm a bit confused in general. Thanks.


Solution

  • Keras' functional API lets you call layers on tensors. This lets you chain another layer on top of an existing layer's output by calling the new layer on that output. For example here:

    from keras.layers import Input, LSTM

    encoder_inputs = Input(shape=(None, num_encoder_tokens))
    # The first LSTM returns its full output sequence so a second LSTM
    # can be stacked on top of it by calling that layer on the output.
    encoder = LSTM(latent_dim, return_sequences=True)
    # Only the final LSTM keeps its states (for initialising the decoder).
    encoder_outputs, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder(encoder_inputs))
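Putting the same idea to work on both sides, here is a minimal sketch of a full stacked encoder–decoder, following the structure of fchollet's seq2seq example. The token counts and `latent_dim` are illustrative placeholders, and seeding only the first decoder layer with the encoder states is one common choice (you could also seed every layer):

```python
from keras.layers import Input, LSTM, Dense
from keras.models import Model

# Illustrative placeholder dimensions.
num_encoder_tokens = 71
num_decoder_tokens = 93
latent_dim = 256

# Encoder: the first LSTM returns sequences so the second can consume
# them; only the final LSTM's states are kept to initialise the decoder.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder_seq = LSTM(latent_dim, return_sequences=True)(encoder_inputs)
encoder_outputs, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_seq)
encoder_states = [state_h, state_c]

# Decoder: every stacked LSTM returns sequences; the encoder states
# seed the first decoder layer.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_seq = LSTM(latent_dim, return_sequences=True)(decoder_inputs,
                                                      initial_state=encoder_states)
decoder_seq = LSTM(latent_dim, return_sequences=True)(decoder_seq)
decoder_outputs = Dense(num_decoder_tokens, activation='softmax')(decoder_seq)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
```

The model can then be compiled and trained exactly as in the original example, since only the internal layer stack changed, not the inputs or outputs.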