I am completely new to the field of LSTMs. Are there any tips for optimizing my autoencoder for the task of reconstructing sequences of length 300?
The bottleneck layer should have 10-15 neurons.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

timesteps = 300  # length of each input sequence

model = Sequential()
# Encoder: compress the sequence into the last hidden state of the second LSTM
model.add(LSTM(128, activation='relu', input_shape=(timesteps, 1), return_sequences=True))
model.add(LSTM(64, activation='relu', return_sequences=False))
# Repeat the latent vector once per timestep so the decoder can unroll it
model.add(RepeatVector(timesteps))
# Decoder: mirror the encoder and emit one reconstructed value per timestep
model.add(LSTM(64, activation='relu', return_sequences=True))
model.add(LSTM(128, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(1)))
model.compile(optimizer='adam', loss='mae')
code is copied from: https://towardsdatascience.com/step-by-step-understanding-lstm-autoencoder-layers-ffab055b6352
At the moment the reconstruction is only a sequence of NaN values: [nan, nan, nan ... nan, nan]
Sequences look similar to the picture below:
I believe the activation function you are using here, i.e. 'relu', is the likely culprit: it is unbounded, so inside a recurrent layer the activations can grow without limit from one timestep to the next, the gradients explode, and the loss and outputs eventually become NaN. Try a bounded activation such as 'tanh' (the default for LSTM layers), which is generally better suited for this kind of data.
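As a concrete sketch of that fix: the same architecture with 'tanh' activations, plus gradient clipping on the optimizer as an extra safeguard against NaN losses. The bottleneck size of 16 units is my assumption, chosen to be close to the 10-15 you mentioned, and `timesteps = 300` comes from your sequence length; adjust both to your data.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense
from tensorflow.keras.optimizers import Adam

timesteps = 300  # sequence length from the question

model = Sequential([
    Input(shape=(timesteps, 1)),
    # 'tanh' is bounded in [-1, 1], so activations cannot blow up the way
    # unbounded 'relu' outputs can in a recurrent loop
    LSTM(128, activation='tanh', return_sequences=True),
    LSTM(16, activation='tanh', return_sequences=False),  # bottleneck (assumed: ~10-15 units)
    RepeatVector(timesteps),
    LSTM(16, activation='tanh', return_sequences=True),
    LSTM(128, activation='tanh', return_sequences=True),
    TimeDistributed(Dense(1)),
])
# clipnorm rescales any gradient whose norm exceeds 1.0, a common guard against NaNs
model.compile(optimizer=Adam(clipnorm=1.0), loss='mae')

# Untrained forward pass on dummy data just to check shapes and numerical sanity
x = np.random.uniform(-1.0, 1.0, size=(4, timesteps, 1)).astype('float32')
recon = model.predict(x, verbose=0)
print(recon.shape)  # (4, 300, 1)
```

Scaling your inputs to [-1, 1] (matching tanh's output range) before training also tends to help stability, and 'tanh' additionally lets the layers use the fast cuDNN kernel on GPU.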