Tags: keras, tensorflow2.0, reinforcement-learning, keras-rl

Keras LSTM layers in Keras-rl


I am trying to implement a DQN agent using Keras-rl. The problem is that the model I define needs an LSTM layer in its architecture:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Reshape, LSTM, Dense

def build_model():
    model = Sequential()
    model.add(Flatten(input_shape=(1, 8000)))
    model.add(Reshape(target_shape=(200, 40)))
    model.add(LSTM(20))
    model.add(Dense(3, activation='softmax'))
    return model

When I execute the RL agent I get the following error:

RuntimeError: Attempting to capture an EagerTensor without building a function.

The error is related to the use of the LSTM layer and to the following line of code:

tf.compat.v1.disable_eager_execution()

Using a Dense layer instead of an LSTM:

def build_model():
    model = Sequential()
    model.add(Flatten(input_shape=(1, 8000)))
    model.add(Dense(20))
    model.add(Dense(3, activation='softmax'))
    return model

and keeping eager execution disabled, I do not get the error reported above. If I remove the call that disables eager execution while keeping the LSTM layer, I get other errors.

Can anyone help me understand the reason for this error?


Solution

  • The keras-rl library does not have explicit support for TensorFlow 2.0, so it will not work with that version of TensorFlow. The library is sparsely updated and its last release is around two years old (from 2018), so if you want to use it you should use TensorFlow 1.x.
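
For illustration, here is a minimal sketch of wiring an LSTM-based model into keras-rl's DQNAgent under TensorFlow 1.x with standalone Keras. The CartPole environment, layer sizes, version pins, and hyperparameters are placeholder assumptions standing in for the setup in the question (whose observations flatten to 8000 values and which has 3 actions), not part of the original question or answer:

# Assumed versions (not from the question), e.g.:
#   pip install "tensorflow<2" keras==2.2.4 keras-rl gym
import gym
from keras.models import Sequential
from keras.layers import Flatten, Reshape, LSTM, Dense
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.policy import EpsGreedyQPolicy
from rl.memory import SequentialMemory

# Placeholder environment; stands in for the question's environment
# (observations flattening to 8000 values, 3 actions).
env = gym.make('CartPole-v1')
nb_actions = env.action_space.n

# Same structure as in the question, scaled down to CartPole's
# 4-dimensional observation; DQN predicts one Q-value per action,
# so the output layer is linear rather than softmax.
model = Sequential()
model.add(Flatten(input_shape=(1,) + env.observation_space.shape))
model.add(Reshape(target_shape=(2, 2)))
model.add(LSTM(20))
model.add(Dense(nb_actions, activation='linear'))

memory = SequentialMemory(limit=50000, window_length=1)
policy = EpsGreedyQPolicy()
dqn = DQNAgent(model=model, nb_actions=nb_actions, memory=memory,
               nb_steps_warmup=100, target_model_update=1e-2, policy=policy)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])
dqn.fit(env, nb_steps=10000, visualize=False, verbose=1)

With a TensorFlow 1.x backend there is no eager execution to disable, so the tf.compat.v1.disable_eager_execution() call is no longer needed.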