Tags: python, tensorflow, keras, reinforcement-learning, dqn

Keras model suddenly started outputting Tensors. How to revert that?


So I was learning DQNs, trying to solve the CartPole env:

import gymnasium as gym
import numpy as np
from rl.agents import DQNAgent
from rl.memory import SequentialMemory
from rl.policy import BoltzmannQPolicy
from tensorflow.python.keras.layers import InputLayer, Dense
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.optimizer_v2.adam import Adam

if __name__ == '__main__':
    env = gym.make("CartPole-v1")

    # tensorflow.compat.v1.experimental.output_all_intermediates(True)

    model = Sequential()
    model.add(InputLayer(input_shape=(1, 4)))
    model.add(Dense(24, activation="relu"))
    # model.add(GRU(24))
    model.add(Dense(24, activation="relu"))
    model.add(Dense(env.action_space.n, activation="linear"))
    model.build()

    print(model.summary())

    agent = DQNAgent(
        model=model,
        memory=SequentialMemory(limit=50000, window_length=1),
        policy=BoltzmannQPolicy(),
        nb_actions=env.action_space.n,
        nb_steps_warmup=100,
        target_model_update=0.01
    )

    agent.compile(Adam(learning_rate=0.001), metrics=["mae"])
    agent.fit(env, nb_steps=100000, visualize=False, verbose=1)

    results = agent.test(env, nb_episodes=10, visualize=True)
    print(np.mean(results.history["episode_reward"]))

    env.close()

Everything was fine and I was able to solve this env, but at some point I wanted to try adding a GRU layer to see how it would affect learning. To make that work I used tensorflow.compat.v1.experimental.output_all_intermediates(True). And now, even without the GRU layer, I get the following error:

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 1, 24)             120
_________________________________________________________________
dense_1 (Dense)              (None, 1, 24)             600
_________________________________________________________________
dense_2 (Dense)              (None, 1, 2)              50
=================================================================
Total params: 770
Trainable params: 770
Non-trainable params: 0
_________________________________________________________________
None
Traceback (most recent call last):
  File "cart_pole.py", line 25, in <module>
    agent = DQNAgent(
  File "lib\site-packages\rl\agents\dqn.py", line 107, in __init__
    raise ValueError(f'Model output "{model.output}" has invalid shape. DQN expects a model that has one dimension for each action, in this case {self.nb_actions}.')
ValueError: Model output "Tensor("dense_2/BiasAdd:0", shape=(None, 1, 2), dtype=float32)" has invalid shape. DQN expects a model that has one dimension for each action, in this case 2.

What I'm assuming is happening is that adding tensorflow.compat.v1.experimental.output_all_intermediates(True) made my model output a Tensor instead of what it was outputting before. Passing False or None to output_all_intermediates has no effect at all. How do I revert my model so it works with the DQN agent again?


Solution

  • I think the issue is related to model.add(InputLayer(input_shape=(1, 4))). Try input_shape=(4,) instead: the model output then has shape (None, 2) rather than (None, 1, 2), which is the one-dimension-per-action shape the DQNAgent check in the traceback expects. See the sketch below.
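
For reference, a minimal sketch of that change applied to the model from the question (only the layer definitions are shown; the agent setup stays as in the question). With a flat input shape the Dense stack produces an output of shape (None, 2), which satisfies the check quoted in the traceback:

from tensorflow.python.keras.layers import InputLayer, Dense
from tensorflow.python.keras.models import Sequential

model = Sequential()
# Flat observation vector: CartPole-v1 observations have 4 features.
model.add(InputLayer(input_shape=(4,)))
model.add(Dense(24, activation="relu"))
model.add(Dense(24, activation="relu"))
# env.action_space.n == 2 for CartPole-v1
model.add(Dense(2, activation="linear"))
model.build()

print(model.output.shape)  # (None, 2) -- no extra middle dimension, so the shape check passes

The point is the output shape, not the fact that the output is a Tensor: DQNAgent rejects (None, 1, 2) because of the extra middle dimension introduced by input_shape=(1, 4).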