Tags: python, tensorflow, machine-learning, reinforcement-learning, openai-gym

Why does a DQN for the CartPole game have an ascending reward while the loss is not descending?


I wrote a DQN to play the OpenAI Gym CartPole game with TensorFlow and tf_agents. The code looks like the following:

import tensorflow as tf
from tensorflow.keras.optimizers import Adam
from tf_agents.agents.dqn import dqn_agent
from tf_agents.environments import suite_gym, tf_py_environment
from tf_agents.networks import q_network
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common

import config  # my own module holding the default hyperparameters


# Average return of `policy` over `num_episodes` evaluation episodes.
def compute_avg_return(environment, policy, num_episodes=10):
    total_return = 0.0
    for _ in range(num_episodes):
        time_step = environment.reset()
        episode_return = 0.0
        while not time_step.is_last():
            action_step = policy.action(time_step)
            time_step = environment.step(action_step.action)
            episode_return += time_step.reward
        total_return += episode_return
    avg_return = total_return / num_episodes
    return avg_return.numpy()[0]


# Take one step with `policy` in the environment and store the transition in the replay buffer.
def collect_step(environment, policy, buffer):
    time_step = environment.current_time_step()
    action_step = policy.action(time_step)
    next_time_step = environment.step(action_step.action)
    traj = trajectory.from_transition(time_step, action_step, next_time_step)
    buffer.add_batch(traj)


def collect_data(env, policy, buffer, steps):
    for _ in range(steps):
        collect_step(env, policy, buffer)


# Build the environments, agent, and replay buffer, then run the DQN training loop.
def train_model(
    num_iterations=config.default_num_iterations,
    collect_steps_per_iteration=config.default_collect_steps_per_iteration,
    replay_buffer_max_length=config.default_replay_buffer_max_length,
    batch_size=config.default_batch_size,
    learning_rate=config.default_learning_rate,
    log_interval=config.default_log_interval,
    num_eval_episodes=config.default_num_eval_episodes,
    eval_interval=config.default_eval_interval,
    checkpoint_saver_directory=config.default_checkpoint_saver_directory,
    model_saver_directory=config.default_model_saver_directory,
    visualize=False,
    static_plot=False,
):
    env_name = 'CartPole-v0'
    train_py_env = suite_gym.load(env_name)
    eval_py_env = suite_gym.load(env_name)
    train_env = tf_py_environment.TFPyEnvironment(train_py_env)
    eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
    fc_layer_params = (100,)
    q_net = q_network.QNetwork(
        train_env.observation_spec(),
        train_env.action_spec(),
        fc_layer_params=fc_layer_params)
    optimizer = Adam(learning_rate=learning_rate)
    train_step_counter = tf.Variable(0)
    agent = dqn_agent.DqnAgent(
        train_env.time_step_spec(),
        train_env.action_spec(),
        q_network=q_net,
        optimizer=optimizer,
        td_errors_loss_fn=common.element_wise_squared_loss,
        train_step_counter=train_step_counter)
    agent.initialize()
    replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
        data_spec=agent.collect_data_spec,
        batch_size=train_env.batch_size,
        max_length=replay_buffer_max_length)
    dataset = replay_buffer.as_dataset(
        num_parallel_calls=3,
        sample_batch_size=batch_size,
        num_steps=2).prefetch(3)
    iterator = iter(dataset)
    agent.train_step_counter.assign(0)
    avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
    returns = []
    loss = []
    for _ in range(num_iterations):
        # Collect a few steps with the exploration policy and add them to the buffer.
        for _ in range(collect_steps_per_iteration):
            collect_step(train_env, agent.collect_policy, replay_buffer)
        # Sample a batch of transitions from the buffer and take one training step.
        experience, unused_info = next(iterator)
        train_loss = agent.train(experience).loss
        loss.append(train_loss.numpy())
        step = agent.train_step_counter.numpy()
        # Evaluate the greedy policy and record the average return.
        avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
        returns.append(avg_return)
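
For reference, the two plots below can be produced from the `loss` and `returns` lists, e.g. with a snippet like this at the end of train_model (matplotlib assumed; this plotting code is only a sketch and not part of the training logic above):

import matplotlib.pyplot as plt

# Plot the per-iteration training loss and the evaluation return side by side.
fig, (loss_ax, return_ax) = plt.subplots(1, 2, figsize=(10, 4))
loss_ax.plot(loss)
loss_ax.set_xlabel('iteration')
loss_ax.set_ylabel('training loss')
return_ax.plot(returns)
return_ax.set_xlabel('iteration')
return_ax.set_ylabel('average return')
plt.show()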

Although the average reward improves and eventually reaches 200, the maximum score, the loss does not obviously decrease.

Here is the loss plot:

[loss plot]

Here is the reward plot:

[reward plot]

The good news is that the model is successful and plays the game really well. However, I would love some insight into why an extremely high loss can still yield a good reward.


Solution

  • It might be related to the scale of your Q-values. I see the same behavior in my DQN losses: my agent easily solves the environment, yet the loss keeps growing throughout training (a small numerical sketch at the end of this answer illustrates why).

    If you look at this part of the DQN algorithm, you might get some insight:

    [image: excerpt of the DQN algorithm showing how the target y and the loss are computed]

    • First, you will notice that the target y is built from the max Q-value of the target network. This can induce a constant overestimation of the target Q-value, as demonstrated in the Double-DQN paper. Since the target may be constantly overestimated while the prediction is not, a delta will always exist between predictions and targets.
    • Second, this delta grows in scale as the Q-values themselves grow. I think this is normal behavior: your Q function learns that many states have a large value, so the error at the beginning of training can be much smaller than the error at the end.
    • Third, the target Q-network is frozen for a number of steps while the prediction Q-network changes constantly, which also contributes to this delta.

    Hope this helps. Note that this is a purely intuitive, personal explanation; I did not run any tests to check these hypotheses. I think the second point is probably the most important here.
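
    To make the first two points concrete, here is a small, self-contained numerical sketch (plain NumPy with made-up values, not TF-Agents code). It writes out the DQN target y = r + gamma * max_a Q_target(s', a) and the squared TD error that element_wise_squared_loss computes, then shows (a) that the same relative prediction error produces a much larger squared loss once the Q-values reach their true scale, and (b) that a max over noisy target estimates is biased upward, which is the overestimation the Double-DQN paper addresses:

    import numpy as np

    gamma = 0.99  # illustrative discount factor, not necessarily the agent's setting

    # Per-transition quantities in the DQN update:
    #     target  y = r + gamma * max_a Q_target(s', a)
    #     loss      = (y - Q_online(s, a)) ** 2      (element-wise squared loss)
    def squared_td_error(q_sa, target_y):
        return (target_y - q_sa) ** 2

    # (a) The squared loss scales with the square of the Q-value scale: the same 5%
    #     mismatch between prediction and target gives a tiny loss while the Q-values
    #     are small, and a large one once they approach their true scale (~200 here).
    rel_error = 0.05
    print(squared_td_error(5.0, 5.0 * (1 + rel_error)))      # 0.0625
    print(squared_td_error(200.0, 200.0 * (1 + rel_error)))  # 100.0

    # (b) The max over the target network's noisy estimates is biased upward: for two
    #     actions that are both truly worth 0, the max of noisy estimates averages > 0.
    rng = np.random.default_rng(0)
    noisy_q_next = rng.normal(loc=0.0, scale=1.0, size=(100_000, 2))
    print(noisy_q_next.max(axis=1).mean())                   # about 0.56, not 0

    In practice, one way to check this on your run would be to log the mean of the online network's max Q-values next to the training loss; if the loss roughly tracks the square of that scale, the growing loss is mostly a units effect rather than a sign that learning is failing.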