
Deep Q Learning agent finds solution then diverges again


I am trying to train a DQN agent to solve OpenAI Gym's CartPole-v0 environment. I started from this person's implementation just to get some hands-on experience. What I noticed is that during training, after many episodes, the agent finds a solution and is able to keep the pole upright for the maximum number of timesteps. However, with further training the policy appears to become more stochastic: the agent can no longer keep the pole upright consistently and drifts in and out of a good policy. I'm confused by this. Why wouldn't further training and experience help the agent? By the later episodes my epsilon for random actions is very low, so the agent should be acting almost entirely on its own predictions. Why, then, does it fail to keep the pole upright on some training episodes and succeed on others?

Here is my reward-vs-episode curve from training the implementation linked above.



Solution

  • This actually looks fairly normal to me; in fact, I guessed your results were from CartPole before reading the whole question.

    I have a few suggestions:

    • When you're plotting results, you should plot averages over a few random seeds. Not only is this generally good practice (it shows how sensitive your algorithm is to the seed), it also smooths out your graphs and gives you a better picture of the "skill" of your agent. Don't forget that both the environment and the policy are stochastic, so it's not completely crazy that your agent exhibits this kind of behavior. (See the plotting sketch after this list.)
    • Assuming you're implementing epsilon-greedy exploration, what is your epsilon value? Are you decaying it over time? The issue could also be that your agent is still exploring a lot even after it has found a good policy. (A decay-schedule sketch follows the list.)
    • Have you played around with hyperparameters such as the learning rate, epsilon, the network size, and the replay buffer size? Those can also be the culprit. (An illustrative starting configuration is sketched below.)
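
On the first point, a minimal sketch of what "average over a few seeds" could look like. `train_one_run` is a hypothetical stand-in for your own training loop; it is assumed to take a seed and return one reward per episode, with the same number of episodes for every seed.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_average_over_seeds(train_one_run, seeds=(0, 1, 2, 3, 4), window=20):
    # One row of per-episode rewards per seed (assumes equal-length runs).
    all_rewards = np.array([train_one_run(seed) for seed in seeds])

    # Mean and spread across seeds for each episode.
    mean = all_rewards.mean(axis=0)
    std = all_rewards.std(axis=0)

    # Optional moving average to smooth out per-episode noise further.
    kernel = np.ones(window) / window
    smoothed = np.convolve(mean, kernel, mode="valid")

    episodes = np.arange(len(mean))
    plt.plot(episodes[:len(smoothed)], smoothed, label="mean reward (smoothed)")
    plt.fill_between(episodes, mean - std, mean + std, alpha=0.2,
                     label="±1 std across seeds")
    plt.xlabel("Episode")
    plt.ylabel("Reward")
    plt.legend()
    plt.show()
```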
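On the second point, a sketch of a decayed epsilon schedule. The start value, floor, and decay rate here are assumptions for illustration, not values taken from the linked implementation.

```python
# Exponentially decayed epsilon with a small exploration floor.
EPS_START = 1.0
EPS_END = 0.01      # keep a little exploration even late in training
EPS_DECAY = 0.995   # multiplicative decay per episode

def epsilon_at(episode):
    """Epsilon for the given episode, clipped at the floor."""
    return max(EPS_END, EPS_START * EPS_DECAY ** episode)

# Example: by a few hundred episodes epsilon has essentially hit its floor,
# so almost every action comes from the greedy policy.
for ep in (0, 100, 500):
    print(ep, round(epsilon_at(ep), 4))
```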
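And on the third point, a purely illustrative set of starting values you might sweep around for CartPole; these are common defaults I'd assume as a baseline, not the settings from the linked code.

```python
# Hypothetical starting configuration for a CartPole DQN hyperparameter sweep.
hyperparams = {
    "learning_rate": 1e-3,       # try a range like 1e-4 .. 1e-2
    "gamma": 0.99,               # discount factor
    "hidden_sizes": (64, 64),    # network width/depth
    "replay_buffer_size": 50_000,
    "batch_size": 64,
    "target_update_every": 500,  # steps between target-network syncs
    "eps_start": 1.0,
    "eps_end": 0.01,
    "eps_decay": 0.995,
}
```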