Tags: algorithm, machine-learning, artificial-intelligence, reinforcement-learning, q-learning

Criteria for convergence in Q-learning


I am experimenting with the Q-learning algorithm. I have read about it from several sources and understand how it works; however, there seems to be no clear, mathematically backed convergence criterion.

Most sources recommend iterating a fixed number of times (for example, N = 1000), while others say convergence is achieved once every state-action pair (s, a) has been visited infinitely often. But how many visits count as "infinitely often"? What is the best criterion for someone who wants to run the algorithm by hand?

I would be grateful if someone could educate me on this. I would also appreciate any articles to this effect.

Regards.


Solution

  • Q-Learning was a major breakthrough in reinforcement learning precisely because it was the first algorithm with guaranteed convergence to the optimal policy. It was originally proposed in (Watkins, 1989) and its convergence proof was refined in (Watkins & Dayan, 1992).

    In short, two conditions must be met to guarantee convergence in the limit, meaning that the policy will become arbitrarily close to the optimal policy after an arbitrarily long period of time. Note that these conditions say nothing about how fast the policy will approach the optimal policy.

    1. The learning rates must approach zero, but not too quickly. Formally, the sum of the learning rates must diverge while the sum of their squares must converge: Σₜ αₜ = ∞ and Σₜ αₜ² < ∞. An example sequence with these properties is 1/1, 1/2, 1/3, 1/4, ...
    2. Each state-action pair must be visited infinitely often. This has a precise mathematical definition: each action must have a non-zero probability of being selected by the policy in every state, i.e. π(s, a) > 0 for all (s, a). In practice, using an ε-greedy policy (with ε > 0) ensures that this condition is satisfied. Both conditions are illustrated in the sketch after this list.
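
    As a concrete illustration, here is a minimal tabular Q-learning sketch. The 5-state chain environment, the constants (EPSILON, GAMMA, episode count), and the 1/N(s, a) learning-rate schedule are illustrative assumptions, not something from the question. The per-visit learning rate 1/N(s, a) satisfies condition 1, and the ε-greedy behaviour policy with ε > 0 satisfies condition 2.

    ```python
    import random
    from collections import defaultdict

    # Minimal tabular Q-learning on a hypothetical 5-state chain MDP
    # (toy environment assumed for illustration only).
    N_STATES = 5          # states 0..4; state 4 is terminal
    ACTIONS = [0, 1]      # 0 = left, 1 = right
    GAMMA = 0.95
    EPSILON = 0.1

    def step(state, action):
        """Chain dynamics: 'right' moves toward the goal, 'left' away from it."""
        next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        done = next_state == N_STATES - 1
        return next_state, reward, done

    Q = defaultdict(float)     # Q[(s, a)], implicitly 0 for unseen pairs
    visits = defaultdict(int)  # N(s, a), used to decay the learning rate

    def epsilon_greedy(state):
        # Condition 2: with probability EPSILON explore uniformly, so pi(s, a) > 0 everywhere.
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    for episode in range(5000):
        state, done = 0, False
        while not done:
            action = epsilon_greedy(state)
            next_state, reward, done = step(state, action)
            visits[(state, action)] += 1
            # Condition 1: alpha = 1/N(s, a), so the alphas sum to infinity
            # while their squares sum to a finite value.
            alpha = 1.0 / visits[(state, action)]
            target = reward + (0.0 if done else GAMMA * max(Q[(next_state, a)] for a in ACTIONS))
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = next_state

    # Greedy policy extracted from Q; for this chain it should settle on "always go right".
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
    ```

    Note that a constant learning rate does not satisfy condition 1 (the sum of squares diverges), so the theoretical guarantee is lost; in practice, though, a small constant α is very common and usually works well.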