I am experimenting with the Q-learning algorithm. I have read about it from different sources and understood the algorithm; however, there seems to be no clear convergence criterion that is mathematically backed.
Most sources recommend iterating a fixed number of times (for example, N = 1000), while others say convergence is achieved when all state-action pairs (s, a) are visited infinitely often. But how much is "infinitely often"? What is the best criterion for someone who wants to run the algorithm by hand?
I would be grateful if someone could educate me on this. I would also appreciate any articles on this topic.
Regards.
Q-Learning was a major breakthrough in reinforcement learning precisely because it was the first algorithm with guaranteed convergence to the optimal policy. It was originally proposed in (Watkins, 1989) and its convergence proof was refined in (Watkins & Dayan, 1992).
In short, two conditions must be met to guarantee convergence in the limit, meaning that the learned Q-values (and hence the greedy policy) become arbitrarily close to optimal as the number of updates grows without bound. Note that these conditions say nothing about how fast the policy will approach the optimal policy. The two conditions are:
1. The learning rates must decay appropriately: for each state-action pair, the sum of the step sizes must diverge while the sum of their squares stays finite. A simple schedule that satisfies this is α_n = 1/n, i.e. 1/1, 1/2, 1/3, 1/4, ...
2. Every state-action pair (s, a) must be visited infinitely often. This holds whenever the behaviour policy assigns positive probability to every action in every state, i.e. π(s, a) > 0 for all (s, a). In practice, using an ε-greedy policy (where ε > 0) ensures that this condition is satisfied.
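To make the two conditions concrete, here is a minimal tabular Q-learning sketch. The tiny chain environment, the constants, and the `step` function are illustrative assumptions of mine (they are not from Watkins's papers); the point is only to show a 1/n learning-rate schedule and an ε-greedy behaviour policy with ε > 0.

```python
import random
from collections import defaultdict

# Illustrative chain MDP: states 0..4, actions 0 = left, 1 = right,
# reward 1 for reaching the terminal state 4. Constants are assumptions.
N_STATES, N_ACTIONS, GAMMA, EPSILON = 5, 2, 0.9, 0.1

Q = defaultdict(float)     # Q[(s, a)], initialised to 0
visits = defaultdict(int)  # per-(s, a) visit counts for the 1/n learning rate

def step(s, a):
    """Deterministic dynamics: action 1 moves right, action 0 moves left."""
    s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == N_STATES - 1 else 0.0
    return s_next, reward

for episode in range(5000):
    s = 0
    while s != N_STATES - 1:
        # Condition 2: epsilon-greedy behaviour policy with epsilon > 0,
        # so every action keeps a positive selection probability in every state.
        if random.random() < EPSILON:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda act: Q[(s, act)])

        s_next, r = step(s, a)

        # Condition 1: step size alpha = 1/n(s, a), i.e. 1/1, 1/2, 1/3, ...,
        # whose sum diverges while the sum of squares stays finite.
        visits[(s, a)] += 1
        alpha = 1.0 / visits[(s, a)]

        target = r + GAMMA * max(Q[(s_next, act)] for act in range(N_ACTIONS))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s_next

print({(s, a): round(Q[(s, a)], 3) for s in range(N_STATES) for a in range(N_ACTIONS)})
```

Running this, the Q-values for "move right" settle near γ^k for states k steps from the goal, which is what the theory predicts for this toy problem; in practice people stop when the largest change in Q between sweeps falls below a small tolerance, even though the formal guarantee is only asymptotic.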