
Relationship of Horizon and Discount factor in Reinforcement Learning


What is the connection between the discount factor gamma and the horizon in RL?

What I have learned so far is that the horizon is the agent's time to live. Intuitively, an agent with a finite horizon will choose actions differently than one that has to live forever. In the latter case, the agent will try to maximize all the expected rewards it may get far in the future.

But the idea of the discount factor seems to be the same. Do values of gamma near zero make the horizon finite?


Solution

  • Horizon refers to how many steps into the future the agent cares about the reward it can receive, which is a little different from the agent's time to live. In general, you could define any arbitrary horizon you want as the objective. You could define a 10-step horizon, in which the agent makes a decision that will enable it to maximize the reward it will receive in the next 10 time steps. Or you could choose a 100-, 1000-, or n-step horizon!

    Usually, the effective n-step horizon is defined using n = 1 / (1 - gamma). Therefore, a 10-step horizon corresponds to gamma = 0.9, while a 100-step horizon corresponds to gamma = 0.99.

    Therefore, any value of gamma less than 1 implies that the effective horizon is finite.
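The rule of thumb above can be sketched numerically. This is a minimal illustration (not from the original answer, and the function names are my own): it computes the effective horizon n = 1 / (1 - gamma) and, for a constant per-step reward, the fraction of the total discounted return that the first n steps contribute, which is 1 - gamma^n by the geometric-series formula.

```python
def effective_horizon(gamma: float) -> float:
    """Rule-of-thumb horizon implied by a discount factor gamma < 1."""
    assert 0 <= gamma < 1, "gamma must be in [0, 1) for a finite effective horizon"
    return 1.0 / (1.0 - gamma)

def discounted_weight(gamma: float, n: int) -> float:
    """Fraction of the total discounted return contributed by the first n steps.

    For a constant reward r, sum_{t=0}^{n-1} gamma^t * r divided by
    sum_{t=0}^{inf} gamma^t * r simplifies to 1 - gamma^n.
    """
    return 1.0 - gamma ** n

for gamma in (0.9, 0.99):
    n = round(effective_horizon(gamma))
    print(f"gamma={gamma}: effective horizon ~ {n} steps, "
          f"which carry {discounted_weight(gamma, n):.0%} of the total return")
```

Note that the first n steps carry only about 63% of the discounted return (roughly 1 - 1/e), so n = 1 / (1 - gamma) is a heuristic timescale, not a hard cutoff: rewards beyond it still matter, just with rapidly shrinking weight.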