I'm watching the Berkeley CS 294 course on Deep Reinforcement Learning, but I've run into trouble with one of the assignments. I tried to implement the equation below. It seems simple, yet I can't reproduce the expected result shown in the comments, so I must be misunderstanding something. Details are in the code below. Can anyone help?
state value function: V^π(s) = Σ_{s'} P(s' | s, π(s)) · [R(s, π(s), s') + γ · V^π(s')]
Here is my code:
def compute_vpi(pi, P, R, gamma):
    """
    :param pi: a deterministic policy (1D array: S -> A)
    :param P: the transition probabilities (3D array: S*A*S -> R)
    :param R: the reward function (3D array: S*A*S -> R)
    :param gamma: the discount factor (scalar)
    :return: vpi, the state-value function for the policy pi
    """
    nS = P.shape[0]
    # YOUR CODE HERE
    ############## Here is what I wrote ######################
    vpi = np.zeros([nS,])
    for i in range(nS):
        for j in range(nS):
            vpi[i] += P[i, pi[i], j] * (R[i, pi[i], j] + gamma * vpi[j])
    ##########################################################
    # raise NotImplementedError()
    assert vpi.shape == (nS,)
    return vpi
pi0 = np.zeros(nS,dtype='i')
compute_vpi(pi0, P_rand, R_rand, gamma)
# Expected output:
# array([ 5.206217 , 5.15900351, 5.01725926, 4.76913715, 5.03154609,
# 5.06171323, 4.97964471, 5.28555573, 5.13320501, 5.08988046])
What I got:
array([ 0.61825794, 0.67755819, 0.60497582, 0.30181986, 0.67560153,
0.88691815, 0.73629922, 1.09325453, 1.15480849, 1.21112992])
Some initialization code:
nr.seed(0) # seed random number generator
nS = 10
nA = 2
# nS: number of states
# nA: number of actions
R_rand = nr.rand(nS, nA, nS) # reward function
# R[i,j,k] := R(s=i, a=j, s'=k),
# i.e., the dimensions are (current state, action, next state)
P_rand = nr.rand(nS, nA, nS)
# P[i,j,k] := P(s'=k | s=i, a=j)
# i.e., dimensions are (current state, action, next state)
P_rand /= P_rand.sum(axis=2,keepdims=True) # normalize conditional probabilities
gamma = 0.90
Actually, assignment 2 provides the solution. If anyone else is taking this course online and runs into the same trouble, look for hints in the next assignment.
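For completeness: since the Bellman equation for a fixed policy is linear, V^π can also be computed exactly with a linear solve instead of iterating. A sketch under the same array layout as above (the function name `compute_vpi_exact` is mine, not from the assignment):

```python
import numpy as np

def compute_vpi_exact(pi, P, R, gamma):
    """Solve (I - gamma * P_pi) v = r_pi for the state-value function."""
    nS = P.shape[0]
    idx = np.arange(nS)
    P_pi = P[idx, pi]                       # (nS, nS): P(s' | s, pi(s))
    r_pi = (P_pi * R[idx, pi]).sum(axis=1)  # (nS,): expected one-step reward under pi
    return np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)

# Usage on the same random MDP as in the init code
rng = np.random.RandomState(0)
nS, nA, gamma = 10, 2, 0.90
R_rand = rng.rand(nS, nA, nS)
P_rand = rng.rand(nS, nA, nS)
P_rand /= P_rand.sum(axis=2, keepdims=True)
pi0 = np.zeros(nS, dtype='i')
vpi = compute_vpi_exact(pi0, P_rand, R_rand, gamma)
```

The fancy indexing `P[idx, pi]` picks out, for each state s, the transition row for the action pi(s), which avoids the explicit double loop entirely.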