Tags: python, machine-learning, openai-gym, q-learning

OpenAI Gym - Maze - Using Q-learning - "ValueError: dir cannot be 0. The only valid dirs are dict_keys(['N', 'E', 'S', 'W'])."


I'm trying to train an agent using Q-learning to solve the maze.
I created the environment using:

import gym
import gym_maze 
import numpy as np

env = gym.make("maze-v0")

Since the states are [x, y] coordinates and I wanted a 2D Q-table, I created a dictionary that maps each state to a row index:

#map each (x, y) state to a row index in the Q-table
states_dic = {}
count = 0
for i in range(5):
    for j in range(5):
        states_dic[i, j] = count
        count+=1
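
Since the two loops above walk a 5x5 grid, the dictionary is just a flattened row index; a minimal equivalent sketch (assuming the same 5x5 maze, with a hypothetical helper name state_index):

#hypothetical helper, equivalent to states_dic[x, y] for the 5x5 grid above
def state_index(x, y, size=5):
    return x * size + y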

Then I created the Q table:

n_actions = env.action_space.n

#Initialize the Q-table to 0
Q_table = np.zeros((len(states_dic),n_actions))
print(Q_table)

[[0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]]

Some variables:

#number of episodes we will run
n_episodes = 10000
#maximum number of iterations per episode
max_iter_episode = 100
#initialize the exploration probability to 1
exploration_proba = 1
#exploration decay rate for the exponential decrease
exploration_decreasing_decay = 0.001
#minimum exploration probability
min_exploration_proba = 0.01
#discount factor
gamma = 0.99
#learning rate
lr = 0.1

rewards_per_episode = list()
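
For reference, here is roughly how that exponential decay schedule behaves over episodes, just plugging the constants above into the same update used at the end of each episode (a standalone sketch, not part of the training loop):

#sketch: exploration probability after episode e, using the constants above
for e in [0, 1000, 2000, 5000, 10000]:
    print(e, max(min_exploration_proba, np.exp(-exploration_decreasing_decay * e)))
#prints roughly 1.0, 0.37, 0.14, then the 0.01 floor for the last two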

But when I try to run the Q-learning algorithm, I get the error in the title.

#we iterate over episodes
for e in range(n_episodes):
    #we initialize the first state of the episode
    current_state = env.reset()
    done = False
    
    #sum the rewards that the agent gets from the environment
    total_episode_reward = 0

    for i in range(max_iter_episode): 
        if np.random.uniform(0,1) < exploration_proba:
            action = env.action_space.sample()
        else:
            action = np.argmax(Q_table[current_state,:])
            
        next_state, reward, done, _ = env.step(action)

        current_coordinate_x = int(current_state[0])
        current_coordinate_y = int(current_state[1])

        next_coordinate_x = int(next_state[0])
        next_coordinate_y = int(next_state[1])


        # update Q-table using the Q-learning iteration    
        current_Q_table_coordinates = states_dic[current_coordinate_x, current_coordinate_y]
        next_Q_table_coordinates = states_dic[next_coordinate_x, next_coordinate_y]
        
        Q_table[current_Q_table_coordinates, action] = (1 - lr) * Q_table[current_Q_table_coordinates, action] + lr * (reward + gamma * max(Q_table[next_Q_table_coordinates, :]))
    
        total_episode_reward = total_episode_reward + reward
        # If the episode is finished, we leave the for loop
        if done:
            break
        current_state = next_state
    #We update the exploration probability using the exponential decay formula
    exploration_proba = max(min_exploration_proba,\
                            np.exp(-exploration_decreasing_decay*e))
    rewards_per_episode.append(total_episode_reward)

Update:
Sharing the full error traceback:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-11-74e6fe3c1212> in <module>()
     25         # The environment runs the chosen action and returns
     26         # the next state, a reward and true if the epiosed is ended.
---> 27         next_state, reward, done, _ = env.step(action)
     28 
     29         ####    ####    ####    ####

/Users/x/anaconda3/envs/y/lib/python3.6/site-packages/gym/wrappers/time_limit.py in step(self, action)
     14     def step(self, action):
     15         assert self._elapsed_steps is not None, "Cannot call env.step() before calling reset()"
---> 16         observation, reward, done, info = self.env.step(action)
     17         self._elapsed_steps += 1
     18         if self._elapsed_steps >= self._max_episode_steps:

/Users/x/anaconda3/envs/y/lib/python3.6/site-packages/gym_maze-0.4-py3.6.egg/gym_maze/envs/maze_env.py in step(self, action)
     75             self.maze_view.move_robot(self.ACTION[action])
     76         else:
---> 77             self.maze_view.move_robot(action)
     78 
     79         if np.array_equal(self.maze_view.robot, self.maze_view.goal):

/Users/x/anaconda3/envs/y/lib/python3.6/site-packages/gym_maze-0.4-py3.6.egg/gym_maze/envs/maze_view_2d.py in move_robot(self, dir)
     93         if dir not in self.__maze.COMPASS.keys():
     94             raise ValueError("dir cannot be %s. The only valid dirs are %s."
---> 95                              % (str(dir), str(self.__maze.COMPASS.keys())))
     96 
     97         if self.__maze.is_open(self.__robot, dir):

ValueError: dir cannot be 1. The only valid dirs are dict_keys(['N', 'E', 'S', 'W']).

2nd update: Fixed, thanks to @Alexander L. Hayes's debugging.

#we iterate over episodes
for e in range(n_episodes):
    #we initialize the first state of the episode
    current_state = env.reset()
    done = False
    
    #sum the rewards that the agent gets from the environment
    total_episode_reward = 0

    for i in range(max_iter_episode): 
        current_coordinate_x = int(current_state[0])
        current_coordinate_y = int(current_state[1])
        current_Q_table_coordinates = states_dic[current_coordinate_x, current_coordinate_y]

        if np.random.uniform(0,1) < exploration_proba:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q_table[current_Q_table_coordinates]))


        next_state, reward, done, _ = env.step(action)

        next_coordinate_x = int(next_state[0])
        next_coordinate_y = int(next_state[1])


        # update our Q-table using the Q-learning iteration
        next_Q_table_coordinates = states_dic[next_coordinate_x, next_coordinate_y]
        
        Q_table[current_Q_table_coordinates, action] = (1 - lr) * Q_table[current_Q_table_coordinates, action] + lr * (reward + gamma * max(Q_table[next_Q_table_coordinates, :]))
    
        total_episode_reward = total_episode_reward + reward
        # If the episode is finished, we leave the for loop
        if done:
            break
        current_state = next_state
    #We update the exploration probability using the exponential decay formula
    exploration_proba = max(min_exploration_proba,\
                            np.exp(-exploration_decreasing_decay*e))
    rewards_per_episode.append(total_episode_reward)


Solution

  • First Guess (related to the answer, but not the answer):

    In gym's environments (e.g. FrozenLake), discrete actions are usually encoded as integers.

    It looks like the error is caused by a non-standard way that this environment represents actions.

    I've annotated what I assume the types might be when the action variable is set:

    if np.random.uniform(0,1) < exploration_proba:
        # Is this a string?
        action = env.action_space.sample()
    else:
        # np.argmax returns an int
        action = np.argmax(Q_table[current_state,:])
    

    Replacing the else branch with something like this might work:

    _action_map = {0: "N", 1: "E", 2: "S", 3: "W"}
    
    action = _action_map[np.argmax(Q_table[current_state,:])]
    

  • Second Guess (not even close, but good for context):

    It looks like this is working out of the MattChanTK/gym-maze repository.


  • Third Guess (really close):

    I've narrowed this down to an issue with how actions are selected from the Q-table. Here's a modified version where I've added a breakpoint:

    for e in range(n_episodes):
        current_state = env.reset()
        done = False
        total_episode_reward = 0
    
        for i in range(max_iter_episode):
            if np.random.uniform(0,1) < exploration_proba:
                action = env.action_space.sample()
            else:
                print("From Q_table:")
                action = np.argmax(Q_table[current_state,:])
                import pdb; pdb.set_trace()
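
    At that breakpoint, the shape of the slice makes the problem visible. A standalone sketch of the same indexing (assuming current_state comes back as an [x, y] pair of integers rather than a single row index):

    import numpy as np

    Q_table = np.zeros((25, 4))
    current_state = [0, 0]                       # raw [x, y] observation, not a row index
    print(Q_table[current_state, :].shape)       # (2, 4): fancy indexing selects two rows
    print(np.argmax(Q_table[current_state, :]))  # argmax runs over 8 flattened entries, not 4 actions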
    

  • Solution (I can't take credit, @Penguin got it ☺️)

    Convert current_state into Q-table coordinates, and cast the result of np.argmax to an int:

    for i in range(max_iter_episode): 
        current_coordinate_x = int(current_state[0])
        current_coordinate_y = int(current_state[1])
        current_Q_table_coordinates = states_dic[current_coordinate_x, current_coordinate_y]
    
        if np.random.uniform(0,1) < exploration_proba:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q_table[current_Q_table_coordinates]))
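
    The int() cast matters because np.argmax returns a NumPy integer rather than a built-in Python int, and, judging from the traceback, the environment appears to dispatch on the action's type: a plain int is looked up in its ACTION table (line 75 of maze_env.py in the traceback), while anything else is passed straight through as a compass direction. A minimal check of that type difference:

    import numpy as np

    a = np.argmax(np.zeros(4))
    print(type(a))                  # a NumPy integer type, e.g. numpy.int64
    print(isinstance(a, int))       # False: NumPy integers are not instances of Python's int
    print(isinstance(int(a), int))  # True once cast back to a plain Python int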