Tags: keras, controls, environment, reinforcement-learning, openai-gym

How to see what happens inside gym.make('env')


In order to build my own environment and reuse some code from GitHub, I need to see what happens inside gym.make('env'), for instance gym.make('CartPole-v0').

Where in the gym GitHub repository can I find it? I found https://github.com/openai/gym/blob/master/gym/envs/classic_control/cartpole.py, but it does not contain make.

How do I write the update section when defining an environment (env) for DQN that is not in the gym library? I am looking for an example environment definition on GitHub or another resource that is not designed for Atari games. I have seen several models, but most of them use OpenAI's gym library and are written for Atari games, which have relatively simple environments. I am looking for a game environment with more complicated states.

I want to write an update function (the environment's step function) that computes the state at t+1 based on the state at t. My problem is: if the next state depends on more than one previous state, how do I implement that? I am looking for an example that demonstrates this. It seems I would be obliged to pass the time t into the environment.

It would be even more helpful if the example were defined for an adaptive control problem.


Solution

  • Store every state the environment visits in an array or a dictionary.

    If your environment needs access to states before t in order to determine the next state at t + 1, those states will be available in the array.

    # array that maintains list of all states the agents experiences
    states_experienced = []
    
    # each time a state is visited, append it to the array
    states_experienced.append(current_state)
    

    If order doesn't matter, or you'd like to index the states by keys, you can use a dictionary instead.
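
    For instance, you can key the dictionary by the time step. This is a minimal sketch; states_by_time, t and current_state are illustrative names, not part of gym:

    # dictionary mapping time step -> state, so earlier states can be looked up by key
    states_by_time = {}

    # record the state observed at time step t
    states_by_time[t] = current_state

    # later, fetch the states from the two previous steps (None if they don't exist yet)
    prev_state = states_by_time.get(t - 1)
    prev_prev_state = states_by_time.get(t - 2)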
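
    To tie this back to the question about a step function that depends on more than one previous state, below is a minimal, hypothetical gym-style environment that keeps a short history of states and uses the two most recent ones in its transition. The dynamics (a toy first-order system with the previous state as extra feedback) and the name HistoryEnv are made up purely for illustration; only the Env/reset/step structure follows the classic gym API (4-tuple return from step).

    from collections import deque

    import numpy as np
    import gym
    from gym import spaces

    class HistoryEnv(gym.Env):
        """Toy environment whose next state depends on the two most recent states."""

        def __init__(self):
            self.action_space = spaces.Discrete(3)      # e.g. decrease / hold / increase
            self.observation_space = spaces.Box(
                low=-10.0, high=10.0, shape=(1,), dtype=np.float32)
            self.history = deque(maxlen=2)              # stores the states at t-1 and t
            self.t = 0

        def reset(self):
            self.t = 0
            self.history.clear()
            state = np.zeros(1, dtype=np.float32)
            self.history.append(state)
            return state

        def step(self, action):
            u = float(action) - 1.0                     # map {0, 1, 2} -> {-1, 0, +1}
            s_t = self.history[-1]
            s_prev = self.history[0] if len(self.history) > 1 else s_t
            # toy dynamics: the next state mixes the current and previous states
            # with the control input (illustrative numbers only)
            s_next = 0.8 * s_t + 0.1 * s_prev + 0.5 * u
            s_next = np.clip(s_next, -10.0, 10.0).astype(np.float32)
            self.history.append(s_next)
            self.t += 1
            reward = -float(np.abs(s_next).sum())       # e.g. drive the state toward zero
            done = self.t >= 200
            return s_next, reward, done, {}

    Because the history lives inside the environment, the agent's step call stays the usual env.step(action); the environment itself decides how many past states (and which time index t) it needs to compute the transition.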