The programs I use are Google Colab, VS Code, VS Code Jupyter notebooks, Kaggle, and PyCharm, with Python 3.10.7. I have tried to render the CartPole environment in every one of them, with many different gym package versions, but I can't get it to render. I'd like to know which Python version, which gym version, or which setup in general (IDE vs. notebook, etc.) I should use.
Speaking for VS Code, this is my setup and what I ran (packages: pygame 2.1.0, gym 0.26.1, gym-notices 0.0.8, Python 3.10.7).
My code runs without errors, but what I want is to see the rendered CartPole window, and I never get it.
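Roughly, what I run is a plain loop like the sketch below (a minimal sketch assuming the standard gym 0.26 API; the env id and step count are just placeholders, not my exact script):

import gym

env = gym.make("CartPole-v1")                    # no render_mode passed here
obs = env.reset()

for _ in range(200):
    action = env.action_space.sample()           # random policy, just to see the window
    obs, reward, terminated, truncated, info = env.step(action)
    env.render()                                 # with gym 0.26 this only prints a warning, no window opens
    if terminated or truncated:
        obs = env.reset()

env.close()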
PyCharm and Spyder behave the same way.
In Google Colab I run the same code again and get the same result: no rendered output.
Try the code below. It trains a PPO model and saves it to a folder specified in the code (this part can also run in Google Colab).
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3.common.evaluation import evaluate_policy
import os

environment_name = "CartPole-v0"
env = gym.make(environment_name)
env = DummyVecEnv([lambda: env])           # wrap in a vectorized env for SB3

model = PPO('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=5000)          # train

ppo_path = os.path.join('Cartpole_model')  # saved as Cartpole_model.zip
model.save(ppo_path)
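Optionally, you can check the trained model with the evaluate_policy helper already imported above (a short sketch; the 10-episode count is just an example):

mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean reward: {mean_reward:.1f} +/- {std_reward:.1f}")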
After that, check the result by loading the saved model. (To see the render you have to run the code below on your own PC; if you trained in Google Colab, download the saved model first.)
# run this as a separate script on your own PC
import os
import gym
from stable_baselines3 import PPO

environment_name = "CartPole-v0"                        # env
env = gym.make(environment_name, render_mode="human")   # render_mode is required for a window with gym >= 0.26

ppo_path = os.path.join('Cartpole_model')
model = PPO.load(ppo_path, env)                         # load model

obs, info = env.reset()                                 # gym >= 0.26: reset() returns (obs, info)
while True:
    action, _states = model.predict(obs)
    obs, reward, terminated, truncated, info = env.step(action)  # gym >= 0.26: step() returns 5 values
    env.render()
    if terminated or truncated:
        break
env.close()
(The rendered output won't work in Google Colab. Even if you somehow link VS Code with Google Colab, it won't show the render, but it can still train the model!)
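If you trained in Colab, one way to get the saved model onto your PC is the files helper from google.colab (a sketch, assuming the default Cartpole_model.zip file name produced by model.save):

from google.colab import files
files.download('Cartpole_model.zip')   # downloads the saved model to your local machine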