I am trying to solve the Farama Gymnasium-Robotics Fetch environments, specifically the "FetchReachDense-v3" problem. When running the simulation, the base of the robotic arm appears to be misplaced:
Firstly, this looks wrong: it does not match the gymnasium-robotics documentation or other example solutions of this problem. Secondly, I think it breaks the task itself, since I'm not sure the arm can even reach some of the goal positions from that base position.
Running the following code reproduces the problem (at least for me):
import gymnasium as gym
import gymnasium_robotics
import numpy as np

# Creating environment
gym.register_envs(gymnasium_robotics)
env = gym.make("FetchReachDense-v3", render_mode="human")
observation, info = env.reset(seed=42)

print("Simulating with completely random actions")

# loop
summed_reward = 0
for _ in range(2000):
    action = np.random.uniform(-1, 1, 4)  # random action
    observation, reward, terminated, truncated, info = env.step(action)  # calculate next step of the simulation
    summed_reward += reward  # sum rewards
    if terminated or truncated:
        print("summed reward =", summed_reward)
        summed_reward = 0
        observation, info = env.reset()

env.close()
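As a side note, the random action could also be drawn directly from the action space instead of hard-coding the bounds (just an alternative, unrelated to the problem itself):

action = env.action_space.sample()  # uniform sample within the environment's action bounds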
There also seems to be something else wrong. When I instead run 'FetchPickAndPlaceDense-v3' (where the base is also misplaced), no object to be picked up is generated, which makes the whole environment quite useless.
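One way I check this programmatically (a minimal sketch, assuming the standard goal-based observation dict, where "achieved_goal" holds the object's position for the pick-and-place task):

import gymnasium as gym
import gymnasium_robotics

gym.register_envs(gymnasium_robotics)
env = gym.make("FetchPickAndPlaceDense-v3")
observation, info = env.reset(seed=42)

# The Fetch tasks use a goal-based observation dict; for pick-and-place,
# "achieved_goal" should be the object's current Cartesian position.
print("achieved_goal (object position):", observation["achieved_goal"])
print("desired_goal (target position):", observation["desired_goal"])
env.close()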
I run python==3.11 with gymnasium==1.0.0, gymnasium-robotics==1.3.1, mujoco==3.2.7, and numpy==2.2.1. The problem also occurred with numpy==2.1.3 and with an older mujoco version (I'm not sure about the exact version anymore).
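For reference, this is how I print the installed versions (a small sketch using importlib.metadata from the standard library):

from importlib.metadata import version

# Print the versions of the relevant packages to rule out mismatches.
for pkg in ("gymnasium", "gymnasium-robotics", "mujoco", "numpy"):
    print(pkg, version(pkg))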
Do you have any ideas what I'm doing wrong here? Or is this actually an issue with gymnasium-robotics?
Thanks in advance and if you need any other info, let me know.
I found the problem. Apparently the gymnasium-robotics version (1.3.1) that is available through pip / PyPI only provides versions v1 and v3 of the FetchReach environments (for me, it was not possible to run v2 even though it is mentioned in the documentation). When I went through the code on GitHub, I saw that there is also a version 4 of this environment:
Version History
* v4: Fixed bug where initial state did not match initial state description in documentation. Fetch environments' initial states after reset now match the documentation (related [GitHub issue](https://github.com/Farama-Foundation/Gymnasium-Robotics/issues/251)).
* v3: Fixed bug: `env.reset()` not properly resetting the internal state. Fetch environments now properly reset their state (related [GitHub issue](https://github.com/Farama-Foundation/Gymnasium-Robotics/issues/207)).
* v2: the environment depends on the newest [mujoco python bindings](https://mujoco.readthedocs.io/en/latest/python.html) maintained by the MuJoCo team in DeepMind.
* v1: the environment depends on `mujoco_py`, which is no longer maintained.
With the pip installation, I was not able to run version 4. After installing directly from GitHub (weirdly, also version 1.3.1 of gymnasium-robotics) I can run version 4 and the problem is fixed.
To install from GitHub, run the following commands in bash:
git clone https://github.com/Farama-Foundation/Gymnasium-Robotics.git
cd Gymnasium-Robotics
pip install -e .
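After that, the v4 environment can be created just like before, only with the v4 id (a minimal sketch, assuming the API is otherwise unchanged):

import gymnasium as gym
import gymnasium_robotics

gym.register_envs(gymnasium_robotics)
# v4 fixes the initial state after reset, so the arm base should now be placed correctly.
env = gym.make("FetchReachDense-v4", render_mode="human")
observation, info = env.reset(seed=42)
env.close()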