Is there an example somewhere that tells you how to visualise the RGBD information coming out of the RGBD sensor? Furthermore, is there a way of getting a more detailed rendering of the scene than meshcat? I am happy with using meshcat to debug and analyse kinematics and dynamics. However, if I want to use the RGBD sensor information to train a model, I was hoping to get something more realistic. Any pointers?
First, some good news: a tutorial is currently in development to help guide you in producing better images.
The better news: even without the tutorial, the functionality is already in place for you to produce better images. You'll want to do two things to make pretty pictures:
- Make sure you have good models (Drake prefers glTF models with PBR materials). Failing that, you can still use simple primitives and OBJ meshes and switch the lighting model to PBR (physically-based rendering).
- Configure `RenderEngineVtk` appropriately (see the sketch after this list):
  - Add lights.
  - Enable shadows, etc.
  - Add an environment map.
  - Balance environment map intensity, light intensity, and camera exposure.
- (The in-work tutorial is all about teaching how best to go about this kind of configuration.)
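For concreteness, here is a rough pydrake sketch of that kind of configuration. The parameter fields shown (`lights`, `cast_shadows`, `environment_map`, `exposure`) and the `.hdr` path are illustrative and assume a recent Drake release, so check the `RenderEngineVtkParams` documentation for your version:

```python
# A minimal sketch of configuring RenderEngineVtk for prettier images.
from pydrake.geometry import (
    EnvironmentMap,
    EquirectangularMap,
    LightParameter,
    MakeRenderEngineVtk,
    RenderEngineVtkParams,
)
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.systems.framework import DiagramBuilder

builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=1e-3)

params = RenderEngineVtkParams(
    # Replace the default head-lamp with an explicit directional light.
    lights=[LightParameter(type="directional", frame="world",
                           direction=[0.0, 0.5, -1.0], intensity=3.0)],
    # Turn on shadow casting and give the shadow map enough resolution.
    cast_shadows=True,
    shadow_map_size=2048,
    # An equirectangular HDR environment map adds realistic ambient lighting.
    # (The path here is just a placeholder.)
    environment_map=EnvironmentMap(
        skybox=True,
        texture=EquirectangularMap(path="/path/to/studio_env.hdr")),
    # Exposure balances the overall brightness of the final image.
    exposure=0.4,
)
scene_graph.AddRenderer("vtk_pbr", MakeRenderEngineVtk(params))
# Any RgbdSensor / CameraConfig that names "vtk_pbr" as its renderer will
# now use these settings.
```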
Finally, to see the images, you have two primary options:
- (If you're on Linux -- not macOS) Configure `RenderEngineVtkParams` to use `backend = "GLX"` and set your `CameraConfig` to have `show_rgb = true`. This will cause a window to pop up which displays the rendering frame buffer (see the first sketch below).
  - Note: if you're rendering multiple cameras, this will thrash the buffer, so it works best with a single camera.
  - If you have shadows, this window will be square even if your camera aspect ratio is rectangular. This won't affect the actual images created in simulation; it's merely an artifact of working around a VTK bug.
- Attach an image writer (e.g., the `ImageWriter` system) to your `RgbdSensor` in your diagram (see the second sketch below).
  - This will allow you to simply dump images to disk at an arbitrary rate. It's not interactive, but you get a clear record of what the sensors produced.
  - If you're dumping out depth or label images, you might want to run the sensor's output through a `ColorizeDepthImage` or `ColorizeLabelImage` system, respectively, before writing to disk.
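For the first option, here is a rough pydrake sketch (Linux only). The `CameraConfig` / `ApplyCameraConfig` usage and the `backend` field assume a recent Drake release:

```python
# A minimal sketch of popping up a preview window for the rendered frames.
from pydrake.geometry import RenderEngineVtkParams
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.systems.framework import DiagramBuilder
from pydrake.systems.sensors import ApplyCameraConfig, CameraConfig

builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=1e-3)
# ... add your models here, then ...
plant.Finalize()

config = CameraConfig()
config.name = "preview"
config.renderer_name = "vtk_glx"
# Use the GLX backend so the frame buffer can be shown in a window.
config.renderer_class = RenderEngineVtkParams(backend="GLX")
config.show_rgb = True  # Pop up a window showing the rendered color image.
ApplyCameraConfig(config=config, builder=builder)
```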
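For the second option, here is a rough sketch of wiring an `ImageWriter` (plus a `ColorizeDepthImage`) to an existing `RgbdSensor`. It assumes `builder` is your `DiagramBuilder` and `rgbd_sensor` is already in the diagram; the output directory and rates are just placeholders:

```python
# A minimal sketch of dumping color and colorized-depth images to disk.
from pydrake.systems.sensors import ColorizeDepthImage, ImageWriter, PixelType

writer = builder.AddSystem(ImageWriter())

# Color images can be written directly from the sensor.
# (The /tmp/rgbd directory must already exist.)
color_port = writer.DeclareImageInputPort(
    pixel_type=PixelType.kRgba8U,
    port_name="color",
    file_name_format="/tmp/rgbd/color_{count}",
    publish_period=0.1,  # Write an image every 0.1 s of sim time.
    start_time=0.0)
builder.Connect(rgbd_sensor.GetOutputPort("color_image"), color_port)

# Depth images are much easier to eyeball after colorization.
colorizer = builder.AddSystem(ColorizeDepthImage())
builder.Connect(rgbd_sensor.GetOutputPort("depth_image_32f"),
                colorizer.GetInputPort("depth_image_32f"))
depth_port = writer.DeclareImageInputPort(
    pixel_type=PixelType.kRgba8U,  # The colorized depth comes out as RGBA.
    port_name="depth",
    file_name_format="/tmp/rgbd/depth_{count}",
    publish_period=0.1,
    start_time=0.0)
builder.Connect(colorizer.GetOutputPort("color_image"), depth_port)
```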