Tags: c++, computer-vision, realsense

Depth Values Don't Make Sense R200 Camera


I am running the tutorial found here: https://software.intel.com/en-us/articles/using-librealsense-and-opencv-to-stream-rgb-and-depth-data

It gets the depth values from the R200 using the following lines:

   cv::Mat depth16( _depth_intrin.height, _depth_intrin.width, CV_16U,(uchar *)_rs_camera.get_frame_data( rs::stream::depth ) );
   cv::Mat depth8u = depth16;
   depth8u.convertTo( depth8u, CV_8UC1, 255.0/1000 );
   imshow( WINDOW_DEPTH, depth8u );

And the output image stream is:

https://i.sstatic.net/IapRX.jpg

You can see the color image as well. I've also put a tape measure across the bottom that extends to 3.5 m (the range of the R200 is supposed to be up to 3.5 m).

Why on earth is the output binary? I've tried different color conversions, but the result doesn't seem to contain depth values at all. It also makes no sense that the floor is consistently black even though it spans distances from 1 m to 5 m. Why are all the objects white? The table and the couch are obviously at different distances.

How can I improve this? I know you can get good depth values from the R200, as I get them in the examples (see http://docs.ros.org/kinetic/api/librealsense/html/cpp-capture_8cpp_source.html), but those use GLFW rather than OpenCV. I'm wondering why the depth values are so odd once they've been converted.

Ideally, I would like to generate depth values and filter out any outside the range of 1 m to 2 m. Thanks!


Solution

  • Edit: As @MSalters pointed out, the first half of my answer was erroneous, the result of my misreading the OP's code. The second half contains the right answer.

    If your depth range is 1-3.5 m, measured in millimetres (1000-3500 mm), then dividing the result by 1000 will give you data in the range 1.0-3.5. However, your source data is a 16-bit unsigned type, which can't represent decimal or floating-point values, only integers, so your values would get truncated to one of {0, 1, 2, 3}. You might get away with this in convertTo, as it may marshal the types internally, but it's a potential source of error.

    There is a second problem, though: CV_8U is an 8-bit unsigned char, which can also only represent integer values, this time in the range 0-255. Since your data can be in the range 0-3500, multiplying by 0.255 (i.e. 255.0/1000) as you do in your example pushes anything deeper than 1000 mm past 255, and the conversion saturates it there. That is why everything beyond 1 m appears white.
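    To see the saturation concretely, here is a minimal standalone sketch (the depth values are made up for illustration; only a standard OpenCV install is assumed):

        #include <opencv2/core.hpp>
        #include <iostream>

        int main()
        {
            // convertTo() multiplies in floating point, then saturate-casts
            // the result into the destination type, clamping at 255 for CV_8U.
            const double scale = 255.0 / 1000.0;          // the scale factor from the question
            for ( int mm : { 500, 1000, 2000, 3500 } )    // hypothetical depths in mm
            {
                uchar out = cv::saturate_cast<uchar>( mm * scale );
                std::cout << mm << " mm -> " << int( out ) << "\n";
            }
            // Prints: 500 mm -> 128, 1000 mm -> 255, 2000 mm -> 255, 3500 mm -> 255
            return 0;
        }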

    Instead of converting the raw depth image as above, you could use the cv::normalize function with the NORM_MINMAX normalisation type to normalise your data down to the 0-255 range. You can set the destination image format to CV_8U in the same call.
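    A minimal sketch of that approach, reusing depth16 and WINDOW_DEPTH from the question:

        cv::Mat depth8u;
        // Stretch whatever min/max this frame happens to contain to 0-255,
        // converting to 8-bit in the same call.
        cv::normalize( depth16, depth8u, 0, 255, cv::NORM_MINMAX, CV_8U );
        cv::imshow( WINDOW_DEPTH, depth8u );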

    This is probably only suitable for visualisation, though, as the mapping will change with the input range of each frame. Instead, if you know your maximum value is 3500 and your minimum is 0, divide the source image by 3500 and multiply by 255 for a fixed mapping. That said, where possible it's probably best to keep the data in 16-bit format for the sake of depth resolution.
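    A sketch of that fixed mapping, again reusing depth16; treating 3500 mm as the maximum is an assumption based on the R200's advertised range:

        cv::Mat depth8u;
        // Fixed scale: 0 mm -> 0 and 3500 mm -> 255, independent of frame content.
        depth16.convertTo( depth8u, CV_8UC1, 255.0 / 3500.0 );

        // For the 1-2 m band the question asks about, threshold the raw
        // 16-bit image (values in mm) before any lossy 8-bit conversion.
        cv::Mat mask;
        cv::inRange( depth16, cv::Scalar( 1000 ), cv::Scalar( 2000 ), mask );

    Keeping the range filter on the raw 16-bit data avoids the precision loss of the 8-bit conversion entirely.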