Tags: opencv, ros, point-cloud-library, robotics, gazebo-simu

How can I find the position of a "bounding boxed" object with lidar and camera?


This question is related to my final project. In a Gazebo simulation environment, I am trying to detect obstacles' colors and calculate the distance between the robot and the obstacles. I am currently identifying their colors with the help of OpenCV methods (each object gets a bounding box), but I don't know how to calculate the distance between the robot and each obstacle. I have my robot's position. I will not use stereo, and I know the size of the obstacles. Waiting for your suggestions and ideas. Thank you!

My robot's topics :

  • cameras/camera/camera_info (Type: sensor_msgs/CameraInfo)
  • cameras/camera/image_raw (Type: sensor_msgs/Image)
  • sensors/lidars/points (Type: sensor_msgs/PointCloud2)
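Whatever approach you take, you will need the camera intrinsics, which the `cameras/camera/camera_info` topic already publishes. A minimal sketch of unpacking them (the `K` field of `sensor_msgs/CameraInfo` is a flat row-major 9-element array; the numbers in the usage example below are made-up values, not your robot's calibration):

```python
import numpy as np

def intrinsics_from_camera_info(k_flat):
    """Build the 3x3 intrinsic matrix from the flat K field of a
    sensor_msgs/CameraInfo message and pull out the usual parameters."""
    K = np.array(k_flat, dtype=float).reshape(3, 3)
    fx, fy = K[0, 0], K[1, 1]   # focal lengths in pixels
    cx, cy = K[0, 2], K[1, 2]   # principal point
    return K, (fx, fy, cx, cy)

# Example with made-up calibration values:
K, (fx, fy, cx, cy) = intrinsics_from_camera_info(
    [500.0, 0.0, 320.0,
     0.0, 500.0, 240.0,
     0.0, 0.0, 1.0])
```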

Solution

  • You can project the point cloud into image space, e.g., with OpenCV (as shown here). That way, you can keep only the points that fall within the bounding box in image space. Of course, projection errors caused by the offset between the two sensors need to be addressed, e.g., by removing the lower and upper quartiles of the points' distances to the LiDAR sensor. You can then estimate the distance from the remaining points.

    We have such a system running and it works just fine.