I have a drone in a Gazebo environment with a RealSense D435 camera on it. My plan is to use YOLO to find the center of an object of interest, and then look up the depth of that point in the depth image. I had heard that the depth camera outputs an image where the depth values are encoded in the RGB values, and while looking this up online I found the pyrealsense2 library, which has functions for everything I need.
The implementations I've seen online require you to create a pyrealsense2.pipeline() and get your frames from that. The issue is that this only seems to work with a physical RealSense camera connected to your computer. Since mine exists in the Gazebo environment, I need a way to get and use the depth frame in a ROS callback. How would I do this? Any pointers would be greatly appreciated.
Yeah, you can do this with the help of a ROS subscriber, as follows (most of the code was taken from here):
import rospy
from sensor_msgs.msg import Image as msg_Image
from cv_bridge import CvBridge, CvBridgeError
import sys

class ImageListener:
    def __init__(self, topic):
        self.topic = topic
        self.bridge = CvBridge()
        self.sub = rospy.Subscriber(topic, msg_Image, self.imageDepthCallback)

    def imageDepthCallback(self, data):
        try:
            # Convert the ROS Image message to an OpenCV/NumPy array,
            # keeping the original encoding (e.g. 16UC1 or 32FC1).
            cv_image = self.bridge.imgmsg_to_cv2(data, data.encoding)
            # Use integer division so the pixel indices are ints.
            pix = (data.width // 2, data.height // 2)
            # NumPy arrays are indexed [row, column], i.e. [y, x].
            # Note: the units depend on the encoding; 16UC1 depth images
            # are typically in millimeters, 32FC1 in meters.
            sys.stdout.write('%s: Depth at center(%d, %d): %f(mm)\r' % (self.topic, pix[0], pix[1], cv_image[pix[1], pix[0]]))
            sys.stdout.flush()
        except CvBridgeError as e:
            print(e)
            return

if __name__ == '__main__':
    rospy.init_node("depth_image_processor")
    topic = '/camera/depth/image_rect_raw'  # check the depth image topic in your Gazebo environment and replace this with yours
    listener = ImageListener(topic)
    rospy.spin()
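Since your end goal is to read the depth at the pixel YOLO gives you rather than at the image center, the callback only needs a small change. Here is a minimal sketch; the helper name depth_at_pixel and the assumption that YOLO hands you a (u, v) pixel are mine, and the unit conversion depends on the image encoding as noted above:

def depth_at_pixel(cv_image, encoding, u, v):
    # cv_image comes from bridge.imgmsg_to_cv2(data, data.encoding);
    # (u, v) is the (column, row) of your YOLO detection center.
    depth = cv_image[int(v), int(u)]  # NumPy indexing is [row, col]
    if encoding == '16UC1':
        return float(depth) / 1000.0  # millimeters -> meters
    elif encoding == '32FC1':
        return float(depth)           # already in meters
    else:
        raise ValueError('Unexpected depth encoding: %s' % encoding)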
Note: To install CvBridge, you may follow the instructions below:
sudo apt-get install ros-<ros-distro>-cv-bridge
sudo apt-get install ros-<ros-distro>-vision-opencv
(replace <ros-distro> with your ROS distribution, e.g. noetic)
More information: http://wiki.ros.org/cv_bridge
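If you later want the full 3D point (what rs2_deproject_pixel_to_point would give you with pyrealsense2), you can get the camera intrinsics from the matching CameraInfo topic and use image_geometry instead. A rough sketch, assuming the camera_info topic name matches your Gazebo camera (check with rostopic list) and that the example pixel and depth come from your detection:

import rospy
from sensor_msgs.msg import CameraInfo
from image_geometry import PinholeCameraModel

rospy.init_node('depth_to_point_example')

# Adjust the topic to whatever your Gazebo camera publishes.
info = rospy.wait_for_message('/camera/depth/camera_info', CameraInfo)
model = PinholeCameraModel()
model.fromCameraInfo(info)

u, v, depth = 320, 240, 1.5  # example YOLO center pixel and its depth in meters
ray = model.projectPixelTo3dRay((u, v))         # unit ray through the pixel
point = tuple(c * depth / ray[2] for c in ray)  # scale so z equals the measured depth
print('3D point in the camera frame: (%.3f, %.3f, %.3f)' % point)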