I am currently working on a problem where I have created a uint16 image of type CV_16UC1 from Velodyne data, in which, say, 98% of the pixels are black (value 0) and the remaining pixels carry metric depth information (the distance to that point). These pixels correspond to the Velodyne points from the cloud.
cv::Mat depthMat = cv::Mat::zeros(frame.size(), CV_16UC1);
depthMat = ... // here the matrix is filled
If I try to display this image I get this:
In the image you can see that the brightest (white) pixels correspond to the pixels with the greatest depth. From this I need to get a denser depth image, or something that would resemble a proper depth image like in the example shown in this video:
https://www.youtube.com/watch?v=4yZ4JGgLE0I
This would require proper interpolation and extrapolation of those points (the pixels of the 2D image), and this is where I am stuck: I am a beginner when it comes to interpolation techniques. Does anyone know how this can be done, or can at least point me to a working solution or example algorithm for creating a depth map from sparse data?
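For context, the simplest form of such interpolation is a nearest-neighbour fill: every empty pixel takes the depth of the closest known sample. Below is a minimal sketch on plain arrays rather than cv::Mat; the `nearestFill` helper and its brute-force search are illustrative assumptions only, not a production-ready approach (a distance transform or k-d tree would be used in practice):

```cpp
#include <cstdint>
#include <limits>
#include <vector>

// Fill every zero pixel of a row-major w*h depth grid with the depth of
// the nearest non-zero sample (squared Euclidean distance, brute force).
std::vector<uint16_t> nearestFill(const std::vector<uint16_t>& src, int w, int h) {
    struct Pt { int x, y; uint16_t d; };
    std::vector<Pt> samples;                       // collect the known depth samples
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (src[y * w + x] != 0) samples.push_back({x, y, src[y * w + x]});

    std::vector<uint16_t> dst(src);
    if (samples.empty()) return dst;               // nothing to interpolate from

    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            if (dst[y * w + x] != 0) continue;     // keep measured pixels as-is
            long best = std::numeric_limits<long>::max();
            uint16_t bestDepth = 0;
            for (const Pt& p : samples) {          // nearest known sample wins
                long dx = p.x - x, dy = p.y - y;
                long d2 = dx * dx + dy * dy;
                if (d2 < best) { best = d2; bestDepth = p.d; }
            }
            dst[y * w + x] = bestDepth;
        }
    return dst;
}
```

This produces piecewise-constant (Voronoi-like) patches; smoother results would need inverse-distance weighting or proper inpainting.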
I tried the following from the Kinect examples, but it did not change the output:
cv::Mat depthf;
depthMat.convertTo(depthf, CV_8UC1, 255.0 / 65535.0); // scale 16-bit depth down to 8-bit
const unsigned char noDepth = 255;
cv::Mat small_depthf, temp, temp2;
cv::resize(depthf, small_depthf, cv::Size(), 0.01, 0.01);
cv::inpaint(small_depthf, (small_depthf == noDepth), temp, 5.0, cv::INPAINT_TELEA);
cv::resize(temp, temp2, depthf.size());
temp2.copyTo(depthf, (depthf == noDepth));
cv::imshow("window",depthf);
cv::waitKey(3);
I managed to get the desired output (something that resembles a depth image) simply by using dilation on the sparse depth image:
cv::Mat result;
cv::dilate(depthMat, result, cv::Mat(), cv::Point(-1, -1), 10, 1, 1); // default 3x3 kernel, 10 iterations
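For what it's worth, the reason dilation densifies the image can be seen on a toy grid: grayscale dilation replaces each pixel with the maximum over its neighbourhood, so each isolated depth sample grows into a filled blob. A minimal plain-C++ sketch, with `dilate3x3` as a hypothetical helper mimicking a single 3x3 dilation pass (real code would just call cv::dilate as above):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One pass of 3x3 grayscale dilation on a row-major w*h grid: each output
// pixel becomes the maximum of its 3x3 neighbourhood (clamped at borders).
std::vector<uint16_t> dilate3x3(const std::vector<uint16_t>& src, int w, int h) {
    std::vector<uint16_t> dst(src.size(), 0);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            uint16_t m = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int ny = y + dy, nx = x + dx;
                    if (ny >= 0 && ny < h && nx >= 0 && nx < w)
                        m = std::max(m, src[ny * w + nx]);
                }
            dst[y * w + x] = m;
        }
    return dst;
}
```

After one pass, a single non-zero sample has spread to a 3x3 blob; ten iterations (as in the dilate() call above) grow each sample to roughly a 21x21 patch, which is why the sparse Velodyne points merge into a dense-looking depth image. The trade-off is that dilation keeps the locally largest depth rather than interpolating, so nearby returns at different depths get overwritten by the farther one.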