Tags: c++, opencv, disparity-mapping

Depth/Disparity Map from a moving camera in OpenCV


Is it possible to get a depth/disparity map from a moving camera? Say I capture an image at location x, then travel about 5 cm and capture another image, and from those two images I calculate the depth map.

I have tried using block matching (StereoBM) in OpenCV, but the result is not good. The first and second images, along with the results, are the following: first image, second image, disparity map (colour), disparity map.

My code is as follows:

    // Upload the input images to the GPU
    GpuMat leftGPU, rightGPU;
    leftGPU.upload(left);
    rightGPU.upload(right);

    GpuMat disparityGPU;    // raw disparity
    GpuMat disparityGPU2;   // colour-coded disparity
    Mat disparity, disparity2;

    // Block matcher: 256 disparities, block size 3
    Ptr<cuda::StereoBM> stereo = cuda::createStereoBM(256, 3);
    stereo->setMinDisparity(-39);
    stereo->setPreFilterCap(61);
    stereo->setPreFilterSize(3);
    stereo->setSpeckleRange(1);
    stereo->setUniquenessRatio(0);

    stereo->compute(leftGPU, rightGPU, disparityGPU);
    cuda::drawColorDisp(disparityGPU, disparityGPU2, 256);

    // Download the results and display the downloaded Mat
    disparityGPU.download(disparity);
    disparityGPU2.download(disparity2);
    imshow("display img", disparity);

How can I improve on this? In the colour disparity map there are quite a lot of errors (e.g. the tall circular object is shown in red, the same colour as parts of the table). Also, the disparity map contains a lot of small noise (all the black dots in the picture); how can I fill those black dots with nearby disparities?


Solution

  • It is possible if the object is static.

    To properly do stereo matching, you first need to rectify your images! If you don't have calibrated cameras, you can do this from detected feature points (a sketch of this is given at the end of this answer). Also note that for cuda::StereoBM the default minimum disparity is 0. (I have never used cuda, but I don't think your setMinDisparity is doing anything; see this answer.)

    Now, in your example images corresponding points are only about 1 row apart, so your disparity map actually doesn't look too bad. Maybe a larger blockSize alone would already help in this special case (see the second sketch below).

    Finally, your objects have very little texture, so the block matching algorithm cannot find many reliable matches.
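
    Below is a minimal sketch of rectification from feature points. It is an illustration rather than a tuned implementation: it assumes two grayscale cv::Mat images named left and right, detects ORB features, matches them, estimates the fundamental matrix with RANSAC, and warps both images with the homographies from stereoRectifyUncalibrated.

        #include <opencv2/opencv.hpp>
        #include <vector>

        // Rectify an uncalibrated image pair using matched feature points.
        void rectifyFromFeatures(const cv::Mat& left, const cv::Mat& right,
                                 cv::Mat& leftRect, cv::Mat& rightRect)
        {
            // 1. Detect and describe features in both images
            cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
            std::vector<cv::KeyPoint> kp1, kp2;
            cv::Mat desc1, desc2;
            orb->detectAndCompute(left,  cv::noArray(), kp1, desc1);
            orb->detectAndCompute(right, cv::noArray(), kp2, desc2);

            // 2. Match descriptors (cross-check rejects weak matches)
            cv::BFMatcher matcher(cv::NORM_HAMMING, true);
            std::vector<cv::DMatch> matches;
            matcher.match(desc1, desc2, matches);

            std::vector<cv::Point2f> pts1, pts2;
            for (const cv::DMatch& m : matches) {
                pts1.push_back(kp1[m.queryIdx].pt);
                pts2.push_back(kp2[m.trainIdx].pt);
            }

            // 3. Fundamental matrix with RANSAC; keep only the inliers
            std::vector<uchar> inlierMask;
            cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC, 3.0, 0.99, inlierMask);
            if (F.empty()) return;  // not enough good matches

            std::vector<cv::Point2f> in1, in2;
            for (size_t i = 0; i < inlierMask.size(); ++i)
                if (inlierMask[i]) { in1.push_back(pts1[i]); in2.push_back(pts2[i]); }

            // 4. Homographies that make the epipolar lines horizontal
            cv::Mat H1, H2;
            cv::stereoRectifyUncalibrated(in1, in2, F, left.size(), H1, H2);

            // 5. Warp both images into the rectified frames
            cv::warpPerspective(left,  leftRect,  H1, left.size());
            cv::warpPerspective(right, rightRect, H2, right.size());
        }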
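
    For the larger blockSize, a hedged variation of the question's matcher setup could look like this, reusing leftGPU and rightGPU from the question's code. The value 21 is an illustrative guess, and I have not verified which setters of the cv::StereoBM interface the CUDA implementation actually honours.

        // Same pipeline as in the question, but with a larger matching window;
        // tiny 3x3 blocks on low-texture surfaces tend to produce speckled output.
        cv::Ptr<cv::cuda::StereoBM> stereo = cv::cuda::createStereoBM(256, 21);

        cv::cuda::GpuMat dispGPU, dispColorGPU;
        stereo->compute(leftGPU, rightGPU, dispGPU);
        cv::cuda::drawColorDisp(dispGPU, dispColorGPU, 256);

        cv::Mat disp, dispColor;
        dispGPU.download(disp);
        dispColorGPU.download(dispColor);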
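
    As for the black dots mentioned in the question, one common post-processing step, sketched here as an illustration with guessed parameter values, is to invalidate small inconsistent blobs with filterSpeckles and then fill isolated holes from their neighbourhood with a median filter, starting from the CV_8UC1 disparity Mat downloaded in the question's code.

        // 'disparity' is the CV_8UC1 map downloaded from cuda::StereoBM above.

        // 1. Mark small inconsistent blobs as invalid (0). filterSpeckles is
        //    documented for 16-bit signed disparity images, so convert first.
        cv::Mat disp16;
        disparity.convertTo(disp16, CV_16SC1);
        cv::filterSpeckles(disp16, /*newVal=*/0, /*maxSpeckleSize=*/200, /*maxDiff=*/2);

        // 2. Fill isolated black dots with a nearby disparity: a median filter
        //    replaces each pixel by the median of its neighbourhood.
        cv::Mat disp8, filled;
        disp16.convertTo(disp8, CV_8UC1);
        cv::medianBlur(disp8, filled, 5);  // 5x5 window; larger windows fill bigger holes but blur edges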