Tags: 3d, kinect-sdk, kinect-v2, camera-projection

Kinect v2, projection of 3D point cloud into color image


I am using a Kinect v2 to capture a 3D point cloud and its corresponding color image. In order to get a proper projection of a 3D model into this color image, I need to compute a valid projection matrix from camera space to image space. Since the Kinect v2 SDK provides no calibration information for the RGB camera, I found that there is a method called MapCameraPointsToColorSpace in the CoordinateMapper class.
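
A minimal sketch of that call, assuming an already initialized ICoordinateMapper from the Kinect v2 C++ SDK (the wrapper function and variable names here are illustrative, not part of the SDK):

    #include <Kinect.h>
    #include <vector>

    // Map every camera-space (3D) point of the cloud to a color-image pixel.
    // cloud: points in meters, in the Kinect camera coordinate frame.
    std::vector<ColorSpacePoint> mapCloudToColor(ICoordinateMapper* mapper,
                                                 const std::vector<CameraSpacePoint>& cloud)
    {
        std::vector<ColorSpacePoint> pixels(cloud.size());
        HRESULT hr = mapper->MapCameraPointsToColorSpace(
            static_cast<UINT>(cloud.size()), cloud.data(),
            static_cast<UINT>(pixels.size()), pixels.data());
        if (FAILED(hr))
            pixels.clear();   // mapping failed; caller must check for an empty result
        return pixels;
    }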

This method returns a lookup table that contains the correspondence between each 3D point in the cloud and an image pixel. From that table, I tried to compute the RGB camera intrinsic matrix (focal lengths, principal point, image scaling factors). But there are errors between the 2D points projected with the computed intrinsic matrix and the values in the lookup table. I think this error occurs because I didn't account for radial distortion. Am I right? Should I take radial distortion into account to get an exact mapping from 3D points to 2D color points through this lookup table?
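
For context, the kind of fit I mean is the plain pinhole model u ≈ fx·X/Z + cx, v ≈ fy·Y/Z + cy. A sketch of such a least-squares fit is below; it ignores distortion, and the struct and function names are made up for illustration:

    #include <Kinect.h>
    #include <cstddef>
    #include <vector>

    struct Intrinsics { float fx, fy, cx, cy; };

    // Least-squares fit of (fx, cx) from (X/Z, u) pairs and (fy, cy) from
    // (Y/Z, v) pairs, ignoring distortion. points3d and pixels must have the
    // same length. Depending on axis conventions, fx/fy may come out negative.
    Intrinsics fitPinhole(const std::vector<CameraSpacePoint>& points3d,
                          const std::vector<ColorSpacePoint>& pixels)
    {
        auto fitLine = [](const std::vector<float>& a, const std::vector<float>& b,
                          float& slope, float& offset)
        {
            // Ordinary least squares for b = slope * a + offset
            double n = (double)a.size(), sa = 0, sb = 0, sab = 0, saa = 0;
            for (std::size_t i = 0; i < a.size(); ++i) {
                sa += a[i]; sb += b[i]; sab += a[i] * b[i]; saa += a[i] * a[i];
            }
            slope  = (float)((n * sab - sa * sb) / (n * saa - sa * sa));
            offset = (float)((sb - slope * sa) / n);
        };

        std::vector<float> xz, yz, u, v;
        for (std::size_t i = 0; i < points3d.size(); ++i) {
            if (points3d[i].Z <= 0) continue;   // skip invalid / zero-depth points
            xz.push_back(points3d[i].X / points3d[i].Z);
            yz.push_back(points3d[i].Y / points3d[i].Z);
            u.push_back(pixels[i].X);
            v.push_back(pixels[i].Y);
        }

        Intrinsics k;
        fitLine(xz, u, k.fx, k.cx);
        fitLine(yz, v, k.fy, k.cy);
        return k;
    }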


Solution

  • Yes, you are correct. The raw Kinect RGB image is distorted. The best way is to first distort a blank image manually, using the RGB camera intrinsic and distortion parameters, and use the result as a lookup table; a sketch of filling such a table follows the code below.

    // Maps an undistorted pixel (mx, my) to its position (x, y) in the raw,
    // distorted image using the Brown-Conrady model. `depth` holds the camera
    // intrinsics and distortion coefficients (fx, fy: focal lengths; cx, cy:
    // principal point; k1-k3: radial terms; p1, p2: tangential terms).
    void distort(int mx, int my, float& x, float& y) const
    {
        // Normalized image coordinates relative to the principal point
        float dx = ((float)mx - depth.cx) / depth.fx;
        float dy = ((float)my - depth.cy) / depth.fy;
        float dx2 = dx * dx;
        float dy2 = dy * dy;
        float r2 = dx2 + dy2;
        float dxdy2 = 2 * dx * dy;
        // Radial distortion factor: 1 + k1*r^2 + k2*r^4 + k3*r^6
        float kr = 1 + ((depth.k3 * r2 + depth.k2) * r2 + depth.k1) * r2;
        // Apply radial and tangential distortion, then map back to pixels
        x = depth.fx * (dx * kr + depth.p2 * (r2 + 2 * dx2) + depth.p1 * dxdy2) + depth.cx;
        y = depth.fy * (dy * kr + depth.p1 * (r2 + 2 * dy2) + depth.p2 * dxdy2) + depth.cy;
    }
    

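A sketch of filling such a lookup table, assuming the function above is a member of the same class holding the intrinsics (the helper name is illustrative):

    #include <vector>

    // Build the per-pixel lookup table once: for each undistorted pixel
    // (mx, my), store the corresponding position in the raw (distorted) image.
    void buildDistortionMap(int width, int height,
                            std::vector<float>& mapX, std::vector<float>& mapY) const
    {
        mapX.resize((std::size_t)width * height);
        mapY.resize((std::size_t)width * height);
        for (int my = 0; my < height; ++my)
            for (int mx = 0; mx < width; ++mx)
                distort(mx, my, mapX[my * width + mx], mapY[my * width + mx]);
    }

With such a map, each undistorted pixel can be filled by sampling the raw color image at (mapX, mapY), so the projected 3D points and the color image line up.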