I'm currently trying to determine the relative position of two Kinect v2s by finding the position of a tracking pattern that both cameras can see. Unfortunately, I can't seem to get the correct position of the pattern's origin.
This is my current code to get the position of the pattern relative to the camera:
std::vector<cv::Point2f> centers;
cv::findCirclesGrid( registeredColor, m_patternSize, centers, cv::CALIB_CB_ASYMMETRIC_GRID );
// estimate the pattern pose from the known pattern geometry (m_corners)
// and the detected image positions
cv::solvePnPRansac( m_corners, centers, m_camMat, m_distCoeffs, m_rvec, m_tvec, true );
// calculate the rotation matrix
cv::Matx33d rotMat;
cv::Rodrigues( m_rvec, rotMat );
// and put it in the 4x4 transformation matrix
transformMat = matx3ToMatx4( rotMat );
for( int i = 0; i < 3; ++i )
    transformMat(i,3) = m_tvec.at<double>(i);
// invert to go from pattern-to-camera to camera-to-pattern
transformMat = transformMat.inv();
cv::Vec3f originPosition( transformMat(0,3), transformMat(1,3), transformMat(2,3) );
Unfortunately, when I compare originPosition to the point in the point cloud that corresponds to the pattern origin found in screen space (saved in centers.at(0) above), I get a very different result.
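For reference, the point-cloud lookup is essentially a standard pinhole back-projection of the registered depth image, along these lines (a simplified sketch, not the actual Kinect pipeline code; depthAt is just a placeholder for the depth lookup):

// Back-project a pixel with known depth (in meters) into camera space
// using the pinhole model. Illustrative only.
cv::Vec3f backproject( const cv::Point2f& px, float depth, const cv::Matx33d& camMat )
{
    const double fx = camMat(0,0), fy = camMat(1,1);
    const double cx = camMat(0,2), cy = camMat(1,2);
    return cv::Vec3f( (px.x - cx) * depth / fx,
                      (px.y - cy) * depth / fy,
                      depth );
}

// e.g. cloudPoint = backproject( centers.at(0), depthAt(centers.at(0)), m_camMat );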
The screenshot below shows the point cloud from the Kinect, with the point at the screen-space position of the pattern's origin in red (in the red circle) and the point at originPosition in light blue (in the light blue circle). The screenshot was taken from directly in front of the pattern; originPosition also lies a bit further to the front.
As you can see, the red dot sits perfectly in the first circle of the pattern, while the blue dot corresponding to originPosition is not even close. In particular, it is definitely not just a scaling issue in the vector from the camera to the origin. Also, findCirclesGrid is run on the registered color image, and the intrinsic parameters are taken from the camera itself, to ensure that there is no difference in those between the image and the computation of the point cloud.
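One sanity check for the intrinsics and the pose is to reproject the pattern origin into the image and compare it with the detection; if rvec/tvec and the camera matrix are consistent, the reprojection should land on centers.at(0). A quick sketch, reusing the variables from above:

// Project the pattern origin (0,0,0 in pattern space) back into the image.
std::vector<cv::Point3f> objOrigin{ cv::Point3f(0.f, 0.f, 0.f) };
std::vector<cv::Point2f> projected;
cv::projectPoints( objOrigin, m_rvec, m_tvec, m_camMat, m_distCoeffs, projected );
// Pixel distance to the detected origin; should be close to zero.
double reprojError = std::hypot( projected[0].x - centers.at(0).x,
                                 projected[0].y - centers.at(0).y );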
You have a transformation P -> P' given by R|T. To get the inverse transformation P' -> P, given by R'|T', just do:

R' = R.t();
T' = -R' * T;

and then

P = R' * P' + T'
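Written out with OpenCV types (a minimal sketch, reusing rotMat and m_tvec from your code; this is exactly what the 4x4 .inv() computes for a rigid transform):

// For a rotation matrix the inverse is the transpose: R' = R^T, T' = -R^T * T.
cv::Matx33d Rinv = rotMat.t();
cv::Vec3d T( m_tvec.at<double>(0), m_tvec.at<double>(1), m_tvec.at<double>(2) );
cv::Vec3d Tinv = -(Rinv * T);
// Map a camera-space point Pprime back into pattern space:
cv::Vec3d Pprime( 0.0, 0.0, 1.0 );   // example point, 1 m in front of the camera
cv::Vec3d P = Rinv * Pprime + Tinv;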