Tags: c#, opencv, computer-vision, augmented-reality

Using OpenCV to solve for the transform between non-planar object points and image points


QUESTION REPOSTED TO ADD CLARITY

I am working with OpenCV to try to calibrate a laser scanner.

I have a set of 2D points captured by the scanner. For the sake of the example, let these points be represented as follows:

IMAGE POINTS
{(0.08017784, -0.08993121, 0)}
{(-0.1127666, -0.08712908, 0)}
{(-0.1117229, 0.1782855, 0)}
{(0.09053531, 0.198439, 0)}

I know that these points correspond to the following real world points:

OBJECT POINTS
{(0, 0, 0)}
{(190, 0, 0)}
{(190, 260, 0)}
{(0, 260, 122)}

I have been using OpenCV to solve for a rotation and translation matrix that will allow me to give a world point (100, 200, 20, for example) and get back the 2D point in the captured coordinate system.

My results thus far have shown that if the object points are co-planar then OpenCV finds the rotation/translation results almost perfectly.

However, in problems like the example above, where not all the points lie on the same plane, I am getting wildly wrong answers.
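As a sanity check independent of OpenCV: since both point sets here are stored as 3D (the scanner points simply have z = 0), one can fit a best-fit rigid transform between them directly with the Kabsch algorithm (SVD of the cross-covariance). This is not the asker's setup, just a minimal numpy sketch of the alternative; the function name `rigid_transform` is illustrative:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that
    dst ~= R @ src + t (Kabsch algorithm)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    # Center both point clouds.
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    # Guard against a reflection (det = -1): flip the last axis.
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

A recovered pair `(R, t)` is applied to a point `p` as `R @ p + t`; note this finds the best *rigid* fit, so it will not reproduce a projective mapping exactly.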

I know that this is possible (not necessarily with OpenCV) because I have a commercial software package that can do it. For reference, the solution to the above problem is the matrix:

SOLUTION
[-0.99668, 0.03056, 0.07543]
[ 0.05860, 0.91263, 0.40454]
[-0.05647, 0.40762,-0.91140]
[79.34385, -89.63855,-982.25938]

I am using root mean square error to judge the validity of results. The RMS error for the solution above is 1.61560, while the result from OpenCV is over 1000.
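For concreteness, the RMS criterion above can be computed as the root of the mean squared distance between the transformed object points and the measured points. A small numpy sketch (the function name and argument layout are my own, not from the question; `R` is the 3x3 rotation, `t` the translation vector):

```python
import numpy as np

def rms_error(object_points, measured_points, R, t):
    """Root-mean-square distance between object points mapped
    through (R, t) and the corresponding measured points."""
    obj = np.asarray(object_points, float)
    meas = np.asarray(measured_points, float)
    predicted = obj @ R.T + t          # apply R @ p + t to every row
    residuals = predicted - meas
    return np.sqrt((residuals ** 2).sum(axis=1).mean())
```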

THE QUESTION:

Using the given IMAGE POINTS and OBJECT POINTS, how can one use OpenCV (or another method) to arrive at the SOLUTION?

What I have already tried:

I have tried the basic SolvePnP from OpenCV like so:

Cv2.SolvePnP(objectPoints, imagePoints, camMatrix, dist, out double[] rvec, out double[] tvec, false, SolvePnPFlags.Iterative);

Documentation on SolvePnP here

As stated above, this works when my object points are all planar, but with points on other planes the solution breaks down and is very wrong.

Thanks in advance!


Solution

  • (Sorry, I forgot to follow up on this.)

    Calculate the determinant of the rotation matrix. It should be +1 for a correct answer and -1 for the flipped one. Then simply multiply the rotation by an identity matrix whose last diagonal term is the determinant: this does nothing if det = +1, but flips the matrix back to the correct answer if det = -1.
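    The fix described above can be sketched in a few lines of numpy (the answer's context is C#, but the recipe is identical; the function name `fix_reflection` is illustrative):

```python
import numpy as np

def fix_reflection(R):
    """Apply the answer's recipe: multiply R by an identity matrix
    whose last diagonal term is det(R). If det(R) = +1 this is a
    no-op; if det(R) = -1 it flips the reflected solution back to
    a proper rotation."""
    # np.sign rounds a numerically imperfect +/-1 determinant.
    d = np.sign(np.linalg.det(R))
    return R @ np.diag([1.0, 1.0, d])
```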

    You might find the code/discussion on POSE ESTIMATION FOR PLANAR TARGET useful