Tags: python, opencv, computer-vision, structure-from-motion, reprojection-error

Why do triangulated points not project back to the same image points in OpenCV?


I have two corresponding image points (2D) observed by the same camera (intrinsic matrix K) from two different poses (R1, t1) and (R2, t2). If I triangulate the correspondence to a 3D point and then reproject it back into both cameras, it only closely matches the original image point in the first camera. Can someone help me understand why? Here is a minimal example showing the issue:

import cv2
import numpy as np

# Set up two cameras near each other

K = np.array([
    [718.856,    0.,     607.1928],
    [  0.,     718.856,  185.2157],
    [  0.,       0.,       1.    ],
])

R1 = np.array([
    [1., 0., 0.],
    [0., 1., 0.],
    [0., 0., 1.]
])

R2 = np.array([
    [ 0.99999183, -0.00280829, -0.00290702],
    [ 0.0028008,   0.99999276, -0.00257697],
    [ 0.00291424,  0.00256881,  0.99999245]
])

t1 = np.array([[0.], [0.], [0.]])

t2 = np.array([[-0.02182627], [ 0.00733316], [ 0.99973488]])
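
# Build projection matrices P = K [R^T | -R^T t]: (R, t) are camera-to-world
# poses, so R^T and -R^T t give the world-to-camera transform.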

P1 = np.hstack([R1.T, -R1.T.dot(t1)])
P2 = np.hstack([R2.T, -R2.T.dot(t2)])

P1 = K.dot(P1)
P2 = K.dot(P2)

# Corresponding image points
imagePoint1 = np.array([371.91915894, 221.53485107])
imagePoint2 = np.array([368.26071167, 224.86262512])

# Triangulate
point3D = cv2.triangulatePoints(P1, P2, imagePoint1, imagePoint2).T
point3D = point3D[:, :3] / point3D[:, 3:4]
print(point3D)

# Reproject back into the two cameras
rvec1, _ = cv2.Rodrigues(R1)
rvec2, _ = cv2.Rodrigues(R2)

p1, _ = cv2.projectPoints(point3D, rvec1, t1, K, distCoeffs=None)
p2, _ = cv2.projectPoints(point3D, rvec2, t2, K, distCoeffs=None)

# Measure the difference between the original and reprojected image points

reprojection_error1 = np.linalg.norm(imagePoint1 - p1[0, :])
reprojection_error2 = np.linalg.norm(imagePoint2 - p2[0, :])

print(reprojection_error1, reprojection_error2)

The reprojection error in the first camera is always small (< 1 px), but in the second camera it is always large.
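
For reference, projecting the triangulated point directly with the same P1 and P2 used for triangulation (a quick check reusing the variables above) should match both image points closely, which suggests the triangulation itself is fine:

# Project the homogeneous 3D point with the triangulation matrices: x ~ P [X; 1]
X = np.append(point3D[0], 1.0)
for P, uv in [(P1, imagePoint1), (P2, imagePoint2)]:
    x = P.dot(X)
    print(np.linalg.norm(uv - x[:2] / x[2]))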


Solution

  • Remember that cv2.projectPoints expects the world-to-camera extrinsics: it computes x ~ K (R X + t). The projection matrices above were built as P = K [R^T | -R^T t], so you must feed cv2.projectPoints the same quantities used to build P, not R and t directly.

    Therefore, take the transpose of the rotation matrix and pass it to cv2.Rodrigues, and supply the negated translation vector to cv2.projectPoints (here t1 is zero and R2 is nearly the identity, so -t is a very close stand-in for the exact -R^T t; see the note after the results):

    # Reproject back into the two cameras
    rvec1, _ = cv2.Rodrigues(R1.T) # Change: transpose, to match P1
    rvec2, _ = cv2.Rodrigues(R2.T) # Change: transpose, to match P2
    
    p1, _ = cv2.projectPoints(point3D, rvec1, -t1, K, distCoeffs=None) # Change: negate t1
    p2, _ = cv2.projectPoints(point3D, rvec2, -t2, K, distCoeffs=None) # Change: negate t2
    

    Doing this we now get:

    [[-12.19064      1.8813655   37.24711708]]
    0.009565768222768252 0.08597237597736622
    

    To be absolutely sure, here are the relevant variables:

    In [32]: p1
    Out[32]: array([[[371.91782052, 221.5253794 ]]])
    
    In [33]: p2
    Out[33]: array([[[368.3204979 , 224.92440583]]])
    
    In [34]: imagePoint1
    Out[34]: array([371.91915894, 221.53485107])
    
    In [35]: imagePoint2
    Out[35]: array([368.26071167, 224.86262512])
    

    We can see that the first few significant digits match. A small residual is expected: triangulation is a least-squares solve, so the recovered 3D point does not lie exactly on either back-projected ray and cannot reproject exactly onto both image points.
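
    Strictly speaking, negating t alone is only exact when R is the identity: the translation column of P above is -R^T t, not -t. R2 is close enough to the identity here that the difference is negligible (it may account for part of camera 2's slightly larger residual), but for larger rotations use the exact world-to-camera translation. A minimal sketch:

    # Exact extrinsics for a camera pose (R, t): x ~ K (R^T X - R^T t),
    # so rvec comes from R^T and tvec = -R^T t
    rvec2, _ = cv2.Rodrigues(R2.T)
    tvec2 = -R2.T.dot(t2)
    p2, _ = cv2.projectPoints(point3D, rvec2, tvec2, K, distCoeffs=None)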