c++, opencv, computer-vision, 3d-reconstruction

How to estimate the camera pose matrix of a new image I from a known 3D point cloud (built without I) using OpenCV


I have the following problem: given a 3D point cloud, the set of views V with known poses from which it was built, and a new view v ∉ V (i.e. with a completely unknown pose), how can I estimate the camera pose matrix of v without running the reconstruction again on V ∪ {v}?

I am trying to solve this in OpenCV 3.2, but any idea, intuition, or pseudocode you can provide would be very useful. Thanks!


Solution

  • Well, you obviously need to establish 2D–3D correspondences between the new view and the point cloud, e.g. by taking the image descriptors (SURF, ORB, ...) associated with the projections of the cloud points in the old images and matching them to interest points extracted in the new one.

    You can then go through the usual outlier-removal process, e.g. RANSAC with the 5- or 8-point algorithm. Once you have good correspondences, you can simply call solvePnP with the cloud points and their matched locations in the new image (a minimal sketch is given after this answer).

    Note that this is essentially what VSLAM algorithms do for all "new" images when there is no need to relocalize.
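
    For concreteness, here is a minimal sketch of that pipeline in OpenCV 3.x. It assumes you have kept, for each cloud point, a representative ORB descriptor taken from the old view where the point was observed; the names `cloudPoints`, `cloudDescriptors`, `K`, and `distCoeffs` are illustrative inputs, not part of any OpenCV API, and outlier rejection is done here with solvePnPRansac rather than an explicit 5/8-point step.

    ```cpp
    #include <opencv2/opencv.hpp>
    #include <vector>

    // Assumed inputs (illustrative, not an OpenCV API):
    //   cloudPoints      - 3D points of the cloud
    //   cloudDescriptors - one ORB descriptor per cloud point, taken from the
    //                      old view in which that point was observed
    //   K, distCoeffs    - intrinsics of the new camera
    cv::Mat estimatePose(const std::vector<cv::Point3f>& cloudPoints,
                         const cv::Mat& cloudDescriptors,
                         const cv::Mat& newImage,
                         const cv::Mat& K, const cv::Mat& distCoeffs)
    {
        // 1. Detect interest points and compute descriptors in the new view.
        cv::Ptr<cv::ORB> orb = cv::ORB::create();
        std::vector<cv::KeyPoint> keypoints;
        cv::Mat descriptors;
        orb->detectAndCompute(newImage, cv::noArray(), keypoints, descriptors);

        // 2. Match the cloud-point descriptors against the new image's descriptors.
        cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
        std::vector<cv::DMatch> matches;
        matcher.match(cloudDescriptors, descriptors, matches);

        // 3. Build 3D-2D correspondences from the matches.
        std::vector<cv::Point3f> objectPoints;
        std::vector<cv::Point2f> imagePoints;
        for (const cv::DMatch& m : matches) {
            objectPoints.push_back(cloudPoints[m.queryIdx]);
            imagePoints.push_back(keypoints[m.trainIdx].pt);
        }

        // 4. Robust pose estimation; RANSAC rejects the remaining outliers.
        cv::Mat rvec, tvec;
        std::vector<int> inliers;
        cv::solvePnPRansac(objectPoints, imagePoints, K, distCoeffs,
                           rvec, tvec, false, 100, 8.0f, 0.99, inliers);

        // 5. Assemble the 4x4 pose matrix [R | t; 0 0 0 1].
        cv::Mat R;
        cv::Rodrigues(rvec, R);
        cv::Mat pose = cv::Mat::eye(4, 4, CV_64F);
        R.copyTo(pose(cv::Rect(0, 0, 3, 3)));
        tvec.copyTo(pose(cv::Rect(3, 0, 1, 3)));
        return pose;
    }
    ```

    In practice you would also filter the matches (e.g. Lowe's ratio test with knnMatch instead of cross-check matching) and check that enough RANSAC inliers remain before trusting the pose.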