opencv · computer-vision · photogrammetry

How will the intrinsic matrix change if the dimensions of the 3D grid change?


I have calibrated pictures taken around an object with unknown dimensions and points. The problem is that the extrinsic calibration is not very accurate, because I don't know the exact distance between the camera and the object it turns around. I do know the real length of one feature on this object (so the distance between two 3D points might come out as 25 from triangulation while it is 30 in real life). How can I correct the intrinsic parameters of my camera calibration to minimize the geometric error?

I'm having trouble finding the right terms to search for (to get results on Google that would help me). I found some minimization methods that require matches between image points and corresponding 3D points (which I don't have), but that doesn't help me much.

Does simply multiplying the distances by a computed scale factor work for all distances?

EDIT: There is a rigidity constraint. Only one camera takes all the pictures, so only one intrinsic matrix is needed.


Solution

  • Since you use "camera" in the singular, I assume this is not a stereo setup. If your camera is already well calibrated, i.e., you accurately know its intrinsic parameters, your problem is not one of calibration but of scale:

    • If you know the size of your 3D object, you can accurately retrieve the motion of the camera around it - as accurately as you can match corresponding points on the object across images.
    • If you do not know its size, you can accurately reconstruct the camera rotations, but the translations only up to an unknown global scale (a short sketch of applying such a scale follows below).
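
In practice, once you measure one true length on the object, you can fix the global scale by multiplying every reconstructed 3D point and every camera translation by the ratio of real to reconstructed length; the rotations and the intrinsic matrix are untouched. Below is a minimal sketch of that rescaling, assuming you already have triangulated points and per-view poses. All names here (points_3d, poses, idx_a, idx_b, real_length) are hypothetical placeholders, not any specific library's API:

```python
import numpy as np

def rescale_reconstruction(points_3d, poses, idx_a, idx_b, real_length):
    """Rescale a reconstruction so one known distance matches reality.

    points_3d   : (N, 3) array of triangulated points (arbitrary scale)
    poses       : list of (R, t) pairs, R a 3x3 rotation matrix and t a
                  3-vector translation for each camera view
    idx_a, idx_b: indices of the two points whose true separation is known
    real_length : measured real-world distance between those two points
    """
    reconstructed = np.linalg.norm(points_3d[idx_a] - points_3d[idx_b])
    s = real_length / reconstructed  # e.g. 30 / 25 = 1.2 in the question

    # Scale affects only points and camera translations; rotations and
    # the intrinsic matrix K stay exactly as they were.
    scaled_points = points_3d * s
    scaled_poses = [(R, t * s) for R, t in poses]
    return scaled_points, scaled_poses, s
```

And yes: because the ambiguity is a single global scale, applying that one factor to all translations and points is consistent for every distance in the scene, so all rescaled lengths come out in real-world units at once.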