I am trying to compute the relative pose between two cameras from their captured images, using the usual feature-correspondence pipeline: I use the feature matches to compute the essential matrix, then decompose it to obtain the rotation and translation between the two views. I am currently using OpenCV's findEssentialMat and recoverPose functions to achieve this.
How can I find the uncertainty of this measurement? Should I refine the essential matrix itself (using the epipolar error), which would give me the essential matrix's covariance, and if so, is it possible to derive the pose covariance from that? Or is there another way to estimate the uncertainty of the pose directly?
There is also another issue in play here: while I compute the relative pose of camera C2 (call it P2) with respect to camera C1, the pose of camera C1 (say P1) has its own covariance. How does this affect P2?
1) You should refine your pose estimate directly through bundle adjustment, and compute the Hessian of the cost function at the optimum; its inverse yields the covariance you seek. Some BA packages (e.g. Ceres) have APIs to facilitate this; see the sketch after this list.
2) Irrelevant. Lacking an absolute reference, all you can hope for is an estimate of the uncertainty of the relative pose. Put another way, if all you have are measurements of relative motion between two cameras, you may as well assume that one is certain and attribute the motion uncertainty entirely to the pose of the other.
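Ceres is a C++ library (its ceres::Covariance class computes exactly this inverse-Hessian covariance at the optimum). As a rough Python stand-in, here is a hedged sketch of the same idea: refine the 5-DOF relative pose (rotation vector plus a unit translation direction, since the scale of t is unobservable from the essential matrix) by minimizing the Sampson epipolar error with scipy.optimize.least_squares, then approximate the covariance from the Gauss-Newton Hessian J^T J at the optimum. The parameterization and function names are illustrative assumptions, not a fixed API:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def pose_to_E(params):
    # params = [rx, ry, rz, theta, phi]: rotation vector plus spherical
    # angles for the unit translation direction (5 DOF total).
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    theta, phi = params[3], params[4]
    t = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    t_x = np.array([[0, -t[2], t[1]],     # skew-symmetric [t]_x
                    [t[2], 0, -t[0]],
                    [-t[1], t[0], 0]])
    return t_x @ R                        # E = [t]_x R

def sampson_residuals(params, x1, x2):
    # x1, x2: Nx3 homogeneous points in *normalized* image coordinates
    # (pixel coordinates premultiplied by K^{-1}).
    E = pose_to_E(params)
    Ex1 = (E @ x1.T).T
    Etx2 = (E.T @ x2.T).T
    num = np.sum(x2 * Ex1, axis=1)        # epipolar error x2^T E x1
    den = np.sqrt(Ex1[:, 0]**2 + Ex1[:, 1]**2
                  + Etx2[:, 0]**2 + Etx2[:, 1]**2)
    return num / den

def rt_to_params(R, t):
    # Convert an initial (R, t), e.g. from recoverPose, to the 5-vector.
    rvec = Rotation.from_matrix(R).as_rotvec()
    t = t.ravel() / np.linalg.norm(t)
    theta = np.arccos(np.clip(t[2], -1.0, 1.0))
    phi = np.arctan2(t[1], t[0])
    return np.hstack([rvec, [theta, phi]])

def estimate_pose_covariance(R0, t0, x1n, x2n):
    x0 = rt_to_params(R0, t0)
    res = least_squares(sampson_residuals, x0, args=(x1n, x2n))
    # Gauss-Newton covariance: sigma^2 (J^T J)^{-1}, with the noise
    # variance sigma^2 estimated from the residuals at the optimum.
    J = res.jac
    dof = len(res.fun) - x0.size
    sigma2 = (res.fun @ res.fun) / dof
    cov = sigma2 * np.linalg.inv(J.T @ J)
    return res.x, cov
```

The 5x5 covariance returned here lives in the (rotation vector, translation direction) parameterization; if you need it for a different pose representation, propagate it through the Jacobian of the change of variables.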