I am currently working on pose estimation of one camera with respect to another using OpenCV, in a setup where camera1 is fixed and camera2 is free to move. I know the intrinsics of both cameras. My pose estimation module uses epipolar geometry, computing the essential matrix with the five-point algorithm to recover the R and t of camera2 with respect to camera1, but I would like to get the translation in metric units. To help with this, I have two GPS modules, one on camera1 and one on camera2. For now, assume camera1's GPS is flawless and accurate, while camera2's GPS exhibits some noise in X and Y. I need a way to use the OpenCV pose estimate on top of this noisy GPS to get an accurate final translation.
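For reference, this is roughly what my pose module does (a minimal sketch; the function name, point arrays, and RANSAC parameters here are placeholders, not my exact code):

```python
import cv2
import numpy as np

# pts1 (from camera1) and pts2 (from camera2) are matched keypoints
# as Nx2 float arrays; K1 and K2 are the known 3x3 intrinsics.
def relative_pose(pts1, pts2, K1, K2):
    # Normalize points with each camera's own intrinsics so a single
    # essential matrix relates the two views despite different K's.
    pts1_n = cv2.undistortPoints(pts1.reshape(-1, 1, 2).astype(np.float64), K1, None)
    pts2_n = cv2.undistortPoints(pts2.reshape(-1, 1, 2).astype(np.float64), K2, None)

    # Five-point algorithm on normalized coordinates (identity K).
    E, mask = cv2.findEssentialMat(pts1_n, pts2_n,
                                   cameraMatrix=np.eye(3),
                                   method=cv2.RANSAC,
                                   prob=0.999, threshold=1e-3)

    # Decompose E and disambiguate the four solutions via cheirality.
    # Note: t comes back as a unit vector, which is exactly the
    # missing-scale problem described above.
    _, R, t, mask = cv2.recoverPose(E, pts1_n, pts2_n,
                                    cameraMatrix=np.eye(3), mask=mask)
    return R, t
```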
Given that info, my question has two parts:

1) Would bundle adjustment help me recover the metric translation here?

2) Is there a standard way to fuse the noisy GPS readings with the visual pose estimate?

Answer:
1) No, bundle adjustment serves a different purpose, and you could not use it here anyway, because every pair you process with the five-point algorithm has its own unknown scale. After the first pair of images, you should instead switch to a perspective-n-point (PnP) algorithm: by localizing against 3D points triangulated from the first pair, each new pose stays in that pair's (single, consistent) scale. A sketch of that idea follows.
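Roughly like this (a sketch under the assumption that 2D-3D correspondences into the triangulated landmarks are already established; all names are placeholders):

```python
import cv2
import numpy as np

# After the first pair: triangulate landmarks at the (arbitrary but
# now fixed) scale of the initial pair, in camera1's frame.
def triangulate_landmarks(K1, K2, R, t, pts1, pts2):
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])  # camera1 at origin
    P2 = K2 @ np.hstack([R, t])                          # camera2 from recoverPose
    pts4d = cv2.triangulatePoints(P1, P2,
                                  pts1.astype(np.float64).T,
                                  pts2.astype(np.float64).T)
    return (pts4d[:3] / pts4d[3]).T  # Nx3 landmark positions

# Later frames: localize the moving camera against those landmarks.
# The returned pose is in the same scale the landmarks were
# triangulated in, so no per-frame scale ambiguity is introduced.
def localize_new_frame(landmarks3d, pts2d, K2):
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        landmarks3d.astype(np.float64),
        pts2d.astype(np.float64), K2, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```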
2) Yes, this is called sensor fusion. You first need to calibrate (or otherwise know) the rigid transformation between your GPS sensor coordinates and your camera coordinates. There is an open source framework you can use.
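As a toy illustration only (not the framework alluded to above), one simple scheme is to take the scale from the GPS baseline, smooth it with a scalar Kalman filter since camera2's GPS is noisy, and keep the direction from the visual estimate. All variable names and noise values below are made-up assumptions:

```python
import numpy as np

# Scalar Kalman filter on the baseline length between the cameras.
class ScaleFilter:
    def __init__(self, process_var=0.01, meas_var=4.0):
        self.s = None          # current scale estimate, meters
        self.p = 1e3           # its variance
        self.q = process_var   # how fast the true baseline may change
        self.r = meas_var      # assumed GPS XY noise, meters^2

    def update(self, gps_baseline_m):
        if self.s is None:
            self.s = gps_baseline_m           # initialize on first fix
        self.p += self.q                      # predict
        k = self.p / (self.p + self.r)        # Kalman gain
        self.s += k * (gps_baseline_m - self.s)  # correct
        self.p *= (1.0 - k)
        return self.s

def fused_translation(t_unit, gps1_xy, gps2_xy, filt):
    # t_unit: unit translation from recoverPose, already expressed in
    # the same frame as the GPS offsets, i.e. the calibrated
    # GPS-to-camera transformation has been applied.
    baseline = np.linalg.norm(np.asarray(gps2_xy) - np.asarray(gps1_xy))
    return filt.update(baseline) * (t_unit / np.linalg.norm(t_unit))
```

This only resolves the scale; a full fusion framework would also weigh the rotation and the per-axis position errors against each other.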