I've got a pair of calibrated cameras, so I know their intrinsic and extrinsic parameters. Both cameras are looking at the same plane. If I pick a point in one image, how can I find the corresponding point in the other image?
The cameras are pretty close to one another, so assume there is no occlusion and both can see the same object.
Is there an OpenCV function, or set of functions, to do this? My point lies on Z = 0 in the world.
Basically:
P_CAM1=(200,300) -> P_CAM2= ?
The answer:
cv::undistort (for a whole image) or cv::undistortPoints (for individual points) to remove lens distortion from P1.
P1ccd_f = A1^(-1) * P1
-> P1 expressed in normalized (f = 1) camera-1 coordinates, as a homogeneous vector [x, y, 1]^T. The corresponding world point lies on the plane Z = 0.
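A minimal OpenCV sketch of these first two steps (the function and variable names are mine). Note that cv::undistortPoints, when called without a P argument, both undistorts the point and applies A1^(-1), so it replaces the explicit A1^(-1) * P1 multiplication:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Undistort P1 and convert it to normalized (f = 1) camera-1 coordinates.
cv::Mat toNormalizedCam1(const cv::Point2d& p1,   // P_CAM1, e.g. (200, 300)
                         const cv::Mat& A1,       // 3x3 intrinsic matrix of camera 1
                         const cv::Mat& dist1)    // distortion coefficients of camera 1
{
    std::vector<cv::Point2d> src{p1}, dst;
    cv::undistortPoints(src, dst, A1, dist1);     // output is already in normalized coords
    // P1ccd_f as a homogeneous column vector [x, y, 1]^T
    return (cv::Mat_<double>(3, 1) << dst[0].x, dst[0].y, 1.0);
}
```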
Then intersect the camera-1 viewing ray with the world plane Z = 0:
Copt1_w = -R1^T * T1   (optical centre of camera 1 in world coordinates)
V1_w = R1^T * P1ccd_f   (ray direction in world coordinates)
Copt1_w + lambda * V1_w = [Px_w, Py_w, 0]^T
-> lambda = -Copt1_w(z) / V1_w(z)
P1_w = Copt1_w + lambda * V1_w = [Px_w, Py_w, 0]^T
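A sketch of the ray/plane intersection under the same assumptions (R1 is the 3x3 rotation and T1 the 3x1 translation of camera 1, stored as CV_64F matrices; names are mine):

```cpp
#include <opencv2/opencv.hpp>

// Intersect the camera-1 viewing ray with the world plane Z = 0.
// p1n is the normalized point [x, y, 1]^T from the previous step.
cv::Mat intersectWithZ0(const cv::Mat& R1,   // 3x3 rotation of camera 1 (world -> camera)
                        const cv::Mat& T1,   // 3x1 translation of camera 1
                        const cv::Mat& p1n)  // 3x1 normalized point P1ccd_f
{
    cv::Mat Copt1w = -(R1.t() * T1);         // optical centre of camera 1 in world coords
    cv::Mat V1w    = R1.t() * p1n;           // ray direction in world coords
    double lambda  = -Copt1w.at<double>(2) / V1w.at<double>(2);
    return Copt1w + lambda * V1w;            // P1_w = [Px_w, Py_w, 0]^T
}
```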
Finally, project that world point into camera 2:
P1ccd2_f = [R2|T2] * [Px_w, Py_w, 0, 1]^T
P1ccd2_f = P1ccd2_f / P1ccd2_f(3)
P2 = A2 * P1ccd2_f
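And a sketch of this last step, again with my own names. It ignores camera-2 lens distortion; cv::projectPoints with camera 2's distortion coefficients would handle that as well:

```cpp
#include <opencv2/opencv.hpp>

// Project the world point P1_w = [Px_w, Py_w, 0]^T into camera 2.
cv::Point2d projectToCam2(const cv::Mat& P1w,  // 3x1 world point (lies on Z = 0)
                          const cv::Mat& R2,   // 3x3 rotation of camera 2
                          const cv::Mat& T2,   // 3x1 translation of camera 2
                          const cv::Mat& A2)   // 3x3 intrinsic matrix of camera 2
{
    cv::Mat Pc = R2 * P1w + T2;                // same as [R2|T2] * [Px_w, Py_w, 0, 1]^T
    cv::Mat Pn = Pc / Pc.at<double>(2);        // normalize so that z = 1
    cv::Mat p2 = A2 * Pn;                      // homogeneous pixel coordinates
    return cv::Point2d(p2.at<double>(0), p2.at<double>(1));
}
```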
where
P means an image point (in homogeneous pixel coordinates),
A is the intrinsic matrix (3x3),
[R|T] is the extrinsic camera matrix (3x4).
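Putting the three sketches above together for the example in the question (A1, dist1, R1, T1 and A2, R2, T2 are the calibration results, assumed here to be CV_64F cv::Mat objects):

```cpp
cv::Mat p1n = toNormalizedCam1(cv::Point2d(200, 300), A1, dist1);  // P_CAM1 = (200, 300)
cv::Mat P1w = intersectWithZ0(R1, T1, p1n);                        // world point on Z = 0
cv::Point2d p2 = projectToCam2(P1w, R2, T2, A2);                   // P_CAM2
```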