Tags: matrix, 3d, computer-vision, augmented-reality, matlab-cvst

Find my camera's 3D position and orientation relative to a 2D marker


I am currently building an augmented reality application and am stuck on a problem that seems quite easy but is very hard for me. The problem is as follows:

My device's camera is calibrated and detects a 2D marker (such as a QR code). I know the focal length, the sensor's position, the distance between my camera and the center of the marker, the real size of the marker, and the coordinates of the marker's 4 corners and of its center on the 2D image I got from the camera. See the following image:

illustration

On the image, we know the distances a, b, c, d and the coordinates of the red dots.

What I need to know is the position and orientation of the camera relative to the marker (as represented on the image, the origin is the center of the marker).

Is there an easy and fast way to do so? I tried a method I came up with myself (using Al-Kashi's formulas, i.e. the law of cosines), but it ended with too many errors. Could someone point out a way to get me out of this?


Solution

  • You can find some example code for the EPnP algorithm on this webpage. The code consists of one header file and one source file, plus one file for a usage example, so it shouldn't be too hard to include in your project.

    Note that this code is released for research/evaluation purposes only, as mentioned on this page.

    EDIT:

    I just realized that this code needs OpenCV to work. Although it would add a fairly big dependency to your project, note that the current version of OpenCV has a built-in function called solvePnP, which does exactly what you want: it estimates the camera pose from 2D–3D point correspondences.
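    Since the marker is planar, the pose can also be recovered directly from the homography between the marker plane and the image, without any external solver. Below is a minimal sketch in Python with NumPy illustrating that approach; the intrinsic matrix, marker size, and all names are illustrative assumptions, not part of the original answer, and for production use OpenCV's solvePnP is the more robust choice:

    ```python
    import numpy as np

    def homography_dlt(world_pts, img_pts):
        """Direct Linear Transform: homography mapping marker-plane
        coordinates (Z = 0) to pixel coordinates, from >= 4 matches."""
        A = []
        for (X, Y), (u, v) in zip(world_pts, img_pts):
            A.append([-X, -Y, -1.0, 0.0, 0.0, 0.0, u * X, u * Y, u])
            A.append([0.0, 0.0, 0.0, -X, -Y, -1.0, v * X, v * Y, v])
        # The homography is the right singular vector of A with the
        # smallest singular value (defined up to scale).
        _, _, Vt = np.linalg.svd(np.asarray(A))
        return Vt[-1].reshape(3, 3)

    def pose_from_homography(H, K):
        """Recover the rotation R and translation t of the marker plane
        in the camera frame, given the camera intrinsics K."""
        M = np.linalg.inv(K) @ H          # M ~ [r1 | r2 | t] up to scale
        s = 1.0 / np.linalg.norm(M[:, 0])
        if s * M[2, 2] < 0:               # marker must be in front (t_z > 0)
            s = -s
        r1, r2, t = s * M[:, 0], s * M[:, 1], s * M[:, 2]
        R = np.column_stack([r1, r2, np.cross(r1, r2)])
        U, _, Vt = np.linalg.svd(R)       # snap to the nearest true rotation
        return U @ Vt, t
    ```

    The camera's position in the marker's coordinate frame (the origin at the marker center, as in your figure) is then `C = -R.T @ t`, and `R.T` is the camera's orientation in that frame.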