Tags: matlab, opencv, graphics, 3d, projective-geometry

How to implement 3d reconstruction algorithms


I have been studying 3d reconstruction from multiple 2d views lately. Most of what I have read focuses on the fundamental matrix, epipolar geometry, and other theoretical principles of the subject. Let us say that, given two images, I know how to calculate the 3d point corresponding to each 2d point.

My question here is:

  1. Which software or libraries should I use for displaying the 3d model?
  2. How do I represent the 3d model?

I do know that MATLAB or OpenCV can be used, but I did not find anything that discusses how to do it.


Solution

  • Check out disparity maps in OpenCV. You can use them to generate depth maps (similar to those you get from - let's say - a Kinect, but obviously less accurate). Each pixel in a disparity map represents the distance to the object, based on the differences between the two frames used to generate the map.

    There is an example in the OpenCV samples that will give you an idea of how it's done.
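    A minimal sketch of the idea using OpenCV's Python bindings: a stereo pair is fed to a block-matching stereo matcher, which outputs a disparity value per pixel. The synthetic "stereo pair" below is just a randomly textured image shifted by 8 pixels, standing in for the rectified left/right images you would load from disk.

    ```python
    import numpy as np
    import cv2

    # Synthetic rectified stereo pair: random texture shifted 8 px horizontally,
    # so the true disparity is roughly 8 everywhere.
    rng = np.random.default_rng(0)
    left = rng.integers(0, 255, (128, 160)).astype(np.uint8)
    right = np.roll(left, -8, axis=1)

    # Block-matching stereo matcher: numDisparities must be a multiple of 16,
    # blockSize must be odd.
    stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
    disparity = stereo.compute(left, right)  # fixed-point result, scaled by 16

    disparity_px = disparity.astype(np.float32) / 16.0  # disparity in pixels
    ```

    With real images you would first rectify the pair (e.g. via `cv2.stereoRectify`); `cv2.StereoSGBM_create` usually gives smoother maps than plain block matching at some extra cost.
    
    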

    As for the representation of the 3D data, I would suggest PCL (Point Cloud Library) or any other library that works with point clouds, because that is the standard practice nowadays. Point clouds let you apply various algorithms to the spatial data (feature matching, merging, transformation, etc.), plus they give you the ability to generate meshes. PCL, for instance, has - if I remember correctly - at least 3 ways of generating a mesh from a point cloud (the NURBS module is sadly still experimental and buggy).
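    To connect the two steps, here is a hedged sketch of turning a disparity map into a point cloud and saving it in the PLY format that PCL, MeshLab, and CloudCompare all read. The calibration values (focal length `f`, principal point `cx, cy`, `baseline`) are made-up placeholders; in practice the `Q` matrix comes from `cv2.stereoRectify`.

    ```python
    import numpy as np
    import cv2

    # Hypothetical calibration for illustration only: focal length in pixels,
    # principal point at the image centre, stereo baseline in metres.
    f, cx, cy, baseline = 400.0, 64.0, 64.0, 0.1

    # Q maps (x, y, disparity, 1) to homogeneous 3D coordinates; normally
    # cv2.stereoRectify computes this for you.
    Q = np.float32([[1, 0, 0, -cx],
                    [0, 1, 0, -cy],
                    [0, 0, 0,   f],
                    [0, 0, 1.0 / baseline, 0]])

    # Stand-in disparity map (constant 8 px); use your real map here.
    disparity = np.full((128, 128), 8.0, dtype=np.float32)
    points = cv2.reprojectImageTo3D(disparity, Q)  # H x W x 3 array of XYZ

    # A point cloud is just an N x 3 array; dump it as ASCII PLY.
    mask = disparity > 0
    xyz = points[mask]
    with open("cloud.ply", "w") as fh:
        fh.write("ply\nformat ascii 1.0\n")
        fh.write(f"element vertex {len(xyz)}\n")
        fh.write("property float x\nproperty float y\nproperty float z\n")
        fh.write("end_header\n")
        for x, y, z in xyz:
            fh.write(f"{x} {y} {z}\n")
    ```

    With these numbers, depth follows Z = f * baseline / disparity = 400 * 0.1 / 8 = 5 m. Once the cloud is in a file, PCL's surface-reconstruction modules (e.g. greedy projection triangulation) can mesh it.
    
    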