Tags: c++, point-cloud-library, point-clouds, normals

Rendering mesh normals as normal maps with PCL


I am trying to generate normal maps given a mesh, camera pose, and camera intrinsics.

My plan is to compute the vertex normal for each point in the cloud and then project the normals onto an image plane using the corresponding camera pose and intrinsics. More specifically, I would first calculate the vertex normals, then convert the point coordinates from world coordinates into camera coordinates using the camera pose. Finally, using the camera intrinsics, the point cloud can be projected onto an image plane, where each pixel stores the surface normal of the corresponding 3D vertex.

Below is my code:

#include <iostream>
#include <thread>
#include <pcl/io/ply_io.h>
#include <pcl/point_types.h>
#include <pcl/features/from_meshes.h>
#include <pcl/visualization/pcl_visualizer.h>

using namespace std;
using namespace pcl;

void readPLY(PolygonMesh::Ptr mesh, string fname, bool printResult=false)
{
    PLYReader reader;
    int success = reader.read(fname, *mesh); // load the file
    if (success == -1) {
        cout << "Couldn't read file " << fname << endl;
        exit(-1);
    }

    if(printResult){
        cout << "Loaded "
        << mesh->cloud.width * mesh->cloud.height
        << " data points from "
        << fname
        << " with the following fields: "
        << endl;

        // convert from pcl/PCLPointCloud2 to pcl::PointCloud<T>
        PointCloud<PointXYZ>::Ptr cloud (new PointCloud<PointXYZ>);
        fromPCLPointCloud2(mesh->cloud, *cloud);

        // print the first 10 vertices (bounds-checked for small clouds)
        cout << "Vertices:" << endl;
        for (size_t i = 0; i < cloud->size() && i < 10; ++i)
            cout << "    " << cloud->points[i].x
                 << " " << cloud->points[i].y
                 << " " << cloud->points[i].z << endl;

        // print the first 10 polygons (bounds-checked for small meshes)
        cout << "Polygons:" << endl;
        for (size_t i = 0; i < mesh->polygons.size() && i < 10; ++i)
            cout << mesh->polygons[i] << endl;
    }
}

void computeNormal(PolygonMesh::Ptr mesh,
                   PointCloud<Normal>::Ptr normal,
                   bool printResult=false)
{
    // convert from pcl/PCLPointCloud2 to pcl::PointCloud<T>
    PointCloud<PointXYZ>::Ptr cloud (new PointCloud<PointXYZ>);
    fromPCLPointCloud2(mesh->cloud, *cloud);

    // compute surface normal
    pcl::features::computeApproximateNormals(*cloud, mesh->polygons, *normal);

    // print results
    if (printResult){
        cout << "Normal cloud contains "
             << normal->width * normal->height
             << " points" << endl;

        // print the first 10 vertex normals (bounds-checked for small clouds)
        cout << "Vertex normals:" << endl;
        for (size_t i = 0; i < normal->size() && i < 10; ++i)
            cout << "    " << normal->points[i] << endl;
    }
}


int main (int argc, char** argv)
{
    // usage: ./main [path/to/ply] (--debug)
    if (argc < 2) {
        cout << "Usage: " << argv[0] << " [path/to/ply] (--debug)" << endl;
        return -1;
    }
    string fname = argv[1];

    // check if debug flag is set
    bool debug = false;
    for(int i=0;i<argc;++i){
        string arg = argv[i];
        if(arg == "--debug")
            debug = true;
    }

    // read file
    PolygonMesh::Ptr mesh (new PolygonMesh);
    readPLY(mesh, fname, debug);

    // calculate normals
    PointCloud<Normal>::Ptr normal (new PointCloud<Normal>);
    computeNormal(mesh, normal, debug);
}

Currently, I have already obtained surface normals for each vertex with pcl::features::computeApproximateNormals. Is there a way to use PCL to project the normals onto the image plane, with the xyz components of each normal mapped to the RGB channels, and save the image to a file?


Solution

  • Welcome to Stack Overflow. What the documentation says is:

    Given a geometric surface, it’s usually trivial to infer the direction of the normal at a certain point on the surface as the vector perpendicular to the surface in that point.

    From what you say, I gather that you already have the surfaces, for which you can easily calculate surface normals. Normal estimation exists because raw 3D point cloud data is essentially a set of sample points from the real world, with no surface information attached. What normal estimation does is fit a plane (a 2D regression) to the neighborhood of each point and take the normal of that fitted surface. You cannot compare these two methods directly; they serve different purposes.

    For question two: Yes. Refer to this SO answer.