I am developing an application that matches 3D object models against a scene point cloud using OpenCV's ppf_match_3d (surface matching) module.
I also read that ICP is used to correct pose errors, but that using PPF without ICP already gives acceptable results. Anyway, I tried to use ICP, but it always gave me a "Bad argument" error.
The code I used is written below:
#include &lt;string&gt;
#include &lt;vector&gt;
#include &lt;opencv2/core.hpp&gt;
#include &lt;opencv2/surface_matching.hpp&gt;
#include &lt;opencv2/surface_matching/ppf_helpers.hpp&gt;

using namespace std;
using namespace cv;
using namespace cv::ppf_match_3d;

void computer_vision_3d(string in_path)
{
    Mat files_clouds[NUM_OF_FILES]; // > Stores the point cloud of each object
    Mat scene_cloud;                // > Stores the scene point cloud
    ppf_match_3d::PPF3DDetector
        detector(RELATIVE_SAMPLING_STEP, RELATIVE_DISTANCE_STEP); // > Matches the model with the scene
    vector<Pose3DPtr> results;      // > Stores the results of the processing

    // ! Phase 1 - Train Model
    scene_cloud = loadPLYSimple(DEFAULT_SCENE_PATH.c_str(), PARAM_NORMALS);
    for(int i = 0; i < NUM_OF_FILES; i++)
    {
        // . Init Point Cloud
        string file_path = DEFAULT_OBJECT_PATH + to_string(i) + ".ply";
        files_clouds[i] = loadPLYSimple(file_path.c_str(), PARAM_NORMALS);

        // . Train Model
        detector.trainModel(files_clouds[i]);
    }

    // ! Phase 2 - Detect from scene
    detector.match(scene_cloud, results,
                   RELATIVE_SCENE_SAMPLE_STEP, RELATIVE_SCENE_DISTANCE);

    // ! Phase 3 - Results
    if(results.size() > 0)
    {
        Pose3DPtr result = results[0];
        result->printPose();

        // ! Transforms the point clouds to the model pose
        for(int i = 0; i < NUM_OF_FILES; i++)
        {
            Mat pct = transformPCPose(files_clouds[i], result->pose);
            string f_name = "match" + to_string(i) + ".ply";
            writePLY(pct, f_name.c_str());
        }
    }
}
One of the models, the scene, and the result:
Figure 1 - One of the seven models.
Figure 2 - The scene.
Figure 3 - The weird result.
As the author of that module, I would like to address your questions:
1. detector.match() stores in results the poses of the model in the scene. But as far as I understand, a pose is the position and orientation of the model; how will I know which model it is?
There is only a single model, so the poses are different hypotheses of the same model.
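To detect several different objects, one workable pattern is to train a separate detector per model and call match() once per detector, so every returned pose unambiguously belongs to one model. A minimal sketch of that idea follows; the constants and the helper name are hypothetical, mirroring the question's code:

    #include <string>
    #include <vector>
    #include <opencv2/core.hpp>
    #include <opencv2/surface_matching.hpp>
    #include <opencv2/surface_matching/ppf_helpers.hpp>

    using namespace cv;
    using namespace cv::ppf_match_3d;

    // Hypothetical constants, mirroring the question's code.
    const int NUM_OF_FILES = 7;
    const double RELATIVE_SAMPLING_STEP = 0.025;
    const double RELATIVE_DISTANCE_STEP = 0.05;
    const double RELATIVE_SCENE_SAMPLE_STEP = 1.0 / 40.0;
    const double RELATIVE_SCENE_DISTANCE = 0.05;

    // One detector per model: each detector is trained on exactly one
    // model cloud, so the poses it returns are hypotheses for that
    // model alone.
    void match_each_model(const Mat& scene_cloud, const std::string& object_path)
    {
        for (int i = 0; i < NUM_OF_FILES; i++)
        {
            std::string file_path = object_path + std::to_string(i) + ".ply";
            Mat model_cloud = loadPLYSimple(file_path.c_str(), 1); // 1 = load normals

            PPF3DDetector detector(RELATIVE_SAMPLING_STEP, RELATIVE_DISTANCE_STEP);
            detector.trainModel(model_cloud);

            std::vector<Pose3DPtr> results;
            detector.match(scene_cloud, results,
                           RELATIVE_SCENE_SAMPLE_STEP, RELATIVE_SCENE_DISTANCE);

            if (!results.empty())
                results[0]->printPose(); // best hypothesis for model i
        }
    }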
2. When I print the pose of the first result, it gives me a 4x4 matrix of float values. Where can I find out what they mean?
It is an augmented matrix [R|t] with the extra row [0, 0, 0, 1] to homogenize it.
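In other words, the top-left 3x3 block is the rotation R and the last column holds the translation t, so a model point x maps into the scene as x' = R * x + t. A small sketch of unpacking those blocks from a result (the function name is hypothetical):

    #include <opencv2/core.hpp>
    #include <opencv2/surface_matching/pose_3d.hpp>

    using namespace cv;

    // Unpack the homogeneous pose [R|t; 0 0 0 1] into R and t.
    void unpack_pose(const ppf_match_3d::Pose3DPtr& result)
    {
        const Matx44d& T = result->pose;
        Matx33d R;
        Vec3d t;
        for (int r = 0; r < 3; r++)
        {
            for (int c = 0; c < 3; c++)
                R(r, c) = T(r, c); // top-left 3x3 block: rotation
            t[r] = T(r, 3);        // last column: translation
        }
        // A model point x is mapped into the scene as x' = R * x + t.
    }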
3. Still on pose printing: it gives me the Model Index, which at first I thought was the number of the model I used to train the detector. The problem is: I used 7 models to train the detector, and the first result gives me "Pose to Model Index 12". So I thought it was the Model Description Index as in Drost (2012). But if it really is the Model Description Index, how can I know which model this index belongs to?
It is the ID of the matching model point (the correspondence), not the model ID. As I said, multiple models are not supported.
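For reference, these are the main fields a Pose3D result exposes; modelIndex is the index of the matched model point, not an index into a set of trained models (a small sketch, assuming a result obtained as in the question):

    #include <iostream>
    #include <opencv2/surface_matching/pose_3d.hpp>

    void inspect(const cv::ppf_match_3d::Pose3DPtr& result)
    {
        std::cout << "modelIndex (matched model point, not a model ID): "
                  << result->modelIndex << "\n"
                  << "numVotes: " << result->numVotes << "\n"
                  << "residual: " << result->residual << "\n";
    }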
4. According to the tutorial, using transformPCPose and writing the result to a PLY file should give a visual result of the matching, but the documentation says it returns a 4x4 pose matrix. I printed it anyway, and it gives me a weird image with more than 16 vertices, so I don't understand what the tutorial was doing. How can I write the visual result to a file like the tutorial did?
The function transforms a point cloud with a given pose, so it will only give correct results if your pose is correct. I don't think that the pose produced by your implementation is correct, and the "Bad argument" exception in ICP is probably caused by that as well.
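For completeness, the usual pattern from the surface_matching tutorial is to transform the model cloud by the detected pose (optionally refined with ICP first) and write that transformed cloud out; overlaying it on the scene in a PLY viewer shows the alignment. A sketch under the assumption of a single trained model with valid poses:

    #include <vector>
    #include <opencv2/core.hpp>
    #include <opencv2/surface_matching.hpp>
    #include <opencv2/surface_matching/ppf_helpers.hpp>

    using namespace cv;
    using namespace cv::ppf_match_3d;

    // model_cloud and scene_cloud must both carry valid normals.
    void write_visual_result(const Mat& model_cloud, const Mat& scene_cloud,
                             std::vector<Pose3DPtr>& results)
    {
        if (results.empty())
            return;

        // Optional: refine the detector's pose hypotheses with ICP.
        ICP icp(100, 0.005f, 2.5f, 8);
        icp.registerModelToScene(model_cloud, scene_cloud, results);

        // Transform the model by the best pose and save it.
        Mat pct = transformPCPose(model_cloud, results[0]->pose);
        writePLY(pct, "match.ply");
    }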
And one more note: always make sure that both the model and the scene have surface normals that are correctly oriented towards the camera.
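If the PLY files do not already contain normals, they can be estimated at load time with computeNormalsPC3d; a minimal sketch, with the camera assumed to sit at the origin (the helper name is hypothetical):

    #include <opencv2/core.hpp>
    #include <opencv2/surface_matching/ppf_helpers.hpp>

    using namespace cv;
    using namespace cv::ppf_match_3d;

    // Load a cloud that lacks normals and estimate them, flipping each
    // normal so that it points towards the assumed camera position.
    Mat load_with_normals(const char* path)
    {
        Mat raw = loadPLYSimple(path, 0); // 0 = file carries no normals
        Mat cloud_with_normals;
        computeNormalsPC3d(raw, cloud_with_normals, /*NumNeighbors=*/6,
                           /*FlipViewpoint=*/true,
                           Vec3f(0, 0, 0)); // camera at origin (assumption)
        return cloud_with_normals;
    }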