Tags: opencv, object-detection, surf, homography

OpenCV SURF object detection homography bounding box


Why are we adding Point2f(img_object.cols, 0) to every point in scene_corners?

perspectiveTransform(obj_corners, scene_corners, H);
// Draw lines between the corners (the mapped object in the scene, image_2)
line(img_matches, scene_corners[0] + Point2f(img_object.cols, 0), scene_corners[1] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
line(img_matches, scene_corners[1] + Point2f(img_object.cols, 0), scene_corners[2] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
line(img_matches, scene_corners[2] + Point2f(img_object.cols, 0), scene_corners[3] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
line(img_matches, scene_corners[3] + Point2f(img_object.cols, 0), scene_corners[0] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);

When following the OpenCV SURF tutorial code, the bounding polygon is not as expected:

What are the limits on the orientation and distance of the object in the scene for effective recognition?

Here is the output image:


Solution

  • For your first question: img_matches is composed by placing img_object to the left of the scene image, so a pixel at (x, y) in the scene image appears at (x + img_object.cols, y) in img_matches. You therefore need to add that offset to draw the bounding box in the correct place.

    For your second question: it is hard to give fixed limits on orientation and distance for the algorithm. It depends on many factors: the object's features, image resolution, image quality, and so on.

    In your case, you mentioned that the bounding polygon is not as expected, but what is your expected bounding polygon? One thing I have noticed is that your object image is not entirely flat. If the object image was taken from an angle, it is natural for the resulting homography to map the corners like that. (If the object image were flat and viewed head-on, I think the generated bounding polygon should have edges roughly parallel to the edges of that notebook.)