Hi, I'm using OpenCV 3 with the contrib modules. My question is: I want to calculate a SIFT descriptor at a given pixel (not at one of the detected keypoints). I'm trying to build a KeyPoint vector from the given pixels. However, to create a KeyPoint I need to know the size information in addition to the pixel location.
KeyPoint (Point2f _pt, float _size, float _angle=-1, float _response=0, int _octave=0, int _class_id=-1)
Can anybody tell me what the size in the constructor means? Do I need the angle information in order to compute a SIFT descriptor? And how can I calculate them with OpenCV 3?
@Utkarsh: I agree that the SIFT descriptor requires the orientation and scale information of the keypoint. The original paper by David G. Lowe (Distinctive Image Features from Scale-Invariant Keypoints) says, "In order to achieve orientation invariance, the coordinates of the descriptor and the gradient orientations are rotated relative to the keypoint orientation", while the scale information is used to select the level of Gaussian blur for the image during descriptor calculation.
However, the question in this post was about calculating a descriptor at a given pixel. Note that a given pixel location is not a SIFT keypoint obtained through the usual detection procedure, so orientation and scale information is not available in this case. The code mentioned in the previous answer therefore calculates the SIFT descriptor at a given pixel at the default scale (i.e. 1) and default orientation (without rotating the gradient orientations of the neighbourhood).
@Teng Long: On a separate note, I think the approach you are using to match keypoints in the two images (the original and the rotated one) is ambiguous. You should run SIFT keypoint detection on both images and calculate their corresponding descriptors separately. Then you can use brute-force matching on these two sets of descriptors.
The following code calculates SIFT keypoints on an image and its 45° rotated version, computes their descriptors, and then matches them using brute-force matching.
#include "opencv2/opencv_modules.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/nonfree/features2d.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <algorithm>
#include <stdio.h>
using namespace cv;
int main( int argc, char** argv )
{
Mat img_1, img_2;
// Load image in grayscale format
img_1 = imread( "scene.jpg", CV_LOAD_IMAGE_GRAYSCALE );
// Rotate the input image without losing the corners
Point center = Point(img_1.cols / 2, img_1.rows / 2);
double angle = 45, scale = 1;
Mat rot = getRotationMatrix2D(center, angle, scale);
Rect bbox = cv::RotatedRect(center, img_1.size(), angle).boundingRect();
rot.at<double>(0,2) += bbox.width/2.0 - center.x;
rot.at<double>(1,2) += bbox.height/2.0 - center.y;
warpAffine(img_1, img_2, rot, bbox.size());
// SIFT feature detector
SiftFeatureDetector detector;
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect( img_1, keypoints_1 );
detector.detect( img_2, keypoints_2 );
// Calculate descriptors
SiftDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );
// Matching descriptors using Brute Force
BFMatcher matcher(NORM_L2);
std::vector<DMatch> matches;
matcher.match(descriptors_1, descriptors_2, matches);
//-- Quick calculation of max and min distances between Keypoints
double max_dist = 0; double min_dist = 100;
for( int i = 0; i < descriptors_1.rows; i++ )
{ double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
//-- Keep only "good" matches (i.e. whose distance is less than 2*min_dist,
//-- or a small arbitrary value ( 0.02 ) in the event that min_dist is very
//-- small)
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_1.rows; i++ )
{ if( matches[i].distance <= std::max(2*min_dist, 0.02) )
{ good_matches.push_back( matches[i]); }
}
//-- Draw only "good" matches
Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2,
good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
std::vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
//-- Show detected matches
imshow( "Good Matches", img_matches );
waitKey(0);
return 0;
}
And here is the result: