I am working on an image matching project. First we use SURF to find a matching pair of pictures; there is at least one object that appears in both images. Now I need to find out, for this same object, what its size is in each of the two pictures. Relative size is enough.
Both SIFT and SURF are only local feature point descriptors. I will get a bunch of descriptors and their associated feature point locations, but how do I use this information to determine an object's size? I am thinking of using contours: if I can correctly associate a contour between the two images, then I can compute the object sizes easily from the contour point locations. But how do I associate contours?
I assume there must be some way to apply SIFT or SURF to get object information, since people do object tracking using SIFT... but after searching for a long time I still couldn't find any useful information...
Any help would be appreciated! Thanks in advance!
The SIFT/SURF detector assigns each feature a canonical scale. By comparing the ratios of the scales of the matched features in the two images, you should be able to determine the object's relative size.
You should already be comparing the scales of the potential matches anyway, in order to discard spurious matches that are inconsistent with the overall transformation.
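A minimal sketch of this idea in OpenCV Python, assuming SIFT is available in your build (SURF via `cv2.xfeatures2d.SURF_create` works the same way if you have the contrib modules); the image paths and the 0.75 ratio-test threshold are placeholders:

```python
import cv2
import numpy as np

# Load the two images in grayscale (placeholder paths).
img1 = cv2.imread("scene1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene2.png", cv2.IMREAD_GRAYSCALE)

# SIFT is used here for illustration; SURF exposes the same interface.
detector = cv2.SIFT_create()
kp1, des1 = detector.detectAndCompute(img1, None)
kp2, des2 = detector.detectAndCompute(img2, None)

# Match descriptors and keep good matches via Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn if m.distance < 0.75 * n.distance]

# Each keypoint's .size attribute is its canonical scale (diameter).
# The size ratio of a matched pair estimates the local scale change;
# the median over all good matches gives a robust relative-size estimate.
scale_ratios = [kp2[m.trainIdx].size / kp1[m.queryIdx].size for m in good]
relative_size = np.median(scale_ratios)
print(f"Object in image 2 is roughly {relative_size:.2f}x its size in image 1")
```

Using the median rather than the mean keeps a few wrong matches from skewing the estimate; if you also run RANSAC to fit a homography or similarity transform, the scale recovered from that transform should agree with this ratio.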