Tags: c#, image-processing, opencv, emgucv, surf

SURF feature detection/matching issues in EMGU 2.4


So in my spare time I like to try to automate various games through computer vision techniques. Normally template matching with filters and pixel detection works fine for me. However, I recently decided to try my hand at navigating through a level by using feature matching. What I intended was to save a filtered image of an entire explored map: (image: FullMap)

Then, every few seconds, I copy the minimap from the screen, filter it in the same manner, and use SURF to match it against my full map, which should hopefully give me the player's current location (the center of the match would be where the player is on the map). A good example of this working as intended is below (full map with the found match on the left, the minimap image on the right): (image: GoodMatch)

What I am having trouble with is that the SURF matching in the EMGU library seems to find incorrect matches in many cases: (images: BadMatch, BadMatch2)

Sometimes it's not completely bad, like below: (image: WonkyMatch)

I can kind of see what's happening: it's finding better matches for the keypoints at different locations on the map, since SURF is supposed to be scale invariant. I don't know enough about the EMGU library or SURF to limit it so that it only accepts matches like the initial good one and either throws away these bad matches or tunes things so those wonky matches become good ones instead.

I am using the new 2.4 EMGU code base, and my code for the SURF matching is below. I would really like to get it to the point where it only returns matches that are always the same size (the scaled ratio of the normal minimap size to what it would be on the full map), so that I don't get crazy-shaped matches (a possible size check is sketched after the code).

public Point MinimapMatch(Bitmap Minimap, Bitmap FullMap)
    {
        Image<Gray, Byte> modelImage = new Image<Gray, byte>(Minimap);
        Image<Gray, Byte> observedImage = new Image<Gray, byte>(FullMap);     
        HomographyMatrix homography = null;

        SURFDetector surfCPU = new SURFDetector(100, false);
        VectorOfKeyPoint modelKeyPoints;
        VectorOfKeyPoint observedKeyPoints;
        Matrix<int> indices;

        Matrix<byte> mask;
        int k = 6;
        double uniquenessThreshold = 0.9;
        try
        {
            //extract features from the object image
            modelKeyPoints = surfCPU.DetectKeyPointsRaw(modelImage, null);
            Matrix<float> modelDescriptors = surfCPU.ComputeDescriptorsRaw(modelImage, null, modelKeyPoints);

            // extract features from the observed image
            observedKeyPoints = surfCPU.DetectKeyPointsRaw(observedImage, null);
            Matrix<float> observedDescriptors = surfCPU.ComputeDescriptorsRaw(observedImage, null, observedKeyPoints);
            BruteForceMatcher<float> matcher = new BruteForceMatcher<float>(DistanceType.L2);
            matcher.Add(modelDescriptors);

            indices = new Matrix<int>(observedDescriptors.Rows, k);
            using (Matrix<float> dist = new Matrix<float>(observedDescriptors.Rows, k))
            {
                matcher.KnnMatch(observedDescriptors, indices, dist, k, null);
                mask = new Matrix<byte>(dist.Rows, 1);
                mask.SetValue(255);
                Features2DToolbox.VoteForUniqueness(dist, uniquenessThreshold, mask);
            }

            int nonZeroCount = CvInvoke.cvCountNonZero(mask);
            if (nonZeroCount >= 4)
            {
                nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints, indices, mask, 1.5, 20);
                if (nonZeroCount >= 4)
                    homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints, observedKeyPoints, indices, mask, 2);
            }

            if (homography != null)
            {
                // Project the corners of the model (minimap) image into the
                // observed (full map) image to outline the matched region.
                Rectangle rect = modelImage.ROI;
                PointF[] pts = new PointF[] {
                    new PointF(rect.Left, rect.Bottom),
                    new PointF(rect.Right, rect.Bottom),
                    new PointF(rect.Right, rect.Top),
                    new PointF(rect.Left, rect.Top)};
                homography.ProjectPoints(pts);

                // Draw the matches and the projected outline (for visual debugging only;
                // the result image is not used further).
                Image<Bgr, Byte> result = Features2DToolbox.DrawMatches(modelImage, modelKeyPoints, observedImage, observedKeyPoints, indices, new Bgr(255, 255, 255), new Bgr(255, 255, 255), mask, Features2DToolbox.KeypointDrawType.DEFAULT);
                result.DrawPolyline(Array.ConvertAll<PointF, Point>(pts, Point.Round), true, new Bgr(Color.Red), 5);

                // Return the center of the projected region as the player's position.
                return new Point(Convert.ToInt32((pts[0].X + pts[1].X) / 2), Convert.ToInt32((pts[0].Y + pts[3].Y) / 2));
            }


        }
        catch (Exception)
        {
            // If feature detection, description, or matching fails, report no match.
            return new Point(0, 0);
        }


        return new Point(0, 0);
    }
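
For reference, one way to express the "matches should keep the expected size" constraint described above is to sanity-check the projected corners before trusting the homography. The sketch below is a hypothetical helper (the IsMatchPlausible name, expectedScale, and tolerance are assumptions, not part of the original code): it measures the area of the projected quadrilateral with the shoelace formula and rejects matches whose implied scale strays too far from the known minimap-to-full-map ratio.

    // Hypothetical helper (not in the original code): reject homographies whose
    // projected quadrilateral does not match the expected minimap-to-map scale.
    private static bool IsMatchPlausible(PointF[] pts, Rectangle modelRect, double expectedScale, double tolerance)
    {
        // Area of the projected quadrilateral via the shoelace formula.
        double area = 0;
        for (int i = 0; i < pts.Length; i++)
        {
            PointF a = pts[i];
            PointF b = pts[(i + 1) % pts.Length];
            area += a.X * b.Y - b.X * a.Y;
        }
        area = Math.Abs(area) / 2.0;

        // Linear scale implied by the match, compared to the model rectangle.
        double modelArea = (double)modelRect.Width * modelRect.Height;
        double impliedScale = Math.Sqrt(area / modelArea);

        return impliedScale >= expectedScale * (1 - tolerance) &&
               impliedScale <= expectedScale * (1 + tolerance);
    }

It could be called right after homography.ProjectPoints(pts), e.g. if (!IsMatchPlausible(pts, rect, expectedScale, 0.25)) return new Point(0, 0);, so that only matches close to the expected size are accepted.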

Solution

  • You have a particular scenario in which there is a black region all around the extracted keypoints. When it comes to feature matching, remember that it occurs between descriptors corresponding to the extracted keypoints.

    A SURF descriptor describes a patch, not a single keypoint, and in your scenario that could be the cause of your poor matching performance (see the masking sketch at the end of this answer).

    [EDIT]

    Analyzing your scenario, a possible candidate method is one that consists of partial contour matching. I don't think you will find it already implemented in OpenCV out of the box, so I can suggest a good paper, "Efficient Partial Shape Matching of Outer Contours" by Donoser, which you can grab from CiteSeerX and implement fairly easily.
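
    As a concrete illustration of the point above about the black region: in EMGU 2.4 both DetectKeyPointsRaw and ComputeDescriptorsRaw accept an optional mask image, so detection and description can be restricted to the informative part of the minimap. The snippet below is only a sketch, assuming the uninformative border is pure black; the erosion amount is a guess to keep descriptor patches away from the border.

        // Sketch: build a mask that excludes the black border so SURF keypoints and
        // descriptors are only computed on informative pixels (values assumed).
        Image<Gray, Byte> modelMask = modelImage.ThresholdBinary(new Gray(0), new Gray(255));
        modelMask = modelMask.Erode(8); // pull the mask inward, away from the black border

        modelKeyPoints = surfCPU.DetectKeyPointsRaw(modelImage, modelMask);
        Matrix<float> modelDescriptors = surfCPU.ComputeDescriptorsRaw(modelImage, modelMask, modelKeyPoints);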
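
    If you want to experiment with the contour idea, a first step is extracting the outer contours of the filtered map images; EMGU 2.4 exposes this through FindContours. The snippet below is just a starting point under assumed threshold values, not the matching algorithm from the paper.

        // Sketch: extract outer contours as input for a partial shape matching step.
        Image<Gray, Byte> binary = observedImage.ThresholdBinary(new Gray(50), new Gray(255));
        for (Contour<Point> contour = binary.FindContours(
                 Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
                 Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_EXTERNAL);
             contour != null; contour = contour.HNext)
        {
            // Each contour is a sequence of points along an outer boundary; these are
            // the fragments a partial contour matcher (e.g. Donoser's method) would compare.
        }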