
Is there a gesture recognizer that handles both pinch and pan together?


I am working with the iOS 4.2 SDK to add zoom and pan to my application. I have attached both a UIPinchGestureRecognizer and a UIPanGestureRecognizer to my view, but only one of them seems to recognize a gesture at a time. In particular, the pan recognizer reacts only while a single finger is down, and the pinch recognizer takes over once a second finger is present. That is workable, but it has side effects that I think degrade the quality of the user experience.

When you put two fingers down and then move one of them, the image expands (zooms in) as it should, but the pixels that were under the fingers no longer stay under the fingers. The image scales about its own center rather than about the midpoint between the two fingers, and that midpoint is itself moving. I want the movement of that midpoint to drive the panning of the image as a whole.

Do nearly all iOS applications behave this way, zooming in or out around the center of the image rather than keeping the pixels under the fingers tracking the fingers?

It seems to me that creating a custom gesture recognizer is the right design approach to this problem, but it also seems likely that someone has already written such a recognizer and made it freely available, including for commercial use. Is there such a UIGestureRecognizer?


Solution

  • Since no better solution surfaced, I created a custom gesture recognizer that achieves the desired results. Below are the key code fragments. The recognizer reports where the view should be repositioned and what its new scale should be, using the centroid of the two touches as the anchor of both the pan and the zoom, so that the pixels under the fingers stay under the fingers at all times. (If the fingers rotate around each other, that rotation is not supported, and nothing prevents the user from making such a gesture.) This recognizer pans and zooms simultaneously with two fingers; I still need to add support for one-finger panning, including the case where one of the two fingers is lifted.

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
    {
        // We can only process if we have two fingers down...
        if ( FirstFinger == nil || SecondFinger == nil )
            return;
    
        // We do not attempt to determine if the first finger, second finger, or
        // both fingers are the reason for this method call. For this reason, we
        // do not know if either is stale or updated, and thus we cannot rely
        // upon the UITouch's previousLocationInView method. Therefore, we need to
        // cache the latest UITouch's locationInView information each pass.
    
        // Break down the previous finger coordinates...
        float A0x = PreviousFirstFinger.x;
        float A0y = PreviousFirstFinger.y;
        float A1x = PreviousSecondFinger.x;
        float A1y = PreviousSecondFinger.y;
        // Update our cache with the current fingers for next pass through here...
        PreviousFirstFinger = [FirstFinger locationInView:nil];
        PreviousSecondFinger = [SecondFinger locationInView:nil];
        // Break down the current finger coordinates...
        float B0x = PreviousFirstFinger.x;
        float B0y = PreviousFirstFinger.y;
        float B1x = PreviousSecondFinger.x;
        float B1y = PreviousSecondFinger.y;

        // Calculate the zoom resulting from the two fingers moving toward or away from each other.
        // Guard against a zero previous separation, which would make the ratio undefined...
        float OldScale = Scale;
        float PreviousDistance = sqrtf((A0x-A1x)*(A0x-A1x) + (A0y-A1y)*(A0y-A1y));
        float CurrentDistance = sqrtf((B0x-B1x)*(B0x-B1x) + (B0y-B1y)*(B0y-B1y));
        if ( PreviousDistance > 0.0f )
            Scale *= CurrentDistance/PreviousDistance;

        // Calculate the old and new centroids so that we can compare the centroid's movement...
        CGPoint OldCentroid = { (A0x + A1x)/2, (A0y + A1y)/2 };
        CGPoint NewCentroid = { (B0x + B1x)/2, (B0y + B1y)/2 };    
    
        // Calculate the pan values to apply to the view so that the combination of zoom and pan
        // appear to apply to the centroid rather than the center of the view...
        Center.x = NewCentroid.x + (Scale/OldScale)*(self.view.center.x - OldCentroid.x);
        Center.y = NewCentroid.y + (Scale/OldScale)*(self.view.center.y - OldCentroid.y);
    }
    

    The view controller handles the events by assigning the new scale and center to the view in question. I noticed that other gesture recognizers tend to let the controller do some of the math, but here I tried to keep all of the math inside the recognizer.

    - (void)handlePixelTrack:(PixelTrackGestureRecognizer *)sender
    {
        sender.view.center = sender.Center;
        sender.view.transform = CGAffineTransformMakeScale(sender.Scale, sender.Scale);
    }