Tags: ios, objective-c, math, transformation, gpuimage

Translate GPUImage position with GPUImageTransformFilter and iOS Pan Gesture?


I'm having an issue with the GPUImage transform filter. I'm using a pan gesture recognizer to reposition the image. The code I have works, but the image moves at about half the speed of the drag. If I double the coordinates in my CGAffineTransform newTransform, the image drags as expected; however, when I start a new panning gesture, the image jumps to a point about twice its distance from center. Perhaps my math is off. Any ideas? Or can anyone suggest a better solution than what I have here?

- (void)move:(UIPanGestureRecognizer *)sender {
    // Translated CGPoint from GPUImageView
    CGPoint translation = [sender translationInView:self.primaryImageView];
    // Current transform from GPUImageTransformFilter
    CGAffineTransform currentTransform = self.transFilter.affineTransform;
    // Size of GPUImageView bounds for later calculations
    CGSize size = self.primaryImageView.bounds.size;

    if ([sender state] == UIGestureRecognizerStateBegan) {
        // Set a beginning CGPoint 
        // Multiply GPUImageView bounds by current transform to get
        // the translated coordinates in pixels.
        self.beginPoint = CGPointMake(size.width*currentTransform.tx, size.height*currentTransform.ty);
    }

    // Offset the beginning point by the gesture translation
    CGPoint updatedPoint = CGPointMake(self.beginPoint.x+translation.x, self.beginPoint.y+translation.y);

    // Create a new transform translation.
    // Divide updated coordinates by GPUImageView bounds to get
    // a percentage value (-1 to 1)
    CGAffineTransform newTransform = CGAffineTransformMakeTranslation(updatedPoint.x/(size.width), updatedPoint.y/(size.height));

    // Apply new transform to filter and process.
    [self.transFilter setAffineTransform:newTransform];
    [self.sourcePicture processImage];
}
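
For context, here is a rough sketch of how the GPUImage pipeline and the gesture could be wired up. The property names match the code above; the setup method itself is illustrative rather than copied from my project, and the import path may vary depending on how GPUImage is integrated.

#import <UIKit/UIKit.h>
#import "GPUImage.h" // CocoaPods-style import; adjust for your integration

- (void)setupImagePipelineWithImage:(UIImage *)image {
    // Source image and the transform filter used by -move: above.
    self.sourcePicture = [[GPUImagePicture alloc] initWithImage:image];
    self.transFilter = [[GPUImageTransformFilter alloc] init];

    // Chain: picture -> transform filter -> on-screen GPUImageView.
    [self.sourcePicture addTarget:self.transFilter];
    [self.transFilter addTarget:self.primaryImageView];
    [self.sourcePicture processImage];

    // Drive -move: from a pan gesture on the image view.
    UIPanGestureRecognizer *pan =
        [[UIPanGestureRecognizer alloc] initWithTarget:self
                                                action:@selector(move:)];
    [self.primaryImageView addGestureRecognizer:pan];
}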

Solution

  • As @BradLarson suggested, I have created a solution using CGAffineTransformTranslate(). I also discovered that the translation calculation has to factor in the transform's scale to reposition the image accurately. Here is my solution:

    - (void)move:(UIPanGestureRecognizer *)sender {
        CGPoint translatedPoint = [sender translationInView:self.primaryImageView];
        if ([sender state] == UIGestureRecognizerStateBegan) {
            self.lastPoint = translatedPoint;
        }
    
        CGSize size = self.primaryImageView.bounds.size;
        // Subtract the last point from the translated point to get the difference.
        CGPoint updatedPoint = CGPointMake(translatedPoint.x-self.lastPoint.x, translatedPoint.y-self.lastPoint.y);
        CGAffineTransform currentTransform = self.transFilter.affineTransform;
        // Divide updated point by the bounds to get the transform translate value.
        // Multiply transform value by the result of the offset factor divided
        // by the transform scale value.
        CGAffineTransform newTransform = CGAffineTransformTranslate(currentTransform, (updatedPoint.x/size.width)*(2/currentTransform.a), (updatedPoint.y/size.height)*(2/currentTransform.a));
    
        [self.transFilter setAffineTransform:newTransform];
        [self.sourcePicture processImage];
        self.lastPoint = translatedPoint;
    }
    

    I have set the offset factor to 2. I'm still not sure why this offset is necessary; my guess is that it has something to do with the Retina screen, although I have not tested this on a non-Retina device.
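
    One possible explanation, though this is an assumption I have not verified against the GPUImage source: the transform filter appears to work in normalized coordinates running from -1 to 1 across the view, i.e. a span of 2 units, which would account for both the half-speed drag in the original code and the offset factor of 2 here. Under that assumption, the conversion could be written with the span as a named constant instead of a magic number:

    #import <CoreGraphics/CoreGraphics.h>

    // Assumption: the filter's coordinate space spans -1..1 (2 units)
    // across each axis of the view.
    static const CGFloat kNormalizedSpan = 2.0f;

    static CGAffineTransform TranslatedTransform(CGAffineTransform current,
                                                 CGPoint delta,
                                                 CGSize viewSize) {
        // Convert the pixel delta to normalized units, compensating for the
        // current scale (current.a) so the image tracks the finger 1:1.
        CGFloat dx = (delta.x / viewSize.width)  * (kNormalizedSpan / current.a);
        CGFloat dy = (delta.y / viewSize.height) * (kNormalizedSpan / current.a);
        return CGAffineTransformTranslate(current, dx, dy);
    }

    With a helper like this, the newTransform line in -move: above would become TranslatedTransform(currentTransform, updatedPoint, size).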