Tags: ios · uiimageview · core-image · face-detection

Translating CIDetector (face detection) results into UIImageView coordinates


I've been struggling to translate the CIDetector (face detection) results into coordinates relative to the UIImageView displaying the image, so that I can draw them using CGPaths.

I've looked at all the questions here and all the tutorials I could find, and most of them use small images that are not scaled when displayed in a UIImageView. The problem I am having is with large images that are scaled using aspectFit when displayed in a UIImageView, and with determining the correct scale + translation values.

I am getting inconsistent results when testing with images of different sizes/aspect ratios, so I think my routine is flawed. I've been struggling with this for a while, so if anyone has some tips or can spot what I am doing wrong, that would be a great help.

What I am doing:

  • get the face coordinates
  • use the frameForImage routine below (found here on SO) to get the scale and bounds of the UIImageView image
  • create transform for scale + translation
  • apply transform to the CIDetector result

// my routine for determining transform values

NSDictionary *data = [self frameForImage:self.imageView.image inImageViewAspectFit:self.imageView];

CGRect scaledImageBounds = CGRectFromString([data objectForKey:@"bounds"]);
float scale = [[data objectForKey:@"scale"] floatValue];

// negative y-scale is intended to flip between Core Image's bottom-left
// origin and UIKit's top-left origin
CGAffineTransform transform = CGAffineTransformMakeScale(scale, -scale);

// translate so the origin lands on the aspect-fit bounds of the scaled image
transform = CGAffineTransformTranslate(transform,
          scaledImageBounds.origin.x / scale,
          -(scaledImageBounds.origin.y / scale + scaledImageBounds.size.height / scale));

The CIDetector results are then transformed using:

     mouthPosition = CGPointApplyAffineTransform(mouthPosition, transform);
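
For reference, this is the point-by-point mapping I believe the transform above should be equivalent to. It's only a sketch, assuming aspectFit and an image with no EXIF rotation, and the helper name viewPointForCIPoint: is mine:

// hedged sketch: map a CIDetector point into UIImageView coordinates by
// flipping y in image space (Core Image is bottom-left origin, UIKit is
// top-left), then scaling and offsetting into the aspect-fit bounds
- (CGPoint)viewPointForCIPoint:(CGPoint)p
{
    NSDictionary *data = [self frameForImage:self.imageView.image
                        inImageViewAspectFit:self.imageView];
    CGRect bounds = CGRectFromString([data objectForKey:@"bounds"]);
    float scale = [[data objectForKey:@"scale"] floatValue];
    CGSize imageSize = self.imageView.image.size;

    CGPoint flipped = CGPointMake(p.x, imageSize.height - p.y);
    return CGPointMake(bounds.origin.x + flipped.x * scale,
                       bounds.origin.y + flipped.y * scale);
}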

// example of bad result: scale seems incorrect


// routine below, found here on SO, for determining the bounds of an image scaled in a UIImageView using aspectFit

- (NSDictionary *)frameForImage:(UIImage *)image inImageViewAspectFit:(UIImageView *)myImageView
{
    float imageRatio = image.size.width / image.size.height;
    float viewRatio = myImageView.frame.size.width / myImageView.frame.size.height;

    float scale;
    CGRect boundingRect;
    if (imageRatio < viewRatio)
    {
        // image is proportionally taller than the view: fit to the view's
        // height, centered horizontally
        scale = myImageView.frame.size.height / image.size.height;
        float width = scale * image.size.width;
        float topLeftX = (myImageView.frame.size.width - width) * 0.5;
        boundingRect = CGRectMake(topLeftX, 0, width, myImageView.frame.size.height);
    }
    else
    {
        // image is proportionally wider than the view: fit to the view's
        // width, centered vertically
        scale = myImageView.frame.size.width / image.size.width;
        float height = scale * image.size.height;
        float topLeftY = (myImageView.frame.size.height - height) * 0.5;
        boundingRect = CGRectMake(0, topLeftY, myImageView.frame.size.width, height);
    }

    NSDictionary *data = [NSDictionary dictionaryWithObjectsAndKeys:
                           [NSNumber numberWithFloat:scale], @"scale",
                           NSStringFromCGRect(boundingRect), @"bounds",
                           nil];

    return data;
}
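
As an aside, AVFoundation provides a helper that computes the same aspect-fit rectangle in a single call, which the branching above can be cross-checked against (the fitRect/fitScale names are mine):

#import <AVFoundation/AVFoundation.h>

// AVFoundation computes the aspect-fit rect directly; the scale follows from it
CGRect fitRect = AVMakeRectWithAspectRatioInsideRect(image.size, myImageView.bounds);
float fitScale = fitRect.size.width / image.size.width;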

Solution

  • I completely understand what you are trying to do, but let me offer you a different way to achieve what you want.

    • you have an oversized image
    • you know the size of the imageView
    • ask the image for its CGImage
    • determine a 'scale' factor, which is the imageView width divided by the image width
    • multiply this value by your image height, then subtract the result from the imageView height to get the "empty" height in the imageView; let's call this 'fillHeight'
    • divide 'fillHeight' by 2 and round to get the 'offset' value used below
    • using the context provided by UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 0), paint the background whatever color you want, then draw your CGImage

      CGContextDrawImage(context, CGRectMake(0, offset, imageView.bounds.size.width, rintf(image.size.height * scale)), [image CGImage]);

    • get this new image using:

      UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
      UIGraphicsEndImageContext();
      return image;

    • set the image: imageView.image = image;

    Now you can map back to your image exactly, since you know the EXACT scaling ratio and offsets; a consolidated sketch of these steps follows.
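
    Pulling those steps together, here is a minimal sketch (the method name is mine; I've also used -[UIImage drawInRect:] rather than CGContextDrawImage, since the latter draws a CGImage vertically flipped in a UIKit image context):

      // hedged sketch: pad the image to the imageView's size so the mapping
      // back from image coordinates to view coordinates is exact
      - (UIImage *)paddedImage:(UIImage *)image forImageView:(UIImageView *)imageView
      {
          CGSize viewSize = imageView.bounds.size;
          CGFloat scale = viewSize.width / image.size.width;    // fit to width
          CGFloat fillHeight = viewSize.height - image.size.height * scale;
          CGFloat offset = rintf(fillHeight / 2);               // empty band above/below

          UIGraphicsBeginImageContextWithOptions(viewSize, NO, 0);

          [[UIColor blackColor] setFill];                       // background color
          UIRectFill((CGRect){CGPointZero, viewSize});

          // draw the image centered vertically at the known scale
          [image drawInRect:CGRectMake(0, offset, viewSize.width,
                                       rintf(image.size.height * scale))];

          UIImage *padded = UIGraphicsGetImageFromCurrentImageContext();
          UIGraphicsEndImageContext();
          return padded;
      }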