I have been trying to crop my images in iOS for a while now. The code I have works well but isn't fast enough: when I supply it with around 20-25 images, it takes 7-10 seconds to process them. I have tried every possible way to fix this, but haven't been successful. I am not sure what I am missing.
- (UIImage *)squareImageWithImage:(UIImage *)image scaledToSize:(CGSize)targetSize {
    UIImage *sourceImage = image;
    UIImage *newImage = nil;
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);

    if (CGSizeEqualToSize(imageSize, targetSize) == NO)
    {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;

        // Use the larger factor so the image fills the target (aspect fill)
        if (widthFactor > heightFactor)
        {
            scaleFactor = widthFactor; // width fits exactly; height overflows and is cropped
        }
        else
        {
            scaleFactor = heightFactor; // height fits exactly; width overflows and is cropped
        }

        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;

        // center the image along the overflowing axis
        if (widthFactor > heightFactor)
        {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        }
        else if (widthFactor < heightFactor)
        {
            thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
        }
    }

    UIGraphicsBeginImageContext(targetSize); // drawing outside targetSize is clipped, which crops

    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.origin = thumbnailPoint;
    thumbnailRect.size.width = scaledWidth;
    thumbnailRect.size.height = scaledHeight;

    [sourceImage drawInRect:thumbnailRect];
    newImage = UIGraphicsGetImageFromCurrentImageContext();
    if (newImage == nil)
    {
        NSLog(@"could not scale image");
    }

    // pop the context to get back to the default
    UIGraphicsEndImageContext();
    return newImage;
}
Your original question did not state the problem correctly, but now it does: the reason these operations take so long is the number of CPU cycles needed to scale an image (not to crop it, which is much simpler and faster). When scaling, the system has to blend some number of surrounding pixels to produce each output pixel, and that blending consumes a lot of CPU cycles. You can make this go faster with a combination of techniques, but there is no single answer.
1) Use blocks and dispatch these image operations on a concurrent dispatch queue to get parallelism. I believe the latest iPad has 4 cores that you can put to use that way. [UIGraphicsBeginImageContext is thread-safe.]
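A minimal sketch of fanning the batch out with GCD, assuming the squareImageWithImage:scaledToSize: method from your question lives on the same object (dispatch_apply splits the iterations across the available cores, and blocks the calling thread until they all finish, so call this off the main thread):

- (NSArray *)squareImages:(NSArray *)images scaledToSize:(CGSize)targetSize {
    NSUInteger count = images.count;
    NSMutableArray *results = [NSMutableArray arrayWithCapacity:count];
    for (NSUInteger i = 0; i < count; i++) {
        [results addObject:[NSNull null]]; // placeholders so we can fill by index
    }
    dispatch_apply(count,
                   dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                   ^(size_t i) {
        UIImage *scaled = [self squareImageWithImage:images[i]
                                        scaledToSize:targetSize];
        @synchronized (results) {
            results[i] = scaled; // serialize writes to the shared array
        }
    });
    return results;
}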
2) Get the context's CGContextRef and set the interpolation quality to the lowest setting:
UIGraphicsBeginImageContext(targetSize);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetInterpolationQuality(context, kCGInterpolationLow);
...
3) Cheat - don't scale except by powers of two. In this technique, you would determine the "best" power of two to shrink the image by, then expand the width and height to fit your target size. If you can use a power of two, you can use the CGImageRef from the UIImage, get the pixel pointer, and copy every other pixel / every other row, then create a smaller image really quickly (using CGImageCreate). It may not be as high quality as you would get by letting the system scale your image, but it will be faster. This is obviously a fair amount of code, but you can make the operations really fast this way.
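A hedged sketch of one halving step of that technique, assuming the source bitmap is 8-bit-per-channel RGBA (32 bits per pixel); a real implementation should check CGImageGetBitsPerPixel and the alpha info before trusting that layout:

// Halve an image by copying every other pixel of every other row.
static CGImageRef HalveImage(CGImageRef source) {
    size_t width  = CGImageGetWidth(source)  / 2;
    size_t height = CGImageGetHeight(source) / 2;
    size_t srcBytesPerRow = CGImageGetBytesPerRow(source);

    CFDataRef srcData = CGDataProviderCopyData(CGImageGetDataProvider(source));
    const uint8_t *srcBytes = CFDataGetBytePtr(srcData);

    size_t dstBytesPerRow = width * 4;
    uint32_t *dstPixels = malloc(dstBytesPerRow * height);
    for (size_t y = 0; y < height; y++) {
        const uint32_t *srcRow = (const uint32_t *)(srcBytes + 2 * y * srcBytesPerRow);
        for (size_t x = 0; x < width; x++) {
            dstPixels[y * width + x] = srcRow[2 * x]; // take every other pixel
        }
    }
    CFRelease(srcData);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(dstPixels, width, height, 8,
                                             dstBytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGImageRef result = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(dstPixels);
    return result; // caller is responsible for CGImageRelease
}

Apply it repeatedly until the image is within a factor of two of your target, then let the system do one final small scale.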
4) Redefine your task. Instead of trying to resize a group of images at once, change your app so that you only show one or two resized images at a time, and do the other image operations on a background queue while the user is looking at them. This is for completeness; I assume you already thought of it.
PS: if this works for you, no need for the bounty, help someone else instead.