Tags: ios, image, optimization, retina-display, ipad-3

Optimizing Image Drawing for iPad 3


I am trying to find the most optimized way to draw images in iOS on the iPad 3. I am generating a reflection for a third-party version of coverflow that I am implementing in my app. The reflection is created on an NSOperationQueue and then added via a UIImageView on the main thread. Because the coverflow part is already using resources for the animations as you scroll through the images, each new image that is added causes a bit of a "pop" in the scrolling, which makes the app feel laggy and glitchy. When testing on iPad 1 and 2, the animation is perfectly smooth and looks great.

How can I further optimize the drawing to avoid this? Any ideas are appreciated. I have been looking into "tiling" the reflection so that it presents a little of the reflection at a time, but I'm not sure what the best approach is.

Here is the drawing code:

    // Load the gradient mask and the source image from the bundle.
    UIImage *mask = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"3.0-Carousel-Ref-Mask.jpg" ofType:nil]];
    UIImage *image = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:self.name ofType:nil]];

    // Draw the image vertically flipped into an offscreen context to create the reflection.
    UIGraphicsBeginImageContextWithOptions(mask.size, NO, [[UIScreen mainScreen] scale]);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(ctx, 0.0, mask.size.height);
    CGContextScaleCTM(ctx, 1.f, -1.f);

    [image drawInRect:CGRectMake(0.f, -mask.size.height, image.size.width, image.size.height)];
    UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Build a masking image from the gradient and apply it to fade out the reflection.
    CGImageRef maskRef = mask.CGImage;
    CGImageRef maskCreate = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                              CGImageGetHeight(maskRef),
                                              CGImageGetBitsPerComponent(maskRef),
                                              CGImageGetBitsPerPixel(maskRef),
                                              CGImageGetBytesPerRow(maskRef),
                                              CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([flippedImage CGImage], maskCreate);

    CGImageRelease(maskCreate);

    UIImage *maskedImage = [UIImage imageWithCGImage:masked scale:[[UIScreen mainScreen] scale] orientation:UIImageOrientationUp];
    CGImageRelease(masked);

    // Hand the finished reflection back to the main thread for display.
    if (maskedImage) {
        [mainView performSelectorOnMainThread:@selector(imageDidLoad:)
                                   withObject:[NSArray arrayWithObjects:maskedImage, endView, nil]
                                waitUntilDone:YES];
    } else {
        NSLog(@"Unable to find sample image: %@", self.name);
    }

The mask is just a gradient PNG that I am using to mask the image. Also, if I just draw this offscreen but don't add it, there is hardly any lag. The lag comes from actually adding the image on the main thread.
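
For context, the reflection work is presumably queued something like the sketch below; the actual operation code isn't shown in the question, so the queue name and block structure here are hypothetical:

    NSOperationQueue *reflectionQueue = [[NSOperationQueue alloc] init]; // hypothetical name
    [reflectionQueue addOperationWithBlock:^{
        // ...the drawing code above runs here, off the main thread...
        // The finished image is then handed back to the main thread via
        // performSelectorOnMainThread:, which is where the visible "pop" occurs.
    }];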


Solution

  • So after spending a great deal of time researching this issue and trying out different approaches (and spending a good while with the Time Profiler in Instruments), I found that the lag came from the image being decoded on the main thread at the moment it was displayed. By decoding the image on the background thread with Core Graphics calls, I was able to cut that time in half. This still wasn't good enough.
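
    For reference, a minimal sketch of the force-decode idea, run on the background queue (the decodedImageFromImage: helper name and the exact bitmap parameters are my own, not from the original code):

        // Force the image to decompress now, on the background thread, by drawing
        // it into an offscreen bitmap context. The main thread then only has to
        // blit the already-decoded bitmap.
        - (UIImage *)decodedImageFromImage:(UIImage *)image {
            CGImageRef imageRef = image.CGImage;
            size_t width = CGImageGetWidth(imageRef);
            size_t height = CGImageGetHeight(imageRef);

            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            CGContextRef bitmapCtx = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                                           kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
            CGColorSpaceRelease(colorSpace);
            if (!bitmapCtx) return image;

            // Drawing into the bitmap context is what forces JPEG/PNG decompression.
            CGContextDrawImage(bitmapCtx, CGRectMake(0, 0, width, height), imageRef);
            CGImageRef decodedRef = CGBitmapContextCreateImage(bitmapCtx);
            CGContextRelease(bitmapCtx);

            UIImage *decoded = [UIImage imageWithCGImage:decodedRef
                                                   scale:image.scale
                                             orientation:image.imageOrientation];
            CGImageRelease(decodedRef);
            return decoded;
        }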

    I further found that the reflection created in my code was taking a long time to display because of its transparent (alpha) pixels. I therefore drew it in a context that I filled with solid black, and then made the view itself transparent instead of the image. This reduced the time spent on the main thread by 83%. Mission accomplished.
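
    A rough sketch of that second change, assuming the same flipped-drawing setup as in the question (the opaque context and the black fill are the essential parts; the view name and alpha value are illustrative):

        // Opaque context: the reflection bitmap carries no alpha channel at all.
        UIGraphicsBeginImageContextWithOptions(mask.size, YES, [[UIScreen mainScreen] scale]);
        CGContextRef ctx = UIGraphicsGetCurrentContext();

        // Fill with solid black first, so faded areas end up black instead of transparent.
        [[UIColor blackColor] setFill];
        CGContextFillRect(ctx, CGRectMake(0.f, 0.f, mask.size.width, mask.size.height));

        // Draw the flipped image on top, as in the original code.
        CGContextTranslateCTM(ctx, 0.0, mask.size.height);
        CGContextScaleCTM(ctx, 1.f, -1.f);
        [image drawInRect:CGRectMake(0.f, -mask.size.height, image.size.width, image.size.height)];

        UIImage *reflection = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        // The transparency now lives on the view, not in the bitmap's pixels.
        reflectionView.image = reflection;
        reflectionView.alpha = 0.5f; // illustrative value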