Tags: iphone, ios, ipad, core-graphics, quartz-graphics

CGContextDrawImage + CGContextClipToMask performance


I am using CGContextDrawImage and am not happy with the performance.

Here is my situation:

I have a CGImage that was created from a CGBitmapContext with kCGBitmapByteOrder32Big and kCGImageAlphaPremultipliedLast, and contains RGBA pixels. Size is 64x64 pixels.

I have another CGBitmapContext created with kCGBitmapByteOrder32Big and kCGImageAlphaPremultipliedLast, but this bitmap context is much larger, in my case 1200x1600 pixels.
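Roughly, the setup looks something like this (an illustrative sketch only, not my actual code):

    #include <CoreGraphics/CoreGraphics.h>

    // Illustrative setup: a 64x64 RGBA stamp image and a larger 1200x1600
    // canvas context, both big-endian with premultiplied-last alpha.
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo info = kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast;

    // Small context used only to build the stamp image.
    CGContextRef stampCtx = CGBitmapContextCreate(NULL, 64, 64, 8, 64 * 4, rgb, info);
    // ... draw the stamp artwork into stampCtx ...
    CGImageRef stampImage = CGBitmapContextCreateImage(stampCtx);

    // Large canvas context that the user paints into.
    CGContextRef canvasCtx = CGBitmapContextCreate(NULL, 1200, 1600, 8, 1200 * 4, rgb, info);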

I am trying to stamp the image along the path drawn by the user (using CGContextDrawImage with blend mode normal). Before each stamp, I call CGContextClipToMask so that the draw is clipped to a mask chosen by the user (for example a circle, a star, a triangle, etc.). I can't seem to stamp more than 5 to 10 times per second before the entire user interface becomes totally unresponsive on my iPhone 4S; unless I drag my finger really slowly, rendering slows down terribly.
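In outline, each stamp along the path does something like this (again just an illustrative sketch, building on the setup above):

    // For every point along the user's path, re-apply the mask and draw the stamp.
    void stampAtPoint(CGContextRef canvasCtx, CGImageRef stampImage,
                      CGImageRef maskImage, CGPoint p)
    {
        CGRect canvasRect = CGRectMake(0, 0, 1200, 1600);
        CGRect stampRect  = CGRectMake(p.x - 32, p.y - 32, 64, 64);

        CGContextSetBlendMode(canvasCtx, kCGBlendModeNormal);
        CGContextClipToMask(canvasCtx, canvasRect, maskImage);   // user-chosen shape
        CGContextDrawImage(canvasCtx, stampRect, stampImage);    // this call dominates the profile
    }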

I can increase the spacing between stamps, but things look bad with a spacing much greater than 1 or 2 pixels, so I want to keep the spacing between stamps at about 1 pixel.

I've already profiled my app, and 90% of the CPU time is spent in Apple's code under the CGContextDrawImage call. I've also read all of Apple's documentation on Quartz2D performance.

Is there anything else I could try using Core Graphics / Quartz2D to make this faster? I would prefer not to drop down to OpenGL if possible, but there may be no other option...

If I comment out the line that applies the mask, performance is decent.

Oh, and due to company policy, I cannot post code, sorry...


Solution

  • This was caused by stacking clip masks on the same context. Saving and restoring the graphics state around each mask makes performance much better.
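A minimal sketch of the fix, assuming a clip-then-stamp routine like the one described in the question (names are illustrative):

    // Wrap each clip + draw in a save/restore pair so the clip masks don't
    // accumulate (intersect) on the context from one stamp to the next.
    CGContextSaveGState(canvasCtx);
    CGContextClipToMask(canvasCtx, canvasRect, maskImage);
    CGContextDrawImage(canvasCtx, stampRect, stampImage);
    CGContextRestoreGState(canvasCtx);   // discards the clip before the next stamp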