Tags: iphone, ios, ipad, mask, quartz-2d

How does CGContextClipToMask work internally?


I am trying to replicate the behavior of CGContextClipToMask on iOS, without any luck so far. Does anyone know how CGContextClipToMask works internally? I have read the documentation, which says it simply multiplies the image alpha value by the mask alpha value, so that is what I am doing in my custom function. However, when I draw the resulting image onto a CGContext multiple times with normal blending, the result gets darker and darker, whereas with CGContextClipToMask the result is correct and does not darken.

My theory is that CGContextClipToMask somehow uses the destination context, in addition to the image and mask, to produce a correct result, but I just don't know enough about Core Graphics to be sure.
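
For reference, this is roughly the behavior I am trying to replicate with my custom function (context, imageRect, maskImage, and sourceImage are placeholder names):

    CGContextSaveGState(context);
    // Restrict subsequent drawing to the area allowed by the mask.
    CGContextClipToMask(context, imageRect, maskImage);
    // The source image only shows through where the mask permits it.
    CGContextDrawImage(context, imageRect, sourceImage);
    CGContextRestoreGState(context);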

I've also read this question:

How to get the real RGBA or ARGB color values without premultiplied alpha?

and it's possible I am running into this problem, but then how does CGContextClipToMask get around the problem of precision loss with 8-bit alpha?
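
To make the precision issue concrete, here is a minimal illustration (the values are arbitrary): once an 8-bit channel has been premultiplied by an 8-bit alpha with truncation, the original value cannot always be recovered exactly.

    #include <stdio.h>

    int main(void) {
        // Premultiply an 8-bit channel by an 8-bit alpha with truncation,
        // then try to recover the original value.
        unsigned int r = 200, a = 100;
        unsigned int premult   = (r * a) / 255;        // 78  (78.43 truncated)
        unsigned int recovered = (premult * 255) / a;  // 198, not the original 200
        printf("original %u, premultiplied %u, recovered %u\n", r, premult, recovered);
        return 0;
    }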


Solution

  • I found the problem. When multiplying by the mask, I had to call ceilf on the RGB values, like so:

    // Scale factor derived from the 8-bit mask value (0.0 - 1.0).
    float alphaPercent = (float)maskAlpha / 255.0f;
    // Store the mask value in the destination pixel's alpha channel.
    pixelDest->A = maskAlpha;
    // Premultiply each channel by the mask, rounding up with ceilf instead of
    // letting the float-to-int conversion truncate.
    pixelDest->R = ceilf((float)pixelDest->R * alphaPercent);
    pixelDest->G = ceilf((float)pixelDest->G * alphaPercent);
    pixelDest->B = ceilf((float)pixelDest->B * alphaPercent);
    

    Amazingly, this solves the problem...
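
    For context, a fuller sketch of the per-pixel loop with the ceilf rounding might look something like this; the Pixel struct, the RGBA8888 layout, and the applyMask helper are assumptions for illustration, not the exact code behind the answer:

        #include <math.h>
        #include <stddef.h>
        #include <stdint.h>

        typedef struct {
            uint8_t R, G, B, A;   // assumed RGBA8888 byte order
        } Pixel;

        // Multiply each pixel's channels by the corresponding 8-bit mask value,
        // rounding up with ceilf as in the fix above.
        static void applyMask(Pixel *pixels, const uint8_t *mask, size_t count) {
            for (size_t i = 0; i < count; i++) {
                Pixel *pixelDest = &pixels[i];
                uint8_t maskAlpha = mask[i];
                float alphaPercent = (float)maskAlpha / 255.0f;
                pixelDest->A = maskAlpha;
                pixelDest->R = (uint8_t)ceilf((float)pixelDest->R * alphaPercent);
                pixelDest->G = (uint8_t)ceilf((float)pixelDest->G * alphaPercent);
                pixelDest->B = (uint8_t)ceilf((float)pixelDest->B * alphaPercent);
            }
        }

    Presumably the ceilf matters because truncating the premultiplied channels biases them slightly low, and that error compounds each time the result is composited with normal blending; rounding up avoids the drift.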