Tags: ios, macos, swift, core-graphics, swift-playground

Blend mode kCGBlendModeMultiply is not correctly multiplying an image with a color


Goal: To tint an image by multiplying it with a color.

Assumptions: According to Apple's documentation, the blend mode kCGBlendModeMultiply "multiplies the source image samples with the background image samples." What I understand from this is: R = S * D for each of R, G, B and A. So if either of the two pixels in the multiplication has a RED value of 0.0, the resulting pixel's RED will be 0.0.
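
To make that reading concrete, here is a minimal sketch of the arithmetic I am assuming, with a hypothetical Pixel type (this is not a Core Graphics API, just the per-channel multiply I expect the blend mode to perform):

// Hypothetical per-channel multiply matching the assumption R = S * D
// for R, G, B and A. Illustrative only, not a Core Graphics call.
struct Pixel {
    var r, g, b, a: Double
}

func naiveMultiply(_ source: Pixel, _ destination: Pixel) -> Pixel {
    return Pixel(r: source.r * destination.r,
                 g: source.g * destination.g,
                 b: source.b * destination.b,
                 a: source.a * destination.a)
}

// An opaque white image pixel multiplied by black with an alpha of 0.5:
let imagePixel = Pixel(r: 1.0, g: 1.0, b: 1.0, a: 1.0)
let tintPixel  = Pixel(r: 0.0, g: 0.0, b: 0.0, a: 0.5)
naiveMultiply(tintPixel, imagePixel)   // Pixel(r: 0.0, g: 0.0, b: 0.0, a: 0.5)

Under this reading, every opaque pixel of the image would come out black with an alpha of 0.5.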

Results: It does not do that when the color's alpha is less than 1.0. If I use a color that is black with an alpha of 0.5, I would expect the result to be black with an alpha of 0.5 for all the pixels where the image had an alpha of 1.0.

Note: It seems to work alright when the color alpha is 1.0. If I use black with an alpha of 1.0, the resulting image is a solid black image, but that's not what I'm asking here.

Expected Results:

  1. The image's alpha should be 0.5 because that's the alpha of the color, yet none of the red background color shows through the image, except at the corners, because I use the image as a clipping mask.
  2. The image should be completely grey because the color has 0.0 values for R, G and B. When you multiply by 0, the result should be 0.

The code below is a copy/paste of what I did in the playground, and the image attached shows the original image, the tint color, the result image in an image view and the image view with a red background (to check transparency), in that order.

import UIKit

// Get the image, its size, blend mode and tint color
var image = UIImage(named: "outlook-checkmark");
var size = image!.size;
var blendMode = kCGBlendModeMultiply;
var tintColor = UIColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 0.5);

// Create the graphic context, a frame with the image size
// and get the CGImage.
UIGraphicsBeginImageContextWithOptions(size, false, 0.0);
var context : CGContextRef = UIGraphicsGetCurrentContext();
var aRect = CGRectMake(0, 0, size.width, size.height);
var cgImage = image!.CGImage;

// Converting a UIImage to a CGImage flips the image,
// so apply an upside-down translation
CGContextTranslateCTM(context, 0, size.height);
CGContextScaleCTM(context, 1.0, -1.0);

CGContextSetBlendMode(context, blendMode);
CGContextDrawImage(context, aRect, cgImage);

// Set the mask to only tint non-transparent pixels
CGContextClipToMask(context, aRect, cgImage);

CGContextSetFillColorWithColor(context, tintColor.CGColor);
CGContextFillRect(context, aRect);

var returnImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

var imageView = UIImageView(frame: CGRectMake(0, 0, 100, 100));
imageView.image = returnImage;
imageView.backgroundColor = UIColor.redColor();

[Image: the original image, the tint color, the result image in an image view, and the image view over a red background]
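
For reference, the same steps in current Swift syntax would look roughly like this; it is only a sketch mirroring the playground code above (assuming UIGraphicsImageRenderer and CGBlendMode.multiply), not a fix:

import UIKit

func tinted(_ image: UIImage, with tintColor: UIColor,
            blendMode: CGBlendMode = .multiply) -> UIImage {
    let rect = CGRect(origin: .zero, size: image.size)
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { rendererContext in
        let context = rendererContext.cgContext

        // Drawing a CGImage uses the flipped Core Graphics coordinate
        // system, so flip the context first.
        context.translateBy(x: 0, y: image.size.height)
        context.scaleBy(x: 1.0, y: -1.0)

        context.setBlendMode(blendMode)
        if let cgImage = image.cgImage {
            context.draw(cgImage, in: rect)
            // Only tint the non-transparent pixels of the image.
            context.clip(to: rect, mask: cgImage)
        }
        context.setFillColor(tintColor.cgColor)
        context.fill(rect)
    }
}

let result = tinted(UIImage(named: "outlook-checkmark")!,
                    with: UIColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 0.5))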


Solution

  • I think your expectations regarding the multiply blend mode and the resulting alpha component are simply incorrect. I would not expect that blending onto an opaque image would make it transparent.

    The blend mode docs refer you to the PDF spec for the behavior of the non-Porter-Duff modes. It's fairly dense, but it looks to me like the result alpha is not subject to the blend mode at all. And if the background is opaque, the result is opaque.

    Section 7.2.7 of the spec has a summary of the compositing computations. For our purposes, we don't care about "shape", so all of the ƒ variables are 1.0 and alpha equals opacity (α = 𝑞). The spec says that α𝑟 = α𝑏 + α𝑠 − (α𝑏 × α𝑠). In your case, the backdrop alpha (α𝑏) is 1.0, so:

    α𝑟 = α𝑏 + α𝑠 − (α𝑏 × α𝑠)
    α𝑟 = 1.0 + α𝑠 − (1.0 × α𝑠)
    α𝑟 = 1.0 + α𝑠 − α𝑠
    α𝑟 = 1.0
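
    To sketch how I read the spec's full computation for one colour channel (not just the alpha), with the multiply blend function B(C𝑏, C𝑠) = C𝑏 × C𝑠 (the names below are illustrative, not a Core Graphics API):

    // Sketch of the spec's compositing for a single colour channel,
    // assuming the multiply blend function B(Cb, Cs) = Cb * Cs.
    func multiplyComposite(backdrop cb: Double, alphaB: Double,
                           source cs: Double, alphaS: Double) -> (color: Double, alpha: Double) {
        // The result alpha is plain "over" compositing; the blend mode
        // never touches it.
        let alphaR = alphaB + alphaS - alphaB * alphaS
        // Weighted mix of the backdrop, the raw source and the blended value.
        let premultiplied = (1 - alphaS) * alphaB * cb
                          + (1 - alphaB) * alphaS * cs
                          + alphaB * alphaS * (cb * cs)
        return (alphaR == 0 ? 0 : premultiplied / alphaR, alphaR)
    }

    // Opaque white backdrop, black tint at alpha 0.5:
    multiplyComposite(backdrop: 1.0, alphaB: 1.0, source: 0.0, alphaS: 0.5)
    // -> (color: 0.5, alpha: 1.0)

    So with your black tint at 0.5 alpha over an opaque pixel, the result stays fully opaque and each colour channel only moves halfway toward black, rather than becoming the half-transparent black you expected.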