Tags: swift, image-manipulation

Invert blacks and whites while preserving colors


I have this PDF that I'm converting to a UIImage and rendering in a SwiftUI Image view:

https://static.avalanche.report/bulletins/2023-05-01/2023-04-30_15-00-00/EUREGIO_7aafd765-0541-4abf-92f0-efc18d6efbf7.pdf

I need to make this image suitable for the dark color scheme of my app, so I would like the actual map to become black/dark-gray, but keep the colors intact.
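
For context, the PDF page is rendered to a UIImage along these lines (a simplified sketch using PDFKit; the exact rendering code isn't the issue here):

  import PDFKit
  import UIKit

  // Simplified sketch: render the first page of the PDF into a UIImage.
  func renderPDFPage(at url: URL, scale: CGFloat = 2) -> UIImage? {
      guard let document = PDFDocument(url: url),
            let page = document.page(at: 0) else { return nil }

      let bounds = page.bounds(for: .mediaBox)
      let size = CGSize(width: bounds.width * scale, height: bounds.height * scale)

      let renderer = UIGraphicsImageRenderer(size: size)
      return renderer.image { context in
          // Fill with white so the PDF's transparent background doesn't stay clear.
          UIColor.white.setFill()
          context.fill(CGRect(origin: .zero, size: size))
          // PDF coordinates are flipped relative to UIKit, so flip the context.
          context.cgContext.translateBy(x: 0, y: size.height)
          context.cgContext.scaleBy(x: scale, y: -scale)
          page.draw(with: .mediaBox, to: context.cgContext)
      }
  }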

If I overlay two images in a ZStack and play with blend modes and .colorInvert(), I can get very close to the desired result:

  ZStack {
      Image(uiImage: self.mapImage!)
          .resizable()
          .scaledToFill()
          .colorInvert()
          .saturation(0)
          .brightness(0.2)
      Image(uiImage: self.mapImage!)
          .resizable()
          .scaledToFill()
          .blendMode(.color)
  }

[Screenshot of the ZStack result]

But as you can see, the yellow becomes a dark green.

While playing with color adjustments and blend modes in Affinity Designer, I found a combination that produces quite a good result, except for some text that becomes hard to read:

  • layer 1: "Darken" blend mode
  • layer 2: invert + black & white + "Add" blend mode
  • layer 3: invert + black & white

[Screenshot of the Affinity Designer result]

My problem is that I can't find an "Add" blend mode in SwiftUI; there are .plusDarker and .plusLighter, but they do completely different things.

How can I achieve the result with Swift/SwiftUI?


Solution

  • What you're describing is inverting colors below some saturation threshold while preserving all other colors. The nicest way to achieve this is with a small Metal .colorEffect shader. The code is simple and easy to tune as needed.

    Here's the shader:

    #include <metal_stdlib>
    using namespace metal;
    
    // From Metal Shading Language Specification section 7.7.7
    // https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf#//apple_ref/doc/uid/TP40014364
    METAL_FUNC half srgb_to_linear(half c) {
        return (c <= 0.04045) ? c / 12.92 : powr((c + 0.055) / 1.055, 2.4);
    }
    METAL_FUNC half linear_to_srgb(half c) {
        if (isnan(c)) { return 0.0; }
        if (c > 1.0) { return 1.0; }
        return (c < 0.0031308f) ? (12.92f * c) : (1.055 * powr(c, half(1./2.4)) - 0.055);
    }
    
    // This is the actual shader. It accepts a threshold between 0 and 1.
    // Typical values will be less than 0.5.
    [[ stitchable ]] half4 invertWithThreshold(float2 position, half4 currentColor,
                                               float threshold) {
    
        // Determine the saturation by comparing the range of colors.
        // If the most intense channel is very close to the least intense channel,
        // then this color is close to gray.
        half minColor = min(currentColor.r, min(currentColor.g, currentColor.b));
        half maxColor = max(currentColor.r, max(currentColor.g, currentColor.b));
    
        half saturation = maxColor == 0 ? 0 : (maxColor - minColor) / maxColor;
    
        if (saturation < threshold) {
            // Invert the color, setting alpha to 1
            // Handling transparency requires first unmultiplying the colors. Rather than mess with that,
            // this code assumes that everything is opaque.
            return half4(linear_to_srgb(1 - srgb_to_linear(currentColor.r)),
                         linear_to_srgb(1 - srgb_to_linear(currentColor.g)),
                         linear_to_srgb(1 - srgb_to_linear(currentColor.b)),
                         1);
        } else {
            return currentColor;
        }
    }
    

    You can drop this into a .metal file in your Xcode project, and it will automatically become available through ShaderLibrary.

    To use it, apply the .colorEffect modifier:

    Image(uiImage: self.mapImage)
        .resizable()
        .scaledToFit()
        .colorEffect(ShaderLibrary.invertWithThreshold(.float(0.1)))
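
    Since the inverted look is only wanted for your app's dark appearance, you can also gate the effect on the environment's color scheme (a sketch; MapView and its stored image are just illustrative names):

    import SwiftUI
    import UIKit

    struct MapView: View {
        @Environment(\.colorScheme) private var colorScheme
        let mapImage: UIImage

        var body: some View {
            Image(uiImage: mapImage)
                .resizable()
                .scaledToFit()
                // Only invert in dark mode; light mode shows the original map.
                .colorEffect(ShaderLibrary.invertWithThreshold(.float(0.1)),
                             isEnabled: colorScheme == .dark)
        }
    }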
    

    The result will be:

    [Screenshot: inverted map, preserving saturated colors]

    This gets a bit messy in the "T" in "Tn". I believe that's mostly due to interpolation. Zooming the image up causes a bit of dithering, which is very bad for this algorithm. This should work much better if you can avoid scaling the image.

    If you need to scale the image and still keep crisp lines, you can explore a .layerEffect. Layer effects can sample nearby pixels, which you could use to correct for dithering: if most of the pixels surrounding a given pixel are almost white, that pixel should probably be white too.
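
    A sketch of how the Swift side of that might look (invertPreservingEdges is a hypothetical layer-effect shader, not defined above; its Metal entry point would take a SwiftUI::Layer parameter and call layer.sample() on neighboring positions before deciding whether to invert):

    Image(uiImage: self.mapImage)
        .resizable()
        .scaledToFit()
        // maxSampleOffset tells SwiftUI how far from the current position
        // the shader is allowed to sample neighboring pixels.
        .layerEffect(ShaderLibrary.invertPreservingEdges(.float(0.1)),
                     maxSampleOffset: CGSize(width: 2, height: 2))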