I'm trying to take some huge 32-bit PNGs that are actually just black with an alpha channel and present them in an iOS app in a memory-friendly way.
To do that, I've tried re-rendering the images into an "alpha-only" CGContext:
extension UIImage {
    func toLayer() -> CALayer? {
        guard let cgImage = self.cgImage else { return nil }
        let height = Int(self.size.height)
        let width = Int(self.size.width)
        let colorSpace = CGColorSpaceCreateDeviceGray()
        guard let context = CGContext(data: nil,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width,
                                      space: colorSpace,
                                      bitmapInfo: CGImageAlphaInfo.alphaOnly.rawValue) else { return nil }
        context.draw(cgImage, in: CGRect(origin: .zero, size: self.size))
        guard let image = context.makeImage() else { return nil }

        let layer = CALayer()
        layer.contents = image
        layer.contentsScale = self.scale
        return layer
    }
}
This is awesome! It takes memory usage down from 180 MB to about 18 MB, which is better than I expected.
The issue is that the black (now opaque) parts of the image are no longer black; they render as white.
It seems like it should be an easy fix to change the colour of the opaque bits, but I can't find any information about it online. Do you have an idea?
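For context, most of the saving is just bytes per pixel: a decoded BGRA bitmap costs 4 bytes per pixel, while an alpha-only bitmap costs 1. A rough sketch of the arithmetic (the dimensions below are illustrative, not my actual image):

```swift
// Rough memory math for a hypothetical 6000 × 7500 pixel image.
let width = 6000
let height = 7500
let bgraBytes = width * height * 4      // 8 bits × 4 channels per pixel
let alphaOnlyBytes = width * height * 1 // 8 bits × 1 channel per pixel

print(bgraBytes / 1_000_000)      // 180 (MB, decoded full-colour)
print(alphaOnlyBytes / 1_000_000) // 45 (MB, alpha-only)
```

The change in bytes per pixel alone accounts for a 4× reduction; the rest of what I measured presumably comes from other decoding overhead.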
I've managed to answer my own question. By using the alpha-only image as the contents of a mask layer on the output layer, we can set the layer's background colour to anything we want (including non-greyscale values) and still keep the memory benefits!
I've included the final code because I'm surely not the only one interested in this method:
extension UIImage {
    /// Re-renders the image into an 8-bit alpha-only bitmap and returns a layer
    /// whose background colour shows through wherever the image is opaque.
    func to8BitLayer(color: UIColor = .black) -> CALayer? {
        guard let cgImage = self.cgImage else { return nil }

        // The context must be sized in pixels, so multiply the point size by the scale.
        let height = Int(self.size.height * scale)
        let width = Int(self.size.width * scale)
        let colorSpace = CGColorSpaceCreateDeviceGray()
        guard let context = CGContext(data: nil,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width,
                                      space: colorSpace,
                                      bitmapInfo: CGImageAlphaInfo.alphaOnly.rawValue) else {
            print("Couldn't create CGContext")
            return nil
        }

        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        guard let image = context.makeImage() else {
            print("Couldn't create image from context")
            return nil
        }

        // Note that self.size is in points, not pixels, so the frame is not the
        // same size as the context; contentsScale bridges the difference.
        let frame = CGRect(origin: .zero, size: self.size)

        let mask = CALayer()
        mask.contents = image
        mask.contentsScale = scale
        mask.frame = frame

        let layer = CALayer()
        layer.backgroundColor = color.cgColor
        layer.mask = mask
        layer.contentsScale = scale
        layer.frame = frame
        return layer
    }
}
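For anyone who wants to drop this straight into a view, here's a minimal usage sketch (the view controller and the "hugeGlyph" asset name are hypothetical, not part of the code above):

```swift
import UIKit

class GlyphViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // "hugeGlyph" is a placeholder asset name; substitute your own image.
        if let image = UIImage(named: "hugeGlyph"),
           let layer = image.to8BitLayer(color: .systemRed) {
            // to8BitLayer already sets the layer's frame size in points;
            // we only position it here.
            layer.frame.origin = CGPoint(x: 20, y: 40)
            view.layer.addSublayer(layer)
        }
    }
}
```

Because the colour comes from `backgroundColor` rather than the bitmap, you can recolour the same alpha-only image any number of times without re-rendering it.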