I have a UIImage extension that can change the color of its image, which I pulled off somewhere. The problem is that it downgrades the image's resolution after coloring it. I've seen other answers based on this, but I'm not sure how to adapt them to render a Retina image in this case:
extension UIImage {
    func maskWithColor(color: UIColor) -> UIImage? {
        let maskImage = cgImage!
        let width = size.width
        let height = size.height
        let bounds = CGRect(x: 0, y: 0, width: width, height: height)
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
        let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)!
        context.clip(to: bounds, mask: maskImage)
        context.setFillColor(color.cgColor)
        context.fill(bounds)
        if let cgImage = context.makeImage() {
            let coloredImage = UIImage(cgImage: cgImage)
            return coloredImage
        } else {
            return nil
        }
    }
}
I've seen people using UIGraphicsBeginImageContextWithOptions and setting its scale to the main screen's, but I don't think that works when I'm creating the bitmap with the CGContext initializer directly.
I think you want:
let width = size.width * scale
let height = size.height * scale
and:
let coloredImage = UIImage(cgImage: cgImage, scale: scale, orientation: .up)
(You may need to use imageOrientation instead of .up.)
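Putting both changes together, the whole extension would look something like this. This is an untested sketch of the same approach: render the bitmap at pixel dimensions (points × scale), then tag the result with the original image's scale so UIKit treats it as point-sized again:

extension UIImage {
    func maskWithColor(color: UIColor) -> UIImage? {
        guard let maskImage = cgImage else { return nil }

        // Render at pixel dimensions so Retina detail is preserved.
        let width = Int(size.width * scale)
        let height = Int(size.height * scale)
        let bounds = CGRect(x: 0, y: 0, width: CGFloat(width), height: CGFloat(height))

        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
        guard let context = CGContext(data: nil,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: 0,
                                      space: colorSpace,
                                      bitmapInfo: bitmapInfo.rawValue) else { return nil }

        // Clip to the image's alpha channel, then flood-fill with the tint color.
        context.clip(to: bounds, mask: maskImage)
        context.setFillColor(color.cgColor)
        context.fill(bounds)

        guard let cgImage = context.makeImage() else { return nil }
        // Pass the scale (and original orientation) back so the UIImage
        // reports the same point size as the source image.
        return UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)
    }
}

I also swapped the force unwraps for guard lets here, since CGContext creation can fail; that part is just defensive style, not required for the Retina fix.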