
CGContext.draw(), CGContext.makeImage() giving me double the intended resolution


I've written an extension to render [CGImage]s into a big composite image. The bug I have is that the resulting image is double the intended resolution. Is this a bytesPerRow issue? Or something else?

public extension Array where Element == CGImage {
    func render(cols: Int) -> CGImage? {
        guard count > 0 else { return nil }

        var maxWidth: Int = 0
        var totalHeight: Int = 0

        var currentArrayIndex = 0

        // First pass: measure rows of `cols` images to get the composite
        // size (widest row, sum of each row's tallest image).
        while currentArrayIndex < count {
            var currentRowWidth = 0
            var maxRowHeight = 0
            // `where` guards against a final, partially filled row
            // when count is not an exact multiple of cols.
            for _ in 0..<cols where currentArrayIndex < count {
                currentRowWidth += self[currentArrayIndex].width
                maxRowHeight = max(self[currentArrayIndex].height, maxRowHeight)
                currentArrayIndex += 1
            }
            maxWidth = max(maxWidth, currentRowWidth)
            totalHeight += maxRowHeight
        }

        let size = CGSize(width: maxWidth, height: totalHeight)
        UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }

        // Second pass: draw left to right, wrapping to a new row
        // once the current row is full.
        var x: Int = 0
        var y: Int = 0

        var rowMaxHeight = 0
        for image in self {

            context.saveGState()
            // Flip vertically around this image's height so the CGImage
            // isn't drawn upside-down in the UIKit-oriented context.
            context.translateBy(x: 0, y: CGFloat(image.height))
            context.scaleBy(x: 1.0, y: -1.0)
            context.draw(image, in: CGRect(x: x, y: y, width: image.width, height: image.height))
            context.restoreGState()

            rowMaxHeight = max(image.height, rowMaxHeight)
            x += image.width
            if x >= Int(size.width) {
                x = 0
                y -= rowMaxHeight
                rowMaxHeight = 0 // reset so the next row measures its own height
            }
        }

        let cgImage = context.makeImage()
        UIGraphicsEndImageContext()
        return cgImage
    }

    private func max(_ one: Int, _ two: Int) -> Int {
        if one > two { return one }
        return two
    }
}
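
For context, here is roughly how the extension gets called (a minimal sketch; loadTiles() is a hypothetical stand-in for my real image loading):

let tiles: [CGImage] = loadTiles() // hypothetical stand-in for real CGImages
if let composite = tiles.render(cols: 4) {
    // With eight 100x100 tiles in 4 columns I expect 400x200 pixels,
    // but composite.width / composite.height come back doubled.
    print(composite.width, composite.height)
}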


Solution

  • UIGraphicsBeginImageContextWithOptions(size, false, 0.0)

    The last parameter is the scale, and you are passing 0.0.

    "If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen."
    (from the documentation)

    In Objective-C terms, that means CGFloat scale = [[UIScreen mainScreen] scale];

    On Retina displays the scale factor may be 2.0 or 3.0, so one point is represented by four or nine pixels, respectively. That is exactly why makeImage() hands you a bitmap at double (or triple) the point dimensions you computed.

    Try

    UIGraphicsBeginImageContextWithOptions(size, false, 1.0)

    and check again; the sketch below shows the difference.
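
    To make the effect visible, here is a minimal sketch (the pixelSize helper and the 100x100 size are made up for illustration, not part of the original code) that prints the pixel dimensions makeImage() produces at each scale:

    import UIKit

    // Hypothetical helper: builds a bitmap context at the given scale and
    // reports the pixel size of the image that makeImage() returns.
    func pixelSize(for size: CGSize, scale: CGFloat) -> (width: Int, height: Int)? {
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        defer { UIGraphicsEndImageContext() }
        guard let image = UIGraphicsGetCurrentContext()?.makeImage() else { return nil }
        return (image.width, image.height)
    }

    let size = CGSize(width: 100, height: 100)

    // scale 0.0 adopts the main screen's scale; on a 2x device this
    // prints (200, 200) -- the "double resolution" from the question.
    print(pixelSize(for: size, scale: 0.0) as Any)

    // scale 1.0 pins one point to one pixel; this prints (100, 100).
    print(pixelSize(for: size, scale: 1.0) as Any)

    If you do want Retina-quality output instead, keep the screen scale and treat the resulting CGImage's width and height as pixels rather than points.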