Tags: ios, uiimage, core-graphics, cgimage

Pixel data of CGImage is incomplete when scaled with CGContext to small size


I have the following 512x512 PNG image of a white circle on a transparent background:

[Image: the 512x512 white circle on a transparent background]

It's impossible to see it here on Stack Overflow, so here it is again, displayed in a UIImageView with a black background:

[Image: the same circle displayed in a UIImageView with a black background]

I'm trying to scale the image down to different sizes using high-quality interpolation and antialiasing, then read the pixel data and continue to work with it.

The scaling is done using a regular CGContext:

private func scaleImageAntialiased(_ cgImage: CGImage, _ size: CGSize) -> CGImage {

    UIGraphicsBeginImageContext(size)
    guard let context = UIGraphicsGetCurrentContext() else { fatalError() }

    context.interpolationQuality = .high
    context.setShouldAntialias(true)

    context.draw(cgImage, in: CGRect(origin: .zero, size: size))

    guard let image = UIGraphicsGetImageFromCurrentImageContext()?.cgImage else { fatalError() }
    UIGraphicsEndImageContext() // balance UIGraphicsBeginImageContext
    return image
}

Then I read the pixel information using the dataProvider and CFDataGetBytePtr:

var image = UIImage(named: "Round_512")!.cgImage!

let width: size_t = image.width
let height: size_t = image.height

let cfData = image.dataProvider!.data!
let dataPtr = CFDataGetBytePtr(cfData)
let bytesPerPixel = 4
var data: [GLubyte] = Array(UnsafeBufferPointer(start: dataPtr, count: width * height * bytesPerPixel))
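
To make the implicit assumption visible: this indexing treats the buffer as tightly packed, i.e. exactly width * bytesPerPixel bytes per row. A minimal sketch of what that implies (the pixelAt helper is illustrative, not part of my actual code):

// Sketch only: assumes each row occupies exactly width * bytesPerPixel bytes.
func pixelAt(_ x: Int, _ y: Int) -> [GLubyte] {
    let offset = (y * width + x) * bytesPerPixel
    return Array(data[offset ..< offset + bytesPerPixel]) // 4 bytes; component order depends on the image's bitmapInfo
}

pixelAt(0, 0) // top-left pixel of the 512x512 source image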

Everything works fine until I try to set the new size to 4x4 pixels or smaller.

let size = 4
let newSize = CGSize(width: size, height: size)
image = scaleImageAntialiased(image, newSize) 

While the scaled image still displays as you would expect inside the UIImageView

[Image: the scaled 4x4 image displayed in the UIImageView]

the raw RGBA byte data is missing two rows of information:

[  3,   3,   3,   3,    75,  75,  75,  75,    74,  74,  74,  74,     3,   3,   3,   3,
   0,   0,   0,   0,     0,   0,   0,   0,     0,   0,   0,   0,     0,   0,   0,   0,
  75,  75,  75,  75,   254, 254, 254, 254,   254, 254, 254, 254,    74,  74,  74,  74,
   0,   0,   0,   0,     0,   0,   0,   0,     0,   0,   0,   0,     0,   0,   0,   0]

(I formatted the output of print(data) above to make this easier to read.) As you can see, the second and fourth rows of bytes are all 0 instead of mirroring the values of the first and third rows.

This problem only happens if I first scale the image using a CGContext and then try to read it with the above approach. If I take the scaled image, save it to a file, and then read the pixel information from that 4x4 file (using the same approach), I get the expected result.
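
For reference, a minimal sketch of that save-and-reload check (the temporary file name is illustrative; the pixels are read exactly as above):

// Sketch of the save-then-reload control described above; the file name is illustrative.
let fileURL = FileManager.default.temporaryDirectory.appendingPathComponent("Scaled_4x4.png")
try! UIImagePNGRepresentation(UIImage(cgImage: image))!.write(to: fileURL)

let reloaded = UIImage(contentsOfFile: fileURL.path)!.cgImage!
let reloadedPtr = CFDataGetBytePtr(reloaded.dataProvider!.data!)
let reloadedData: [GLubyte] = Array(UnsafeBufferPointer(start: reloadedPtr,
                                                        count: reloaded.width * reloaded.height * 4))
print(reloadedData) // all four rows now contain values, as described above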

So the scaling itself works. The reading of pixels works as well. Only the combination doesn't.

Can anybody explain why that is the case?

For completeness, here's a download link for the 512px image and a full Playground file you can use to reproduce the issue:

import UIKit
import GLKit
import PlaygroundSupport

private func scaleImageAntialiased(_ cgImage: CGImage, _ size: CGSize) -> CGImage {

    UIGraphicsBeginImageContext(size)
    guard let context = UIGraphicsGetCurrentContext() else { fatalError() }

    context.interpolationQuality = .high
    context.setShouldAntialias(true)

    context.draw(cgImage, in: CGRect(origin: .zero, size: size))

    guard let image = UIGraphicsGetImageFromCurrentImageContext()?.cgImage else { fatalError() }
    UIGraphicsEndImageContext() // balance UIGraphicsBeginImageContext
    return image
}

var image = UIImage(named: "Round_512")!.cgImage!

let size = 4
let newSize = CGSize(width: size, height: size)
image = scaleImageAntialiased(image, newSize)

let width: size_t = image.width
let height: size_t = image.height

let cfData = image.dataProvider!.data!
let dataPtr = CFDataGetBytePtr(cfData)
let bytesPerPixel = 4
var data: [GLubyte] = Array(UnsafeBufferPointer(start: dataPtr, count: width * height * bytesPerPixel))

print(data)

let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
imageView.image = UIImage(cgImage: image)
imageView.backgroundColor = .black

PlaygroundPage.current.liveView = imageView

// Save the test image
let path = playgroundSharedDataDirectory.appendingPathComponent("Saved.png")
let pngData = UIImagePNGRepresentation(imageView.image!)
//try! pngData?.write(to: path)

Solution

  • I believe CGImage uses a bytesPerRow that is a multiple of 32... if a row has only 4 pixels, each represented by 4 bytes, the row's data will be "padded out" to that stride (the padding bytes are ignored when the image is drawn).

    If you inspect your image:

    print(image.bytesPerRow)
    

    you'll see it prints 32, not 16.

    You want to use:

    let bytesPerRow = image.bytesPerRow
    var data: [GLubyte] = Array(UnsafeBufferPointer(start: dataPtr, count: height * bytesPerRow))
    

    that is, height * bytesPerRow instead of width * height * bytesPerPixel.
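
    Keep in mind that the resulting array then also contains the padding bytes, so per-pixel access should step by bytesPerRow rather than width * bytesPerPixel. A minimal sketch building on the snippet above (the pixel helper is illustrative and assumes 4 bytes per pixel):

    // Sketch only: index into the padded buffer row by row.
    func pixel(_ x: Int, _ y: Int) -> ArraySlice<GLubyte> {
        let offset = y * bytesPerRow + x * bytesPerPixel // row stride is bytesPerRow, not width * bytesPerPixel
        return data[offset ..< offset + bytesPerPixel]
    }

    // With the 4x4 image and bytesPerRow == 32, row 1 starts at byte offset 32, not 16.
    print(Array(pixel(0, 1)))

    Alternatively, if downstream code expects tightly packed data, you can copy just the first width * bytesPerPixel bytes of each row into a new buffer.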