I'm looking into cgImage(forProposedRect:context:hints:) because I would like to convert a rather large NSImage to a smaller CGImage in order to assign it to a CALayer's contents.
From Apple's documentation I get that the proposedRect is supposed to be the size of the CGImage that will be returned, and that if I pass nil for the proposedRect, I will get a CGImage the size of the original NSImage. (Please correct me if I'm wrong.)
I tried calling it with nil for the proposed rect and it works perfectly, but when I try giving it some rectangle like (0, 0, 400, 300), the resulting CGImage is still the size of the original image. The bit of code I'm using is as follows.
var r = NSRect(x: 0, y: 0, width: 400, height: 300)
let img = NSImage(contentsOf: url)?.cgImage(forProposedRect: &r, context: nil, hints: nil)
There must be something about this that I understood wrong. I really hope someone can tell me what that is.
This method is not for producing scaled images. The basic idea is that drawing the NSImage to the input rect in the context would produce a certain result. This method creates a CGImage such that, if it were drawn to the output rect in that same context, it would produce the same result. So it's perfectly valid for the method to return a CGImage the size of the original image. The scaling would occur when that CGImage is drawn to the rect.
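To make that concrete, here is a minimal sketch of the intended usage, assuming you already have an NSImage named image and a destination NSGraphicsContext named ctx (both names are placeholders, not from your code):

import AppKit

var proposed = NSRect(x: 0, y: 0, width: 400, height: 300)
if let cg = image.cgImage(forProposedRect: &proposed, context: ctx, hints: nil) {
    // The method may adjust `proposed` (for example, to snap to pixel boundaries),
    // and the returned CGImage may be any size, including the original's.
    // Drawing it into `proposed` in the same context reproduces what drawing the
    // NSImage there would have produced; the scaling happens here, at draw time.
    ctx.cgContext.draw(cg, in: proposed)
}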
There's some documentation about this that only exists in the historical AppKit release notes from when the method was first introduced. Search for "NSImage, CGImage, and CoreGraphics impedance matching".
To produce a scaled-down image, you should create a new image of the size you want, lock focus on it, and draw the original image into it. Or, if you weren't aware, you can just assign your original image as the layer's contents and see if that's performant enough.
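Here is a minimal sketch of that lock-focus approach, assuming a hypothetical full-size NSImage named original, a 400×300 target size, and a CALayer named layer (all placeholder names):

import AppKit
import QuartzCore

let targetSize = NSSize(width: 400, height: 300)

// Create a new image at the target size and draw the original into it.
let scaled = NSImage(size: targetSize)
scaled.lockFocus()
original.draw(in: NSRect(origin: .zero, size: targetSize),
              from: .zero,        // .zero (NSZeroRect) means "the entire source image"
              operation: .copy,
              fraction: 1.0)
scaled.unlockFocus()

// The CGImage extracted from `scaled` is now already at the reduced size.
var rect = NSRect(origin: .zero, size: targetSize)
if let cg = scaled.cgImage(forProposedRect: &rect, context: nil, hints: nil) {
    layer.contents = cg
}

// Alternatively, on macOS you can assign an NSImage directly as layer contents:
// layer.contents = original

If the full-size image isn't dramatically larger than the layer, the direct assignment alone may be good enough; it's worth measuring before adding the extra drawing pass.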