I'm trying to process a large (8000×8000) image using Cocoa. When using NSImage, the following code immediately uses up about 1GB of RAM:
import Cocoa

let outputImage = NSImage(size: NSMakeSize(8000, 8000))
outputImage.lockFocus()
// drawing operations here
outputImage.unlockFocus()
But when using NSBitmapImageRep, only a few hundred MB are used:
let outputRep = NSBitmapImageRep(bitmapDataPlanes: nil, pixelsWide: 8000, pixelsHigh: 8000,
    bitsPerSample: 8, samplesPerPixel: 4, hasAlpha: true, isPlanar: false,
    colorSpaceName: NSDeviceRGBColorSpace, bytesPerRow: 0, bitsPerPixel: 0)
let context = NSGraphicsContext(bitmapImageRep: outputRep!)
NSGraphicsContext.saveGraphicsState()
NSGraphicsContext.setCurrentContext(context)
// drawing operations here
context?.flushGraphics()
NSGraphicsContext.restoreGraphicsState()
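Once the drawing is done, I pull the result back out of the rep roughly like this (the PNG format and the output path are just placeholders for whatever you actually need):

if let pngData = outputRep?.representationUsingType(.NSPNGFileType, properties: [:]) {
    // write the rendered bitmap to disk
    pngData.writeToFile("/tmp/output.png", atomically: true)
}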
If my math is right, at 4 bytes per RGBA pixel an 8000×8000 image should use about 8000 × 8000 × 4 bytes ÷ 1024 ÷ 1024 ≈ 244 MB, which is in line with NSBitmapImageRep's memory usage.
Why does NSImage use 4× as much memory as needed?
Oops! I missed the significance of the 4× factor. NSImage dimensions are in points, not pixels, so on a Retina (2×) device, lockFocus() renders the cached bitmap at twice the point size in each dimension. The actual backing store of my 8000×8000 point image is therefore 16000×16000 pixels, or 16000 × 16000 × 4 bytes ÷ 1024 ÷ 1024 ≈ 977 MB, which is consistent with my results. NSBitmapImageRep, by contrast, is sized directly in pixels, so it allocates exactly what I asked for.
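As a quick sanity check (this assumes lockFocus() picks up the main screen's backingScaleFactor, which is 2.0 on my Retina machine), the expected footprint works out like this:

let scale = NSScreen.mainScreen()?.backingScaleFactor ?? 1.0   // 2.0 on a Retina display
let pixelWidth = 8000 * scale
let pixelHeight = 8000 * scale
let megabytes = pixelWidth * pixelHeight * 4 / 1024 / 1024     // 4 bytes per RGBA pixel
print("expected footprint: \(megabytes) MB")                   // ≈ 977 MB at 2×, ≈ 244 MB at 1×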