I'm trying to write an app that allows for transforming RAW photos from a user's photo library. The use case is to desqueeze photos taken with an anamorphic lens, but the task is simply to scale an image non-proportionally along the horizontal axis.
The code I currently have to accomplish this is:
```swift
import UIKit
import CoreImage
import CoreImage.CIFilterBuiltins  // needed for CIFilter.lanczosScaleTransform()

let sourceImage = CIImage(image: UIImage(named: "rawSource.DNG")!)!
let transformFilter = CIFilter.lanczosScaleTransform()
transformFilter.inputImage = sourceImage
transformFilter.scale = 1          // keep the original height
transformFilter.aspectRatio = 1.5  // stretch the width to 150%
let image = UIImage(ciImage: transformFilter.outputImage!)
```
This effectively stretches the image horizontally to 150% of its original width. What's not clear to me, however, is what is happening under the hood: is Core Image actually modifying the original DNG RAW file and inserting additional pixel data into it? Or is it first converting the RAW file to a JPG (or other) format and then applying the transform to that? My goal is to modify the original RAW file, but I suspect that it might just be modifying a JPG representation of it.
I see that there's also a CIRawFilter that appears to be used specifically for modifying the RAW photo data, but it's not clear to me if this is necessary for a simple transform or if it's sufficient to use the above approach to just transform a CIImage created from the RAW file.
Is there a way to know what's actually being saved to the user's photo library when the edit is committed?
Every time you load a RAW image with UIImage or CIImage, the image is "developed" into RGB pixel data.
Even if you use CIRAWFilter, the outputImage of that filter will be a 16-bit float (by default) RGBA image, because this is Core Image's working format. You can only change the way the RAW is developed with the CIRAWFilter.
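To illustrate, here is a minimal sketch of developing a RAW file through CIRAWFilter (iOS 15+ / macOS 12+); the file URL and parameter values are illustrative, not from the question:

```swift
import CoreImage

// Illustrative URL; in a real app this would come from the bundle or PhotoKit.
let rawURL = URL(fileURLWithPath: "rawSource.DNG")

if let rawFilter = CIRAWFilter(imageURL: rawURL) {
    // These properties only influence how the RAW is developed:
    rawFilter.exposure = 0.0       // exposure compensation in stops
    rawFilter.boostAmount = 1.0    // 0 = linear rendering, 1 = default boost

    // The result is already a developed RGBA image in Core Image's
    // working format, not RAW sensor data.
    let developed: CIImage? = rawFilter.outputImage
}
```

Any further processing (such as the Lanczos desqueeze) then operates on that developed image.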
The result is a new image that lives in memory; it will not automatically replace the original RAW image. To actually overwrite the original image in the user's library, you need to use the content editing APIs from PhotoKit. And even then, there is no way to store the result as RAW again, since the image has already been developed in order to perform the edit.
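As a hedged sketch of that PhotoKit flow (assuming a PHAsset the user has selected; the format identifier and scale factor are made up for illustration), note that the rendered output written to renderedContentURL must be a JPEG, which is exactly why the committed edit is no longer RAW:

```swift
import Photos
import CoreImage

func commitDesqueeze(for asset: PHAsset) {
    let options = PHContentEditingInputRequestOptions()
    options.canHandleAdjustmentData = { _ in false }

    asset.requestContentEditingInput(with: options) { input, _ in
        guard let input = input,
              let url = input.fullSizeImageURL,
              let source = CIImage(contentsOf: url) else { return }

        // Apply the desqueeze to the (already developed) image.
        let stretched = source.transformed(by: CGAffineTransform(scaleX: 1.5, y: 1))

        let output = PHContentEditingOutput(contentEditingInput: input)
        output.adjustmentData = PHAdjustmentData(formatIdentifier: "com.example.desqueeze",
                                                 formatVersion: "1.0",
                                                 data: Data("1.5".utf8))

        // PhotoKit expects the rendered result as JPEG at renderedContentURL.
        let context = CIContext()
        try? context.writeJPEGRepresentation(of: stretched,
                                             to: output.renderedContentURL,
                                             colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!)

        PHPhotoLibrary.shared().performChanges {
            PHAssetChangeRequest(for: asset).contentEditingOutput = output
        }
    }
}
```

The original RAW file stays in the library untouched; PhotoKit stores the JPEG rendition plus the adjustment data alongside it, so the edit is non-destructive but not RAW.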
I'm not aware of any APIs that can manipulate actual RAW data in the way you describe—certainly not with Core Image.