In a photo app (no video), I have a number of built-in and custom Metal CIFilters chained together in a class like so (I've left out the lines to set filter parameters, other than the input image):
var colorControlsFilter = CIFilter(name: "CIColorControls")!
var highlightShadowFilter = CIFilter(name: "CIHighlightShadowAdjust")!
func filter(imageData: Data) -> UIImage?
{
    var outputImage: CIImage?
    let rawFilter = CIFilter(imageData: imageData, options: nil)
    outputImage = rawFilter?.outputImage

    colorControlsFilter.setValue(outputImage, forKey: kCIInputImageKey)
    outputImage = colorControlsFilter.outputImage

    highlightShadowFilter.setValue(outputImage, forKey: kCIInputImageKey)
    outputImage = highlightShadowFilter.outputImage

    ...
    ...

    if let ciImage = outputImage
    {
        return renderImage(ciImage: ciImage)
    }
    return nil
}
func renderImage(ciImage: CIImage) -> UIImage?
{
    var outputImage: UIImage?
    let size = ciImage.extent.size
    UIGraphicsBeginImageContext(size)
    if let context = UIGraphicsGetCurrentContext()
    {
        context.interpolationQuality = .high
        context.setShouldAntialias(true)
        let inputImage = UIImage(ciImage: ciImage)
        inputImage.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        outputImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
    return outputImage
}
Processing takes about a second. Is this way of chaining the output of one filter to the input of the next the most efficient? Or, more generally: what performance optimisations could I make?
You should use a CIContext to render the image:
var context = CIContext() // create this once and re-use it for each image
func render(image ciImage: CIImage) -> UIImage? {
    let cgImage = context.createCGImage(ciImage, from: ciImage.extent)
    return cgImage.map(UIImage.init)
}
It's important to create the CIContext only once, since it is expensive to create: it holds and caches all the (Metal) resources needed for rendering images.
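Putting this together, your filter chain could render through that one shared context instead of `UIGraphicsBeginImageContext`. A sketch of how this might look (the `applyChain` helper and `sharedContext` names are illustrative, not part of your code):

```swift
import CoreImage
import UIKit

// Create once and reuse for every image; creating a CIContext is expensive.
let sharedContext = CIContext()

// Illustrative helper: chain filters by feeding each filter's outputImage
// into the next filter's input. Nothing is rendered yet — Core Image only
// builds up a recipe here.
func applyChain(_ filters: [CIFilter], to input: CIImage) -> CIImage? {
    var image: CIImage? = input
    for filter in filters {
        filter.setValue(image, forKey: kCIInputImageKey)
        image = filter.outputImage
    }
    return image
}

func render(image ciImage: CIImage) -> UIImage? {
    // All the actual filtering work happens in this single render pass.
    guard let cgImage = sharedContext.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```

Because Core Image evaluates filters lazily, chaining `outputImage` values is cheap; the cost is paid once in `createCGImage`, so rendering everything in one pass through a single reused context avoids the extra drawing round-trip that `UIImage(ciImage:).draw(in:)` inside a `UIGraphicsBeginImageContext` block introduces.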