Tags: ios, swift, opengl, metal

Confusion About CIContext, OpenGL and Metal (SWIFT). Does CIContext use CPU or GPU by default?


So I'm making an app where some of the main features revolve around applying CIFilters to images.

let context = CIContext()
let context = CIContext(eaglContext: EAGLContext(api: .openGLES3)!)
let context = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!)

All of these give me about the same CPU usage (70%) in my CameraViewController, where I apply filters to frames and update the image view. They all seem to work exactly the same way, which makes me think I am missing some vital piece of information.

For example, using AVFoundation I get each frame from the camera, apply the filters, and update the image view with the new image.

let context = CIContext()

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    connection.videoOrientation = orientation
    connection.isVideoMirrored = !cameraModeIsBack
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue.main)

    let sharpenFilter = CIFilter(name: "CISharpenLuminance")
    let saturationFilter = CIFilter(name: "CIColorControls")
    let contrastFilter = CIFilter(name: "CIColorControls")
    let pixellateFilter = CIFilter(name: "CIPixellate")

    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    var cameraImage = CIImage(cvImageBuffer: pixelBuffer!)

    saturationFilter?.setValue(cameraImage, forKey: kCIInputImageKey)
    saturationFilter?.setValue(saturationValue, forKey: "inputSaturation")
    var cgImage = context.createCGImage((saturationFilter?.outputImage!)!, from: cameraImage.extent)!
    cameraImage = CIImage(cgImage: cgImage)

    sharpenFilter?.setValue(cameraImage, forKey: kCIInputImageKey)
    sharpenFilter?.setValue(sharpnessValue, forKey: kCIInputSharpnessKey)
    cgImage = context.createCGImage((sharpenFilter?.outputImage!)!, from: (cameraImage.extent))!
    cameraImage = CIImage(cgImage: cgImage)

    contrastFilter?.setValue(cameraImage, forKey: "inputImage")
    contrastFilter?.setValue(contrastValue, forKey: "inputContrast")
    cgImage = context.createCGImage((contrastFilter?.outputImage!)!, from: (cameraImage.extent))!
    cameraImage = CIImage(cgImage: cgImage)

    pixellateFilter?.setValue(cameraImage, forKey: kCIInputImageKey)
    pixellateFilter?.setValue(pixelateValue, forKey: kCIInputScaleKey)
    cgImage = context.createCGImage((pixellateFilter?.outputImage!)!, from: (cameraImage.extent))!
    applyChanges(image: cgImage)

}

Another example is how I apply changes to a normal still image (I use sliders for all of this):

func imagePixelate(sliderValue: CGFloat) {
    let cgImg = image?.cgImage
    let ciImg = CIImage(cgImage: cgImg!)
    let pixellateFilter = CIFilter(name: "CIPixellate")
    pixellateFilter?.setValue(ciImg, forKey: kCIInputImageKey)
    pixellateFilter?.setValue(sliderValue, forKey: kCIInputScaleKey)
    let outputCIImg = pixellateFilter?.outputImage!
    let outputCGImg = context.createCGImage(outputCIImg!, from: (outputCIImg?.extent)!)
    let outputUIImg = UIImage(cgImage: outputCGImg!, scale: (originalImage?.scale)!, orientation: originalOrientation!)
    imageSource[0] = ImageSource(image: outputUIImg)
    slideshow.setImageInputs(imageSource)
    currentFilteredImage = outputUIImg
}

So pretty much:

  1. Create a CGImage from the UIImage
  2. Create a CIImage from the CGImage
  3. Use the context to apply the filter and convert back to a UIImage
  4. Update whatever view with the new UIImage

This runs well on my iPhone X and surprisingly well on my iPhone 6 as well. Since my app is pretty much complete, I'm looking to optimize it as much as possible. I've looked through a lot of documentation on using OpenGL and Metal as well, but I can't seem to figure out how to start.

I always thought I was running these processes on the CPU, but creating the context with OpenGL or Metal provided no improvement. Do I need to be using a MetalKit view or a GLKit view (EAGLContext seems to be completely deprecated)? How do I translate this over? The Apple documentation seems to be lacklustre.


Solution

  • I started making this a comment, but I think since WWDC'18 this works best as an answer. I'll edit as others more expert than I comment, and I'm willing to delete the entire answer if that's the proper thing to do.

    You are on the right track - use the GPU when you can and when it's a good fit. CoreImage and Metal, while "low-level" technologies that "usually" use the GPU, can use the CPU if that is desired. CoreGraphics? It renders things using the CPU.
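
    If you want to be explicit about which path a CIContext takes, you can say so when you create it. This is just a sketch; as far as I know the software-renderer option is the switch for forcing the CPU path:

    import CoreImage
    import Metal

    // GPU-backed context - passing a Metal device makes the intent explicit.
    // A plain CIContext() is generally GPU-backed on device anyway.
    let gpuContext = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!)

    // CPU-forced context via the software renderer - mostly useful for debugging
    // or background work, not for per-frame rendering.
    let cpuContext = CIContext(options: [.useSoftwareRenderer: true])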

    Images. A UIImage and a CGImage are actual images. A CIImage, however, isn't. The best way to think of it is as a "recipe" for an image.
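
    For example, the filter chain from your question can be built up as one recipe and rendered exactly once at the end. This is only a sketch with made-up parameter values:

    import CoreImage

    func filteredImage(from input: CIImage, using context: CIContext) -> CGImage? {
        // Each call just extends the recipe; nothing is computed yet.
        let saturated = input.applyingFilter("CIColorControls",
                                             parameters: [kCIInputSaturationKey: 1.2])
        let pixellated = saturated.applyingFilter("CIPixellate",
                                                  parameters: [kCIInputScaleKey: 12])
        // One render at the very end, instead of createCGImage after every filter.
        return context.createCGImage(pixellated, from: input.extent)
    }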

    I typically - for now, I'll explain in a moment - stick to CoreImage, CIFilters, CIImages, and GLKViews when working with filters. Rendering a CIImage into a GLKView means using OpenGL with a single CIContext and EAGLContext. It offers almost as good performance as using MetalKit and MTKViews.
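
    For the record, here's a minimal sketch of that setup. The class name FilterGLView is made up; you create the view once and just update its image each frame:

    import GLKit
    import CoreImage

    final class FilterGLView: GLKView {
        private lazy var ciContext = CIContext(eaglContext: self.context)
        var image: CIImage? { didSet { setNeedsDisplay() } }

        override init(frame: CGRect) {
            super.init(frame: frame, context: EAGLContext(api: .openGLES3)!)
        }

        required init?(coder: NSCoder) { fatalError("not supported in this sketch") }

        override func draw(_ rect: CGRect) {
            guard let image = image else { return }
            // drawableWidth/Height are in pixels, so fill the whole drawable.
            let destination = CGRect(x: 0, y: 0, width: drawableWidth, height: drawableHeight)
            ciContext.draw(image, in: destination, from: image.extent)
        }
    }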

    As for using UIKit and its UIImage and UIImageView, I only use them when needed - saving/sharing/uploading, whatever. Stick to the GPU until then.
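
    In code terms, that means the only place a UIImage gets created is a small helper you call right before saving or sharing - the name and signature here are made up:

    import UIKit
    import CoreImage

    func snapshotForSharing(_ recipe: CIImage, using context: CIContext) -> UIImage? {
        // Render the recipe once into real pixels, only when UIKit actually needs them.
        guard let cgImage = context.createCGImage(recipe, from: recipe.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }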

    ....

    Here's where it starts getting complicated.

    Metal is an Apple proprietary API. Since they own the hardware - including the CPU and GPU - they've optimized it for them. Its "pipeline" is somewhat different from OpenGL's. Nothing major, just different.

    Until WWDC'18, using GLKit, including GLKView, was fine. But all things OpenGL were deprecated, and Apple is moving things to Metal. While the performance gain (for now) isn't that great, for anything new you may be best off using MTKView, Metal, and a CIContext.

    Look at the answer @matt gave here for a nice way to use MTKViews.
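
    That answer isn't reproduced here, but roughly the moving parts look like this. It's only a sketch - CIRenderer is a name I made up - and depending on orientation you may still need to flip or orient the CIImage, since Core Image and Metal use different origins:

    import MetalKit
    import CoreImage

    final class CIRenderer: NSObject, MTKViewDelegate {
        let device = MTLCreateSystemDefaultDevice()!
        private lazy var commandQueue = device.makeCommandQueue()!
        private lazy var ciContext = CIContext(mtlDevice: device)
        var image: CIImage?                      // set this from your capture callback

        func attach(to view: MTKView) {
            view.device = device
            view.framebufferOnly = false         // Core Image writes into the drawable
            view.delegate = self
        }

        func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

        func draw(in view: MTKView) {
            guard let image = image,
                  let drawable = view.currentDrawable,
                  let buffer = commandQueue.makeCommandBuffer() else { return }
            // Render the CIImage straight into the drawable's texture on the GPU.
            ciContext.render(image,
                             to: drawable.texture,
                             commandBuffer: buffer,
                             bounds: CGRect(origin: .zero, size: view.drawableSize),
                             colorSpace: CGColorSpaceCreateDeviceRGB())
            buffer.present(drawable)
            buffer.commit()
        }
    }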