Tags: ios, cocoa-touch, opengl-es, ios7, gpuimage

Where and when to use prepareForImageCapture in a long filter chain


I have several filters, all stacked, to process images, and I use sliders to modify the settings of those filters. I'm running into some memory constraints and am looking at using prepareForImageCapture to improve memory use and performance, but I'm not sure where or when to apply it. This is strictly for iOS 7. Here is my setup:

  1. Create a GPUImagePicture from a UIImage
  2. Create a GPUImageFilter and add it as a target of the GPUImagePicture via addTarget:
  3. Create X more GPUImageFilters, linking them all via addTarget:
  4. Create a GPUImageView and target the last GPUImageFilter at it
  5. Process the GPUImagePicture
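
Roughly, the setup looks like this in code (the specific filter classes below are just stand-ins for the ones I'm actually stacking):

    #import "GPUImage.h"

    // inputImage is the UIImage I'm filtering; filterView is a GPUImageView in my view hierarchy
    GPUImagePicture *sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage];

    // Stand-ins for the real filters in the chain
    GPUImageContrastFilter *contrastFilter = [[GPUImageContrastFilter alloc] init];
    GPUImageSaturationFilter *saturationFilter = [[GPUImageSaturationFilter alloc] init];

    // Link the chain: picture -> contrast -> saturation -> view
    [sourcePicture addTarget:contrastFilter];
    [contrastFilter addTarget:saturationFilter];
    [saturationFilter addTarget:filterView];

    // Render the source image through the chain
    [sourcePicture processImage];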

Where in there should I call prepareForImageCapture? Should I call it on every GPUImageFilter and if so, when and in what order?


Solution

  • When used on a filter, -prepareForImageCapture sets up the render target texture of that filter to be accessed via a texture cache.

    What this means is that when you call this on a filter, the next time you use -imageFromCurrentlyProcessedOutput or one of the photo capture methods, the UIImage you get back from that filter will contain a memory-mapped representation of the filter's internal texture. This cuts in half the memory required at that point in time, because only one copy of the image bytes exists in memory, rather than separate copies for the UIImage and the filter's backing texture.

    However, because the memory mapping is now shared between the UIImage and the GPUImageFilter, the filter becomes locked at the last image it rendered until the extracted UIImage is deallocated. If the filter were re-rendered for a new image, the result would overwrite the bytes within the UIImage. This is why it isn't on by default for all filters: the behavior could easily confuse people.
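
    If you do want to pull a UIImage out of the chain, a minimal sketch looks like this (assuming lastFilter is the final filter you capture from and sourcePicture is the GPUImagePicture feeding the chain; the names are placeholders):

        // Set up the texture-cache-backed render target before the render you capture from
        [lastFilter prepareForImageCapture];

        // Render the chain, then extract the still image from the final filter
        [sourcePicture processImage];
        UIImage *processedImage = [lastFilter imageFromCurrentlyProcessedOutput];

        // Note: until processedImage is deallocated, lastFilter is locked to this result;
        // re-rendering the chain would overwrite the bytes backing processedImage.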

    For your example, where you're displaying to the screen, you wouldn't need to set this for any of those filters. It only affects filters from which you are extracting a still image. A better approach would be to use -forceProcessingAtSize: on the first filter in your chain, passing the dimensions of your target view in pixels as the size to force. There's no sense in rendering extra pixels that you won't see in your final view, and limiting the size saves on the memory required to represent the image internally. Reset the size to 0,0 (CGSizeZero) before you capture your final processed image to disk, though, to return these filters to their full size.
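
    As a sketch (filterView, firstFilter, lastFilter, and sourcePicture are placeholder names for your GPUImageView, the first and last filters in the chain, and the GPUImagePicture):

        // Limit rendering to the number of pixels the on-screen view actually displays
        CGSize viewPixelSize = CGSizeMake(filterView.bounds.size.width * filterView.contentScaleFactor,
                                          filterView.bounds.size.height * filterView.contentScaleFactor);
        [firstFilter forceProcessingAtSize:viewPixelSize];
        [sourcePicture processImage];

        // Before writing the final image to disk, return the chain to full resolution
        [firstFilter forceProcessingAtSize:CGSizeZero];
        [sourcePicture processImage];
        UIImage *fullResolutionImage = [lastFilter imageFromCurrentlyProcessedOutput];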

    I'm working on a longer-term improvement to filter chaining that caches the framebuffers used by filters so they can be reused. This would let you create arbitrarily long filter chains with no additional memory penalty. I can't promise when I'll have this working, though.