What's the real difference between:
[GPUImageFilter imageFromCurrentlyProcessedOutputWithOrientation:]
and [GPUImageFilter imageByFilteringImage:]
... aside from imageByFilteringImage requiring an NSImage *?
Is there any difference in speed? Does using imageByFilteringImage: allow you to set up filters as a 'pipeline' which you can re-use, or something like that? Or is it that you need to use imageFromCurrentlyProcessedOutputWithOrientation: to work with blend filters (which take multiple image inputs)?
My understanding is that if you use imageFromCurrentlyProcessedOutputWithOrientation, you need to create a new filter every time you want to process a new image. Is this correct?
One of the nice things about an open source library is that we can look into the code behind these methods. It's a little more complex than it used to be, because it's been extended to cover the general case of CGImageRefs being used as a basis, but here's the core of -imageByFilteringImage:
// Wrap the incoming CGImageRef in a temporary picture source.
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithCGImage:imageToFilter];
// Set up the fast capture path (see the note on its side effects below).
[self prepareForImageCapture];
// Build a one-off chain from the picture to this filter and render it.
[stillImageSource addTarget:(id<GPUImageInput>)self];
[stillImageSource processImage];
// Read the rendered result back from the GPU.
CGImageRef processedImage = [self newCGImageFromCurrentlyProcessedOutputWithOrientation:orientation];
// Tear the temporary chain down and hand back the new CGImage.
[stillImageSource removeTarget:(id<GPUImageInput>)self];
return processedImage;
-imageByFilteringImage: effectively uses -imageFromCurrentlyProcessedOutputWithOrientation: (in reality, its CGImage variant) within it. What it does is take in your UIImage or NSImage (iOS or Mac), create a temporary GPUImagePicture instance from it, build a filter chain from that to your current filter, set it up for faster image capture using -prepareForImageCapture, process the image, and finally extract the result via -newCGImageFromCurrentlyProcessedOutputWithOrientation:.
As you can see, this is little more than a convenience method. In fact, it can have adverse performance consequences if used repeatedly, due to the overhead of creating a new GPUImagePicture instance (which has to upload the UIImage as a texture on creation) every time.
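If you're going to apply a filter to the same image more than once, you can instead hold on to a single GPUImagePicture and reprocess it, so the texture upload happens only once. A minimal sketch of that, assuming an existing UIImage called inputImage and using the sepia filter (and its intensity property) purely as an example:

GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage]; // texture uploaded once
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
[stillImageSource addTarget:sepiaFilter];

[stillImageSource processImage];
UIImage *fullSepia = [sepiaFilter imageFromCurrentlyProcessedOutputWithOrientation:UIImageOrientationUp];

// Adjust the filter and rerun; the source texture is not re-uploaded.
sepiaFilter.intensity = 0.4;
[stillImageSource processImage];
UIImage *fainterSepia = [sepiaFilter imageFromCurrentlyProcessedOutputWithOrientation:UIImageOrientationUp];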
If you want simple, one-off processing, -imageByFilteringImage: is fine. Generally, though, you'll want to create your own filter chain to do anything more complex than a single filter, if you want to process the same image in multiple ways, or if you want to do any kind of blending or live preview of effects.
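For the blending case you ask about, a manual chain simply points two picture sources at the same two-input filter. A sketch, with GPUImageOverlayBlendFilter standing in for any blend filter and baseImage / overlayImage assumed to be existing UIImages:

GPUImagePicture *baseSource = [[GPUImagePicture alloc] initWithImage:baseImage];
GPUImagePicture *overlaySource = [[GPUImagePicture alloc] initWithImage:overlayImage];
GPUImageOverlayBlendFilter *blendFilter = [[GPUImageOverlayBlendFilter alloc] init];

// The first target added feeds the blend's first input, the second its second.
[baseSource addTarget:blendFilter];
[overlaySource addTarget:blendFilter];

[baseSource processImage];
[overlaySource processImage];
UIImage *blended = [blendFilter imageFromCurrentlyProcessedOutputWithOrientation:UIImageOrientationUp];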
-prepareForImageCapture also has some side effects. While it greatly reduces memory usage and improves image extraction speed by creating a memory map between the filter's output texture and a local pixel buffer, that mapping locks the filter into not being able to process anything else until the extracted UIImage is freed. If you create a manual filter chain, you can decide not to use this call.
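To make that trade-off concrete, here's a sketch reusing the sepia chain from above (again, purely illustrative):

// Fast path: cheap extraction, but sepiaFilter is locked until
// lockedCapture is released, so it can't process anything else in the meantime.
[sepiaFilter prepareForImageCapture];
[stillImageSource processImage];
UIImage *lockedCapture = [sepiaFilter imageFromCurrentlyProcessedOutputWithOrientation:UIImageOrientationUp];

// Skipping -prepareForImageCapture costs more memory and a slower readback,
// but leaves the filter free to keep processing, e.g. for a live preview.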