Tags: ios, swift, gpuimage

Chaining filters with GPUImage in Swift


(First post to Stack Overflow, so hi! *waves nervously*)

I'm using the GPUImage library with some success, and I've managed to get a simple filter working on a static image in Swift.

However, I'm having problems chaining multiple filters together. The examples included with the library don't seem to cover this: there are plenty of Objective-C examples, but none in Swift.

Can anyone please give an example of how to apply two blend filters plus brightness, contrast, and saturation filters to a single static image?

I think this is sufficiently complex to cover most uses of the library in Swift. Thanks.


Solution

  • Allocating and chaining filters in Swift works the same way as it does in Objective-C; it's just a syntactic conversion. For example, the following is how you'd chain two still image inputs to a blend filter in Objective-C, then direct the result of that blend to a contrast filter, and capture the final image:

    // Blend the two still image sources together
    GPUImageOverlayBlendFilter *blendFilter = [[GPUImageOverlayBlendFilter alloc] init];
    [stillImageSource1 addTarget:blendFilter];
    [stillImageSource2 addTarget:blendFilter];
    
    // Feed the blended result into a contrast filter
    GPUImageContrastFilter *contrastFilter = [[GPUImageContrastFilter alloc] init];
    [blendFilter addTarget:contrastFilter];
    
    // Request the capture before processing, so the final framebuffer is held
    [contrastFilter useNextFrameForImageCapture];
    [stillImageSource1 processImage];
    [stillImageSource2 processImage];
    
    UIImage *currentFilteredImage = [contrastFilter imageFromCurrentFramebuffer];
    

    This is the equivalent in Swift:

    // Blend the two still image sources together
    let blendFilter = GPUImageOverlayBlendFilter()
    stillImageSource1.addTarget(blendFilter)
    stillImageSource2.addTarget(blendFilter)
    
    // Feed the blended result into a contrast filter
    let contrastFilter = GPUImageContrastFilter()
    blendFilter.addTarget(contrastFilter)
    
    // Request the capture before processing, so the final framebuffer is held
    contrastFilter.useNextFrameForImageCapture()
    stillImageSource1.processImage()
    stillImageSource2.processImage()
    
    let currentFilteredImage = contrastFilter.imageFromCurrentFramebuffer()
    

    As you can see, the difference is purely syntactic; nothing changes in how you actually call things. You can use the Objective-C example code as a basis for what you want to do and just rewrite it with the Swift equivalents. The Swift examples that I ship with the framework are either really simple (a tiny application that applies a single filter to live video) or fairly complex (my test case application that exercises every filter and operation in the framework).
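
    To cover the full chain from the question (two blends plus brightness, contrast, and saturation), here's a minimal sketch of what that looks like in Swift, using the same pattern as above. The helper name, the input images, the choice of an overlay blend for both blend steps, and the adjustment values are all placeholders; substitute whatever suits your case:

    import UIKit
    import GPUImage
    
    // Minimal sketch: two overlay blends, then brightness -> contrast -> saturation.
    // The function name, parameters, and adjustment values are placeholders.
    func filteredImage(base: UIImage, overlay1: UIImage, overlay2: UIImage) -> UIImage {
        let baseSource = GPUImagePicture(image: base)
        let overlaySource1 = GPUImagePicture(image: overlay1)
        let overlaySource2 = GPUImagePicture(image: overlay2)
    
        // First blend: base image + first overlay
        let blendFilter1 = GPUImageOverlayBlendFilter()
        baseSource.addTarget(blendFilter1)
        overlaySource1.addTarget(blendFilter1)
    
        // Second blend: output of the first blend + second overlay
        let blendFilter2 = GPUImageOverlayBlendFilter()
        blendFilter1.addTarget(blendFilter2)
        overlaySource2.addTarget(blendFilter2)
    
        // Color adjustments, chained one after another
        let brightnessFilter = GPUImageBrightnessFilter()
        brightnessFilter.brightness = 0.1    // -1.0 to 1.0, default 0.0
        let contrastFilter = GPUImageContrastFilter()
        contrastFilter.contrast = 1.2        // 0.0 to 4.0, default 1.0
        let saturationFilter = GPUImageSaturationFilter()
        saturationFilter.saturation = 0.8    // 0.0 to 2.0, default 1.0
    
        blendFilter2.addTarget(brightnessFilter)
        brightnessFilter.addTarget(contrastFilter)
        contrastFilter.addTarget(saturationFilter)
    
        // Capture from the last filter in the chain, then process every source
        saturationFilter.useNextFrameForImageCapture()
        baseSource.processImage()
        overlaySource1.processImage()
        overlaySource2.processImage()
    
        return saturationFilter.imageFromCurrentFramebuffer()
    }

    Two things to note: the capture request goes on the last filter in the chain, and every GPUImagePicture feeding the graph needs processImage() called on it. Also, with two-input blend filters the order of attachment matters: the first source you add supplies the first texture input, which the overlay blend treats as the base layer.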