Tags: ios, swift, convolution, edges, cifilter

Vertical edge detection with convolution giving transparent image as result with Swift


I am currently trying to write a function that takes an image and applies a 3x3 matrix to filter out vertical edges. For that I am using Core Image's CIConvolution3X3 filter, passing it the matrix used to detect vertical edges in Sobel edge detection.

Here's the code:

import UIKit
import CoreImage.CIFilterBuiltins

func verticalEdgeFilter() -> UIImage {
    let inputUIImage = UIImage(named: imageName)!
    let inputCIImage = CIImage(image: inputUIImage)
    let context = CIContext()
    let weights: [CGFloat] = [1.0, 0.0, -1.0, 
                              2.0, 0.0, -2.0, 
                              1.0, 0.0, -1.0]
        
    let verticalFilter = CIFilter.convolution3X3()
    verticalFilter.inputImage = inputCIImage  
    verticalFilter.weights = CIVector(values: weights, count: 9)
        
    if let output = verticalFilter.outputImage {
        if let cgimg = context.createCGImage(output, from: output.extent) {
            let processedImage = UIImage(cgImage: cgimg)
            return processedImage
        }
    }
        
    print("returning original")
    return inputUIImage
}

As a result I always get an almost fully transparent image with a 2-pixel border, like this one:

Original

Screenshot of the result (border on the left side)

Am I missing something obvious? The images are only transparent if the center value of the matrix is 0, yet if I try the same kernel in an online convolution tool, it at least produces a usable result. Setting a bias also just crashes the whole thing, which I don't understand.

I also checked Apple's documentation on this, as well as the CIFilter web page, but I'm not getting anywhere, so I would really appreciate it if someone could help me with this or tell me an alternative way of doing it in Swift :)


Solution

  • Applying this convolution matrix to a fully opaque image will inevitably produce a fully transparent output. This is because the kernel values sum to 0, so after multiplying the 9 neighboring pixels by the weights and summing them up, you get 0 in the alpha component of the result. There are two ways to deal with it:

    1. Make the output opaque by using the settingAlphaOne(in:) CIImage helper method.
    2. Use the CIConvolutionRGB3X3 filter, which leaves the alpha component alone and applies the kernel to the RGB components only.
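    You can verify the alpha math without Core Image at all: over a region of constant color, convolving a channel is just multiplying the channel value by the sum of the kernel weights. A minimal pure-Swift sketch of that arithmetic:

    ```swift
    // For a uniform area, each output channel = channelValue * sum(weights).
    // The Sobel weights sum to 0, so an opaque input (alpha = 1) yields alpha = 0.
    let weights: [Double] = [1, 0, -1,
                             2, 0, -2,
                             1, 0, -1]
    let weightSum = weights.reduce(0, +)
    let inputAlpha = 1.0                      // fully opaque input pixel
    let outputAlpha = inputAlpha * weightSum  // convolution over a uniform area
    print(weightSum)    // 0.0
    print(outputAlpha)  // 0.0 -> fully transparent
    ```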

    As for the 2-pixel border, it's also expected: when the kernel is applied to pixels near the border it still samples all 9 positions, and some of them fall outside the image boundary (exactly 2 pixels away from the border on each side). These nonexistent pixels contribute as transparent black pixels (0x00000000).

    To get rid of the border:

    1. Clamp the image to its extent to produce an infinite image where the border pixels are repeated to infinity away from the border. You can use either the CIAffineClamp filter or the CIImage helper function clampedToExtent().
    2. Apply the convolution filter
    3. Crop the resulting image to the input image's extent. You can use the cropped(to:) CIImage helper function for this.

    With these changes, here is how your code could look:

    import UIKit
    import CoreImage.CIFilterBuiltins

    func verticalEdgeFilter() -> UIImage {
        let inputUIImage = UIImage(named: imageName)!
        let inputCIImage = CIImage(image: inputUIImage)!
        let context = CIContext()
        let weights: [CGFloat] = [1.0, 0.0, -1.0,
                                  2.0, 0.0, -2.0,
                                  1.0, 0.0, -1.0]
    
        let verticalFilter = CIFilter.convolution3X3()
        verticalFilter.inputImage = inputCIImage.clampedToExtent()
        verticalFilter.weights = CIVector(values: weights, count: 9)
    
        if var output = verticalFilter.outputImage {
            output = output
                .cropped(to: inputCIImage.extent)
                .settingAlphaOne(in: inputCIImage.extent)
    
            if let cgimg = context.createCGImage(output, from: output.extent) {
                let processedImage = UIImage(cgImage: cgimg)
                return processedImage
            }
        }
    
        print("returning original")
        return inputUIImage
    }
    

    If you use convolutionRGB3X3 instead of convolution3X3, you don't need the settingAlphaOne(in:) step.
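    That variant could be sketched roughly as follows (assuming CIFilter.convolutionRGB3X3() is available on your deployment target; the RGB convolution filters are a newer addition to Core Image than convolution3X3, and verticalEdgeFilterRGB is just an illustrative name):

    ```swift
    import UIKit
    import CoreImage.CIFilterBuiltins

    // RGB-only convolution: alpha is left untouched, so settingAlphaOne(in:)
    // is unnecessary. Clamping and cropping are still needed to avoid the
    // transparent-black border contribution at the image edges.
    func verticalEdgeFilterRGB(imageName: String) -> UIImage? {
        guard let inputUIImage = UIImage(named: imageName),
              let inputCIImage = CIImage(image: inputUIImage) else { return nil }
        let context = CIContext()
        let weights: [CGFloat] = [1, 0, -1,
                                  2, 0, -2,
                                  1, 0, -1]

        let filter = CIFilter.convolutionRGB3X3()
        filter.inputImage = inputCIImage.clampedToExtent()
        filter.weights = CIVector(values: weights, count: 9)

        guard let output = filter.outputImage?.cropped(to: inputCIImage.extent),
              let cgImage = context.createCGImage(output, from: output.extent)
        else { return nil }
        return UIImage(cgImage: cgImage)
    }
    ```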

    BTW, if you want to play with convolution filters, or any of the 250 or so other CIFilters, check out this app I just published: https://apps.apple.com/us/app/filter-magic/id1594986951