Applying a convolution kernel to an input image should produce an output image with the exact same dimensions. Yet when using a CIFilter.convolution3X3 with a non-zero bias on a CIImage, inspecting the output reveals that the width, height, and origin coordinate have been blown out to infinity, specifically CGFloat.greatestFiniteMagnitude. I've tried the 5x5 and 7x7 versions of this filter, and I've tried setting different weights and biases, and the conclusion is the same: if the bias is anything other than zero, the output image's size and origin coordinate appear to be ruined.
The documentation for this filter is here.
Here is some code...
import CoreImage
import CoreImage.CIFilterBuiltins

// create the filter
let convolutionFilter = CIFilter.convolution3X3()
convolutionFilter.bias = 1 // any non-zero bias will do
// I'll skip setting convolutionFilter.weights because the filter's default weights (an identity matrix) should be fine

// make your CIImage input
let input = CIImage(...) // I'm making mine from data I got from the camera

// let's print the size and position so we can compare them with the output
print(input.extent.width, input.extent.height, input.extent.origin) // -> 960.0 540.0 (0.0, 0.0)

// pass the input through the filter
convolutionFilter.inputImage = input
guard let output = convolutionFilter.outputImage else {
    print("the filter failed for some reason")
    return
}
// the output image now contains the instructions necessary to perform the convolution,
// but no processing has actually occurred; even so, the extent property will have
// been updated if a change in size or position was described
// examine the output's size (it's just another CIImage - virtual, not real)
print(output.extent.width, output.extent.height, output.extent.origin) // -> 1.7976931348623157e+308 1.7976931348623157e+308 (-8.988465674311579e+307, -8.988465674311579e+307)
Notice that 1.7976931348623157e+308 is CGFloat.greatestFiniteMagnitude.
This shouldn't be happening. The only other information I can provide is that I'm running this code on iOS 13.5, and the CIImages I am filtering are instantiated from CVPixelBuffers grabbed from CMSampleBuffers that are automatically delivered to my code by the device's camera feed. The images are 960x540 before going through the filter.
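For reference, a minimal sketch of how such an input image is typically created in a capture delegate (the `sampleBuffer` name is the parameter of the delegate callback, not something from the question's code):

```swift
import AVFoundation
import CoreImage

// Assumes `sampleBuffer` is the CMSampleBuffer delivered by
// AVCaptureVideoDataOutputSampleBufferDelegate's
// captureOutput(_:didOutput:from:) callback.
if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
    let input = CIImage(cvPixelBuffer: pixelBuffer) // extent: 960x540 at (0, 0)
}
```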
Although it does not appear to be documented anywhere, this does seem to be the normal behavior, as @matt suggested, although I have no idea why the bias is the deciding factor. In general I suspect it has something to do with the fact that CIFilter's convolutions must operate outside the initial bounds of the image when processing the edge pixels: the kernel overlaps the edge and the undefined area beyond it, which is treated as an infinite space of virtual RGBA(0,0,0,0) pixels.
After the extent is changed to infinity, the original image's pixels are still at their original origin, width, and height, so you will have no trouble rendering them into a target pixel buffer with the same origin and size; the CIContext you use for this rendering will simply ignore the "virtual" pixels that fall outside the bounds of the target pixel buffer.
Keep in mind that your convolution may have unintended effects at the edges of your image due to the interaction with the virtual RGBA(0,0,0,0) pixels adjacent to them, making you think the rendering has gone wrong or misaligned things. Such problems can often be avoided by calling your CIImage's clampedToExtent() method before applying the convolution.
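A minimal sketch of that approach, reusing the `convolutionFilter` and `input` from the question: clamping extends the edge pixels outward indefinitely, so the kernel never mixes real pixels with transparent virtual ones at the border, and cropping afterwards restores the original finite extent:

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

let convolutionFilter = CIFilter.convolution3X3()
convolutionFilter.bias = 1

// Clamp first: edge pixels are repeated outward to infinity instead of
// bordering transparent black.
convolutionFilter.inputImage = input.clampedToExtent()

// The output extent is still infinite; crop back to the original frame.
let result = convolutionFilter.outputImage?.cropped(to: input.extent)
```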