I've been trying to apply GPUImageHoughTransformLineDetector to a UIImage in Swift, but it gives me back nothing, and after a couple of hours I can't figure out what I'm doing wrong.
Here's my code:
func lineDetection(image: UIImage) -> UIImage {
    let stillImage = GPUImagePicture(image: image)
    let filter = GPUImageHoughTransformLineDetector()
    let lineGenerator = GPUImageLineGenerator()
    lineGenerator.forceProcessingAtSize(image.size)
    lineGenerator.setLineColorRed(1.0, green: 0.0, blue: 0.0)
    filter.linesDetectedBlock = { (lineArray: UnsafeMutablePointer<GLfloat>, linesDetected: UInt, frameTime: CMTime) in
        lineGenerator.renderLinesFromArray(lineArray, count: linesDetected, frameTime: frameTime)
    }
    stillImage.addTarget(filter)
    let blendFilter = GPUImageAlphaBlendFilter()
    blendFilter.forceProcessingAtSize(image.size)
    let gammaFilter = GPUImageGammaFilter()
    stillImage.addTarget(gammaFilter)
    gammaFilter.addTarget(blendFilter)
    lineGenerator.addTarget(blendFilter)
    blendFilter.useNextFrameForImageCapture()
    stillImage.processImage()
    return filter.imageFromCurrentFramebuffer() // always returns nil <<
}
It must be something simple that I'm missing, but I'm simply in "that rut" now. Thanks for understanding.
Update:
Sure enough, it was simple; see my answer.
Common sense isn't so common when you're following someone else's guide.
I'll leave the question up, since it might help someone implementing this filter in Swift in the future.
Changing:
filter.imageFromCurrentFramebuffer()
to:
blendFilter.imageFromCurrentFramebuffer()
did it.
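For completeness, here is the whole function with that one-line fix applied. The key insight is that imageFromCurrentFramebuffer() must be called on the filter whose framebuffer was reserved with useNextFrameForImageCapture() — here the blend filter at the end of the chain, not the detector, which only emits line coordinates through its block. This is a sketch against the GPUImage 1.x Objective-C API as bridged into Swift at the time; method names may differ in newer Swift versions of the library.

func lineDetection(image: UIImage) -> UIImage {
    let stillImage = GPUImagePicture(image: image)
    let filter = GPUImageHoughTransformLineDetector()
    let lineGenerator = GPUImageLineGenerator()
    lineGenerator.forceProcessingAtSize(image.size)
    lineGenerator.setLineColorRed(1.0, green: 0.0, blue: 0.0)
    // The detector outputs line coordinates, not pixels; the generator draws them.
    filter.linesDetectedBlock = { (lineArray: UnsafeMutablePointer<GLfloat>, linesDetected: UInt, frameTime: CMTime) in
        lineGenerator.renderLinesFromArray(lineArray, count: linesDetected, frameTime: frameTime)
    }
    stillImage.addTarget(filter)
    // Blend the drawn lines over a gamma-adjusted copy of the source image.
    let blendFilter = GPUImageAlphaBlendFilter()
    blendFilter.forceProcessingAtSize(image.size)
    let gammaFilter = GPUImageGammaFilter()
    stillImage.addTarget(gammaFilter)
    gammaFilter.addTarget(blendFilter)
    lineGenerator.addTarget(blendFilter)
    // Reserve the framebuffer on the filter we will actually capture from.
    blendFilter.useNextFrameForImageCapture()
    stillImage.processImage()
    return blendFilter.imageFromCurrentFramebuffer() // capture from blendFilter, not filter
}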