
CIDetector Perspective And Crop in Swift


I've implemented a CIDetector in my app to detect rectangles in an image, but now how can I use the returned CGPoints to crop the image so that I can display it back?

For the perspective, I've tried applying the CIPerspectiveCorrection filter, but I couldn't get it to work.

I've searched around and found some clues, but I couldn't find a solution in Swift.

How do I use the data provided by the CIDetector (the detected rectangle) to fix the perspective and crop my image?

For anyone who might not be familiar with what a CIDetectorTypeRectangle detector returns: it returns four CGPoints: bottomLeft, bottomRight, topLeft and topRight.
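
For reference, such a detector is typically set up roughly like this (a minimal sketch; the accuracy option and the detectRectangle helper name are just for illustration):

    import CoreImage

    // Rough sketch: run rectangle detection on a CIImage and return the first
    // feature found, if any. The four corner points live on the returned
    // CIRectangleFeature (topLeft, topRight, bottomLeft, bottomRight).
    func detectRectangle(in image: CIImage) -> CIRectangleFeature? {
        let detector = CIDetector(ofType: CIDetectorTypeRectangle,
                                  context: nil,
                                  options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
        return detector?.features(in: image).first as? CIRectangleFeature
    }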


Solution

  • Here's what worked:

    func flattenImage(image: CIImage, topLeft: CGPoint, topRight: CGPoint, bottomLeft: CGPoint, bottomRight: CGPoint) -> CIImage {
        // CIPerspectiveCorrection stretches the quadrilateral defined by the
        // four detected corners into a flat rectangle, which both fixes the
        // perspective and crops the image to the detected area.
        return image.applyingFilter("CIPerspectiveCorrection", withInputParameters: [
            "inputTopLeft": CIVector(cgPoint: topLeft),
            "inputTopRight": CIVector(cgPoint: topRight),
            "inputBottomLeft": CIVector(cgPoint: bottomLeft),
            "inputBottomRight": CIVector(cgPoint: bottomRight)
        ])
    }
    

    Wherever you detect your rectangle:
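
    For example, assuming detection produced a CIRectangleFeature called feature and the source image is a CIImage called ciImage (both names are placeholders for this sketch), the flattened image could be obtained like this:

    // Names `feature` and `ciImage` are assumptions for this example.
    let flattenedImage = flattenImage(image: ciImage,
                                      topLeft: feature.topLeft,
                                      topRight: feature.topRight,
                                      bottomLeft: feature.bottomLeft,
                                      bottomRight: feature.bottomRight)

    The flattened CIImage can then be drawn into a bitmap context to get a UIImage back: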

    // Width and height are swapped here to match the 90° rotation applied
    // by the .right orientation when the image is drawn.
    UIGraphicsBeginImageContext(CGSize(width: flattenedImage.extent.size.height,
                                       height: flattenedImage.extent.size.width))
    UIImage(ciImage: flattenedImage, scale: 1.0, orientation: .right)
        .draw(in: CGRect(x: 0, y: 0,
                         width: flattenedImage.extent.size.height,
                         height: flattenedImage.extent.size.width))
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
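
    If you'd rather skip the UIGraphics context, an equivalent approach (not from the original answer, just a sketch) is to render the flattened CIImage through a CIContext and wrap the result in a UIImage:

    let context = CIContext()
    if let cgImage = context.createCGImage(flattenedImage, from: flattenedImage.extent) {
        // The .right orientation applies the same 90° rotation as above.
        let rotated = UIImage(cgImage: cgImage, scale: 1.0, orientation: .right)
        // Display `rotated` wherever the cropped image is needed.
    }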