
Convert Vision boundingBox from VNFaceObservation to rect to draw on image


I am trying to use VNDetectFaceRectanglesRequest from the new Vision API to detect faces in images. Then, I draw a red rectangle on each detected face.

But I'm having an issue converting the boundingBox from VNFaceObservation into a CGRect. It seems that my only problem is the y origin.


Here's my code:

let request = VNDetectFaceRectanglesRequest { request, error in
    var final_image = UIImage(ciImage: image)
    if let results = request.results as? [VNFaceObservation] {
        for face_obs in results {
            // Redraw the current image, then stroke the face rect on top of it.
            UIGraphicsBeginImageContextWithOptions(final_image.size, false, 1.0)
            final_image.draw(in: CGRect(x: 0, y: 0, width: final_image.size.width, height: final_image.size.height))

            var rect = face_obs.boundingBox
            // RESULT 2 is when I uncomment this line to "flip" the y
            //rect.origin.y = 1 - rect.origin.y

            // boundingBox is normalized (0...1), so scale it to the image size.
            let conv_rect = CGRect(x: rect.origin.x * final_image.size.width,
                                   y: rect.origin.y * final_image.size.height,
                                   width: rect.width * final_image.size.width,
                                   height: rect.height * final_image.size.height)

            let c = UIGraphicsGetCurrentContext()!
            c.setStrokeColor(UIColor.red.cgColor)
            c.setLineWidth(0.01 * final_image.size.width)
            c.stroke(conv_rect)

            let result = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()

            final_image = result!
        }
    }
    DispatchQueue.main.async {
        self.image_view.image = final_image
    }
}


let handler = VNImageRequestHandler(ciImage: image)
DispatchQueue.global(qos: .userInteractive).async {
    do {
        try handler.perform([request])
    } catch {
        print(error)
    }
}

Here are the results so far.

Result 1 (without flipping the y)

Result 2 (flipping the y)



Solution

I found a solution on my own for converting the rect:

let rect = face_obs.boundingBox
let x = rect.origin.x * final_image.size.width
let w = rect.width * final_image.size.width
let h = rect.height * final_image.size.height
// Flip the y axis: boundingBox's origin is at the image's lower left, UIKit's is at the upper left.
let y = final_image.size.height * (1 - rect.origin.y) - h
let conv_rect = CGRect(x: x, y: y, width: w, height: h)

However, I marked @wei-jay's answer as the accepted one since it's more elegant.
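
For reference, the same conversion can be wrapped in a small helper. This is only a sketch: the extension method name boundingBoxRect(in:) is my own, and it uses Vision's VNImageRectForNormalizedRect to scale the normalized rect before flipping the y axis for UIKit.

import UIKit
import Vision

extension VNFaceObservation {
    // Hypothetical helper: map the normalized boundingBox (origin at the lower left)
    // to a UIKit-style CGRect (origin at the upper left) for an image of the given size.
    func boundingBoxRect(in size: CGSize) -> CGRect {
        // Scale the normalized rect up to the image's dimensions.
        let scaled = VNImageRectForNormalizedRect(boundingBox, Int(size.width), Int(size.height))
        // Flip the y axis: Vision measures from the bottom, UIKit from the top.
        return CGRect(x: scaled.origin.x,
                      y: size.height - scaled.origin.y - scaled.height,
                      width: scaled.width,
                      height: scaled.height)
    }
}

With that in place, the loop body above would reduce to something like let conv_rect = face_obs.boundingBoxRect(in: final_image.size).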


Solution

  • You have to translate and scale the rect according to the image. For example (a sketch of the previewView helpers follows the code):

    func drawVisionRequestResults(_ results: [VNFaceObservation]) {
        print("face count = \(results.count) ")
        previewView.removeMask()
    
        // Flip the y axis: Vision's origin is at the lower left, UIKit's is at the upper left.
        let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -self.view.frame.height)

        // Scale the normalized coordinates up to the view's dimensions.
        let translate = CGAffineTransform.identity.scaledBy(x: self.view.frame.width, y: self.view.frame.height)
    
        for face in results {
            // The coordinates are normalized to the dimensions of the processed image, with the origin at the image's lower-left corner.
            let facebounds = face.boundingBox.applying(translate).applying(transform)
            previewView.drawLayer(in: facebounds)
        }
    }
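
For context, removeMask() and drawLayer(in:) are helpers on the answer's custom preview view and are not defined in the post. Below is a minimal sketch of what they might look like, assuming previewView is a plain UIView subclass that outlines each face with a red CAShapeLayer; the class name FacePreviewView and the line width are my own choices.

    import UIKit

    // Hypothetical preview view backing the removeMask()/drawLayer(in:) calls above.
    class FacePreviewView: UIView {
        private var maskLayers: [CAShapeLayer] = []

        // Remove the rectangles drawn for the previous set of observations.
        func removeMask() {
            maskLayers.forEach { $0.removeFromSuperlayer() }
            maskLayers.removeAll()
        }

        // Outline one face rect (already converted to this view's coordinates) in red.
        func drawLayer(in rect: CGRect) {
            let maskLayer = CAShapeLayer()
            maskLayer.path = UIBezierPath(rect: rect).cgPath
            maskLayer.strokeColor = UIColor.red.cgColor
            maskLayer.fillColor = UIColor.clear.cgColor
            maskLayer.lineWidth = 2
            maskLayers.append(maskLayer)
            layer.addSublayer(maskLayer)
        }
    }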