My app runs Vision on a CoreML model. The camera frames the machine learning model runs on come from an ARKit sceneView (essentially, the camera feed). I have a method called loopCoreMLUpdate() that runs CoreML continuously so the model keeps processing new camera frames. The code looks like this:
import UIKit
import SceneKit
import ARKit
import Vision

class MyViewController: UIViewController {

    @IBOutlet var sceneView: ARSCNView! // The ARKit scene view the camera frames come from.

    var visionRequests = [VNRequest]()
    let dispatchQueueML = DispatchQueue(label: "com.hw.dispatchqueueml") // A serial queue.

    override func viewDidLoad() {
        super.viewDidLoad()

        // Setup ARKit sceneview
        // ...

        // Begin loop to update CoreML
        loopCoreMLUpdate()
    }

    // This is the problematic part.
    // In fact, once it's running there's no way to stop it, is there?
    func loopCoreMLUpdate() {
        // Continuously run CoreML whenever it's ready
        // (preventing "hiccups" in the frame rate).
        dispatchQueueML.async {
            // 1. Run update.
            self.updateCoreML()
            // 2. Loop this function.
            self.loopCoreMLUpdate()
        }
    }

    func updateCoreML() {
        ///////////////////////////
        // Get the camera image as RGB.
        guard let pixbuff = sceneView.session.currentFrame?.capturedImage else { return }
        let ciImage = CIImage(cvPixelBuffer: pixbuff)
        // Note: not entirely sure the CIImage is interpreted as RGB, but for now it works with the Inception model.
        // Note 2: also uncertain whether the pixel buffer should be rotated before handing it to Vision (VNImageRequestHandler); regardless, it still works well with the Inception model for now.

        ///////////////////////////
        // Prepare the CoreML/Vision request.
        let imageRequestHandler = VNImageRequestHandler(ciImage: ciImage, options: [:])
        // let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage!, orientation: myOrientation, options: [:]) // Alternatively, convert the above to an RGB CGImage and use that; UIInterfaceOrientation can inform the orientation value.

        ///////////////////////////
        // Run the image request.
        do {
            try imageRequestHandler.perform(self.visionRequests)
        } catch {
            print(error)
        }
    }
}
As you can see, the loop effect is created by a DispatchQueue with the label com.hw.dispatchqueueml that keeps calling loopCoreMLUpdate(). Is there any way to stop the queue once CoreML is not needed anymore? Full code is here.
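For reference, visionRequests above is typically populated with a VNCoreMLRequest. A minimal sketch, assuming a compiled Inceptionv3 model class (the model name is an assumption based on the Inception notes in the code):

func setupVision() {
    // Wrap the compiled CoreML model for Vision (Inceptionv3 here is an assumption).
    guard let model = try? VNCoreMLModel(for: Inceptionv3().model) else {
        print("Could not load the CoreML model")
        return
    }
    let classificationRequest = VNCoreMLRequest(model: model) { request, _ in
        // Handle the classification results produced for each frame.
        guard let results = request.results as? [VNClassificationObservation] else { return }
        print(results.prefix(3).map { "\($0.identifier): \($0.confidence)" })
    }
    classificationRequest.imageCropAndScaleOption = .centerCrop
    visionRequests = [classificationRequest]
}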
I suggest that instead of running the CoreML model from viewDidLoad, you use the ARSessionDelegate method func session(_ session: ARSession, didUpdate frame: ARFrame) for the same purpose. It hands you every frame, and you can check a flag there to enable or disable the model whenever you want. Like this:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // This is where we analyse each frame.
    // Skip the frame entirely while the flag is off.
    guard isMLFlow else { return }
    // (You could also return early if currentBuffer is not nil, i.e. a frame is
    // still being processed, or if the camera's tracking state is not normal.)

    currentBuffer = frame.capturedImage
    guard let buffer = currentBuffer, let image = UIImage(pixelBuffer: buffer) else { return }

    // <Code here to load model>
    CoreMLManager.manager.updateClassifications(for: image)
}
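For this delegate method to be called, the view controller has to adopt ARSessionDelegate and be set as the session's delegate; isMLFlow and currentBuffer are properties you declare yourself, and CoreMLManager is this answer's own helper. Note also that UIImage(pixelBuffer:) is not a UIKit initializer, so it needs a small extension. A sketch of the supporting pieces (the property and method names are assumptions):

class MyViewController: UIViewController, ARSessionDelegate {

    var isMLFlow = false                  // flip this to start/stop classification
    var currentBuffer: CVPixelBuffer?

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.session.delegate = self // so session(_:didUpdate:) is called per frame
    }

    // The session(_:didUpdate:) method above goes here.
}

// UIImage(pixelBuffer:) is not part of UIKit; a common helper extension:
extension UIImage {
    convenience init?(pixelBuffer: CVPixelBuffer) {
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else {
            return nil
        }
        self.init(cgImage: cgImage)
    }
}

With this approach there is nothing to tear down: while isMLFlow is false, frames simply pass through untouched.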