Tags: swift, object-detection, createml

Use a Create ML object detection model in Swift


Hello, I have created an object detection model in Create ML and imported it into my Swift project, but I can't figure out how to use it. Basically, I'm just looking to give the model an input and then receive an output. I have opened the ML model's Predictions tab and found the input and output variables, but I don't know how to implement it in code. I have searched the internet for an answer and found multiple code snippets for running ML models, but I can't get them to work.

This is the ML model: [screenshot of the model's Predictions tab]

This is the code I have tried:

let model = TestObjectModel()

guard let modelOutput = try? model.prediction(imagePath: "images_(2)" as! CVPixelBuffer, iouThreshold: 0.5, confidenceThreshold: 0.5) else {
    fatalError("Unexpected runtime error.")
}

print(modelOutput)

When running the code, I get this error:

error: Execution was interrupted, reason: EXC_BREAKPOINT (code=1, subcode=0x106c345c0).
The process has been left at the point where it was interrupted, use "thread return -x" to return to the state before expression evaluation.

Solution

  • OK, first of all, you have to decide which type of input your model declares. You can see it when you click on your model in the project navigator.
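
    You can also inspect the declared inputs and outputs in code. A minimal sketch, assuming `TestObjectModel` is the class Xcode generated for the question's model:

    import CoreML

    do {
        let model = try TestObjectModel(configuration: MLModelConfiguration())
        // The generated wrapper exposes the underlying MLModel, whose
        // description lists every input/output feature with its type.
        print(model.model.modelDescription.inputDescriptionsByName)
        print(model.model.modelDescription.outputDescriptionsByName)
    } catch {
        print("Failed to load model: \(error)")
    }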

    For example:

    import CoreML

    // Create an MLMultiArray matching the shape the model declares.
    let mlArray = try? MLMultiArray(shape: [1024], dataType: .float32)

    mlArray![index] = x // placeholders: fill the array with your data

    // The generated wrapper comes with a companion input class named
    // `<ModelName>Input`, so here it is `TestObjectModelInput`.
    let input = TestObjectModelInput(input: mlArray!)
    let model = TestObjectModel()

    do {
        let options = MLPredictionOptions()
        options.usesCPUOnly = true
        // `prediction` is your output
        let prediction = try model.prediction(input: input, options: options)
        print(prediction)
    } catch {
        fatalError(error.localizedDescription) // e.g. "Error computing NN outputs"
    }


    Another example, this time with an image as the input to your model:

    do {
        // Resize to the model's expected input size (416x416 in this
        // example), then convert to a CVPixelBuffer before predicting.
        if let resizedImage = resize(image: image, newSize: CGSize(width: 416, height: 416)),
           let pixelBuffer = resizedImage.toCVPixelBuffer() {
            let prediction = try model.prediction(image: pixelBuffer)
            let value = prediction.output[0].intValue
            print(value)
        }
    } catch {
        print("Error while doing predictions: \(error)")
    }
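
    Note that `image` and `output` here are just the feature names Xcode generated for this particular model; a Create ML object detector typically exposes `confidence` and `coordinates` multi-arrays instead, so check the names in your model's Predictions tab. The `resize` helper and the `toCVPixelBuffer` extension used above are: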
    
    
    // Redraws the image at the given size and returns the result.
    func resize(image: UIImage, newSize: CGSize) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
        image.draw(in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage
    }
    extension UIImage {
        // Renders the image into a newly created 32ARGB CVPixelBuffer.
        func toCVPixelBuffer() -> CVPixelBuffer? {
            let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                         kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
            var pixelBuffer: CVPixelBuffer?
            let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(self.size.width), Int(self.size.height),
                                             kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
            guard status == kCVReturnSuccess else {
                return nil
            }

            CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
            let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)

            let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
            let context = CGContext(data: pixelData, width: Int(self.size.width), height: Int(self.size.height),
                                    bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!),
                                    space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)

            // Flip the coordinate system so UIKit's top-left origin maps
            // correctly onto Core Graphics' bottom-left origin.
            context?.translateBy(x: 0, y: self.size.height)
            context?.scaleBy(x: 1.0, y: -1.0)

            UIGraphicsPushContext(context!)
            self.draw(in: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
            UIGraphicsPopContext()
            CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))

            return pixelBuffer
        }
    }
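
    Putting it together for the model in the question: a minimal sketch, assuming Xcode generated a `prediction(imagePath:iouThreshold:confidenceThreshold:)` method for `TestObjectModel` (the `imagePath:` label, the thresholds, and the 416x416 size come from the question and the example above; check your own generated interface for the exact names and input size):

    let model = TestObjectModel()

    // The input is named `imagePath` but is typed CVPixelBuffer, so it needs
    // an actual pixel buffer, not a file-name String force-cast to one; that
    // failing cast is what triggers the EXC_BREAKPOINT in the question.
    if let uiImage = UIImage(named: "images_(2)"),
       let resized = resize(image: uiImage, newSize: CGSize(width: 416, height: 416)),
       let buffer = resized.toCVPixelBuffer() {
        do {
            let output = try model.prediction(imagePath: buffer,
                                              iouThreshold: 0.5,
                                              confidenceThreshold: 0.5)
            // Create ML detectors typically return `confidence` and
            // `coordinates` multi-arrays; print the feature names to check.
            print(output.featureNames)
        } catch {
            print("Prediction failed: \(error)")
        }
    }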