I trained a CNN classification model that takes RGB images as input and produces a 1x7 output with probabilities for 7 class labels. I converted the model from Keras .h5 to Core ML. I tried the conversion both with and without class labels defined, and neither raised an issue during conversion. However, neither model works in iOS. Both crash when I call the line below:
guard let result = predictionRequest.results as? [VNCoreMLFeatureValueObservation] else {
    fatalError("model failed to process image")
}
The output definitions of both models are below. Could you please advise what is wrong with the model output? Do I have to add class labels or not? I am also confused about how to read out the most probable class. I have added the entire classification code as well; please see below. Since I am a beginner in iOS, your help is greatly appreciated. Thanks a lot indeed.
Model output definition in iOS, converted with class labels:
/// Identity as dictionary of strings to doubles
lazy var Identity: [String : Double] = {
    [unowned self] in return self.provider.featureValue(for: "Identity")!.dictionaryValue as! [String : Double]
}()

/// classLabel as string value
lazy var classLabel: String = {
    [unowned self] in return self.provider.featureValue(for: "classLabel")!.stringValue
}()
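With this variant, the most probable class can be read straight from the Identity dictionary; a minimal sketch, with placeholder probability values standing in for a real prediction:

// `probabilities` stands in for the Identity dictionary of a real
// prediction; the values here are placeholders.
let probabilities: [String : Double] = ["0": 0.05, "1": 0.80, "2": 0.15]
// max(by:) returns the entry with the highest probability.
if let best = probabilities.max(by: { $0.value < $1.value }) {
    print("top class \(best.key), probability \(best.value)")
}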
Model output definition in iOS, converted without class labels:
init(Identity: MLMultiArray) {
    self.provider = try! MLDictionaryFeatureProvider(dictionary: ["Identity" : MLFeatureValue(multiArray: Identity)])
}
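Without class labels the output is a raw MLMultiArray, so the highest probability has to be located manually; a minimal sketch of an argmax over the 1x7 array, assuming it can be addressed by a flat index:

import CoreML

// Returns the flat index of the largest value in the 1x7 output array.
func argmax(_ identity: MLMultiArray) -> Int {
    var bestIndex = 0
    var bestValue = identity[0].doubleValue
    for i in 1..<identity.count {
        let value = identity[i].doubleValue
        if value > bestValue {
            bestValue = value
            bestIndex = i
        }
    }
    return bestIndex
}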
Classification Code:
import Foundation
import CoreImage
import Vision

class ColorStyleVisionManager: NSObject {
    static let shared = ColorStyleVisionManager()
    static let MODEL = hair_color_class_labels().model

    var colorStyle = String()
    var hairColorFlag: Int = 0

    private lazy var predictionRequest: VNCoreMLRequest = {
        do {
            let model = try VNCoreMLModel(for: ColorStyleVisionManager.MODEL)
            let request = VNCoreMLRequest(model: model)
            request.imageCropAndScaleOption = VNImageCropAndScaleOption.centerCrop
            return request
        } catch {
            fatalError("can't load Vision ML Model")
        }
    }()

    func predict(image: CIImage) -> String {
        guard let result = predictionRequest.results as? [VNCoreMLFeatureValueObservation] else {
            fatalError("model failed to process image")
        }
        let firstResult = result.first
        if firstResult?.featureName == "0" {
            colorStyle = "Plain Coloring"
            hairColorFlag = 1
        } else if firstResult?.featureName == "1" {
            colorStyle = "Ombre"
            hairColorFlag = 2
        } else if firstResult?.featureName == "2" {
            colorStyle = "Sombre"
            hairColorFlag = 2
        } else if firstResult?.featureName == "3" {
            colorStyle = "HighLight"
            hairColorFlag = 3
        } else if firstResult?.featureName == "4" {
            colorStyle = "LowLight"
            hairColorFlag = 3
        } else if firstResult?.featureName == "5" {
            colorStyle = "Color Melt"
            hairColorFlag = 5
        } else if firstResult?.featureName == "6" {
            colorStyle = "Dip Dye"
            hairColorFlag = 4
        }
        let handler = VNImageRequestHandler(ciImage: image)
        do {
            try handler.perform([predictionRequest])
        } catch {
            print("error handler")
        }
        return colorStyle
    }
}
I found two different problems in my code. To make sure my model had been converted to .mlmodel correctly, I created a new classification model with Apple's Create ML tool. (By the way, it is fantastic, even though its accuracy seems lower than my original model's.) I compared that model's input and output types with mine, and my .mlmodel appears to be correct too. I then tried the Create ML model, and it crashed as well. I wasn't sure whether to expect the prediction results as "VNClassificationObservation" or "VNCoreMLFeatureValueObservation", so I changed to VNClassificationObservation; it still crashed. Then I realized that my handler was created and performed below the line that crashed, so the request had no results yet when the guard ran. I moved the handler above it, and voilà, it worked. I double-checked by switching back to VNCoreMLFeatureValueObservation, and it crashed again, which confirms the cast. So both problems are solved. Please see the corrected code below.
I strongly recommend using the Create ML tool to confirm that your model conversion works, for debugging purposes. It is just a few minutes' job.
import Foundation
import CoreImage
import Vision

class ColorStyleVisionManager: NSObject {
    static let shared = ColorStyleVisionManager()
    static let MODEL = hair_color_class_labels().model

    var colorStyle = String()
    var hairColorFlag: Int = 0

    private lazy var predictionRequest: VNCoreMLRequest = {
        do {
            let model = try VNCoreMLModel(for: ColorStyleVisionManager.MODEL)
            let request = VNCoreMLRequest(model: model)
            request.imageCropAndScaleOption = VNImageCropAndScaleOption.centerCrop
            return request
        } catch {
            fatalError("can't load Vision ML Model")
        }
    }()

    func predict(image: CIImage) -> String {
        // Perform the request first; its results only exist afterwards.
        let handler = VNImageRequestHandler(ciImage: image)
        do {
            try handler.perform([predictionRequest])
        } catch {
            print("error handler")
        }
        // A classifier with class labels returns VNClassificationObservation,
        // not VNCoreMLFeatureValueObservation.
        guard let results = predictionRequest.results as? [VNClassificationObservation],
              let firstResult = results.first else {
            fatalError("error to process request")
        }
        print(firstResult)
        // With VNClassificationObservation the top class label is in
        // `identifier` (there is no `featureName`).
        switch firstResult.identifier {
        case "0": colorStyle = "Plain Coloring"; hairColorFlag = 1
        case "1": colorStyle = "Ombre"; hairColorFlag = 2
        case "2": colorStyle = "Sombre"; hairColorFlag = 2
        case "3": colorStyle = "HighLight"; hairColorFlag = 3
        case "4": colorStyle = "LowLight"; hairColorFlag = 3
        case "5": colorStyle = "Color Melt"; hairColorFlag = 5
        case "6": colorStyle = "Dip Dye"; hairColorFlag = 4
        default: break
        }
        return colorStyle
    }
}
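For completeness, a hypothetical call site; the asset name "sample_hair" is a placeholder, and in a real app the CIImage would come from the camera or photo picker:

import UIKit

// "sample_hair" is a placeholder asset name for illustration only.
if let uiImage = UIImage(named: "sample_hair"),
   let ciImage = CIImage(image: uiImage) {
    let style = ColorStyleVisionManager.shared.predict(image: ciImage)
    print("Detected style: \(style)")
}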