keras, yolo, coreml, coremltools

YOLO to Keras to Core ML: get confidence and coordinates as outputs


❓Question

Hi, the following steps were taken:

  1. I trained Tiny YOLO on a custom data set with just one class.

  2. Converted the .weights (Darknet) file to .h5 (Keras). (Verified that the Keras model works fine as well.)

  3. Now, when I convert the Keras model to a Core ML model, I do not get coordinates and confidence as outputs.

Command used to convert to Core ML:

import coremltools

coreml_model = coremltools.converters.keras.convert(
    'model_data/yolo.h5',
    input_names='image',
    class_labels=output_labels,  # output_labels is my list with the single class name
    image_input_names='image',
    input_name_shape_dict={'image': [None, 416, 416, 3]}
)

However, a third-party YOLO model that was converted to Core ML does give coordinates and confidence as outputs.

See the screenshots:

Third-party YOLO model converted to Core ML (screenshot)

My YOLO model converted to Core ML (screenshot)

System Information

  • Keras==2.1.5

  • coremltools==3.3


Solution

  • Don't add class_labels=output_labels. That option turns your Core ML model into a classifier, and classifiers are treated specially in Core ML. Since your model is an object detector, you don't want this (see the conversion sketch after this answer).

    Look here for the rest: https://github.com/hollance/YOLO-CoreML-MPSNNGraph

    Basically, you need to decode the bounding box coordinates yourself in Swift or Obj-C code; the decoding sketch after this answer shows the logic. You can add the decoding to the model too, but in my experience that is slower. (There is a blog post that shows how to do this for SSD, which is similar to, but not exactly the same as, YOLO.)
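
For reference, here is a minimal sketch of the same conversion without class_labels, assuming coremltools 3.3 as in the question. The image_scale value and the saved filename are assumptions on my part (YOLO networks usually expect pixel values in [0, 1]); match them to your own preprocessing.

import coremltools

# Sketch only: same conversion as in the question, minus class_labels.
coreml_model = coremltools.converters.keras.convert(
    'model_data/yolo.h5',
    input_names='image',
    image_input_names='image',
    input_name_shape_dict={'image': [None, 416, 416, 3]},
    image_scale=1 / 255.0  # assumed preprocessing: scale pixels to [0, 1]
)
coreml_model.save('yolo.mlmodel')

The converted model then exposes the raw YOLO feature map as a single multi-array output instead of classifier outputs.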
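
To show what decoding involves, here is a minimal NumPy sketch of Tiny-YOLOv2-style box decoding, assuming a 13x13 grid, five anchors, one class, and a 416x416 input. The anchor values, the threshold, and the decode helper below are purely illustrative; take the anchors from the .cfg file you trained with, and port the loop to Swift or Obj-C in your app.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Standard Tiny YOLOv2 (VOC) anchors; replace with the anchors from your .cfg.
ANCHORS = [(1.08, 1.19), (3.42, 4.41), (6.63, 11.38), (9.42, 5.11), (16.62, 10.52)]

def decode(features, grid=13, cell_px=32, threshold=0.3):
    """features: raw network output reshaped to (grid, grid, num_anchors, 5 + num_classes)."""
    boxes = []
    for row in range(grid):
        for col in range(grid):
            for a, (anchor_w, anchor_h) in enumerate(ANCHORS):
                tx, ty, tw, th, to = features[row, col, a, :5]
                class_probs = softmax(features[row, col, a, 5:])

                # Box centre and size in pixels of the 416x416 network input
                x = (col + sigmoid(tx)) * cell_px
                y = (row + sigmoid(ty)) * cell_px
                w = anchor_w * np.exp(tw) * cell_px
                h = anchor_h * np.exp(th) * cell_px

                # Final confidence = objectness * best class probability
                confidence = sigmoid(to) * class_probs.max()
                if confidence > threshold:
                    boxes.append((x, y, w, h, confidence, int(class_probs.argmax())))
    return boxes  # run non-maximum suppression on these before drawing them

These x/y/w/h values and the confidence are the outputs you were expecting; with a plain conversion they arrive as one raw tensor that you post-process like this.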