ios, machine-learning, coreml, createml

How to refine a CoreML image classifier model with an object detection model?


I have an image classifier model created with CreateML.

The labelling in the training set is roughly:

  • Image contains object A -> label a
  • Image contains object B -> label b
  • Image contains object C -> label c
  • Image contains object A + B -> label a
  • Image contains object A + B + C -> label c

You could say there is some "prioritization" of objects: object A has a higher priority than B, so when both are present, label a applies. Likewise, object C has the highest priority, so label c applies whenever C is present.
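
In code, the labelling rule is roughly the following (Swift purely for illustration; the function and object names are not part of my app):

```swift
// Illustrative only: the single label the current training set assigns,
// given which objects appear in an image (priority: C, then A, then B).
func trainingLabel(for objects: Set<String>) -> String? {
    if objects.contains("C") { return "c" }   // C has the highest priority
    if objects.contains("A") { return "a" }   // A wins over B
    if objects.contains("B") { return "b" }
    return nil                                // no known object in the image
}

// trainingLabel(for: ["A", "B"])      == "a"
// trainingLabel(for: ["A", "B", "C"]) == "c"
```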

This is clearly not optimal for the algorithm, so I would rather use an object detection model, which seems more appropriate. But I already have a huge data set with 100,000s of manually (and correctly) classified images that would not be usable for training it, and I would have to build a new training set for object detection from scratch, which is obviously a cost issue and won't reach the size of the existing data set anytime soon.

Is there a way I can leverage the existing data set to build an image classification model and augment it with an object detection model that I build from scratch, even though the latter may only have a few hundred items in its training set?


Solution

  • One way to solve this is to use multi-label classification, where the model tells you the probability that A is present, the probability that B is present, and the probability that C is present, but these are independent of one another. Unfortunately, Create ML cannot train this kind of model.
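
If you train such a multi-label model outside Create ML (for example in Keras or PyTorch) and convert it with coremltools, consuming it on iOS and collapsing the independent probabilities back into your single label could look roughly like the sketch below. The threshold, the priority order, and the assumption that the converted model is exposed as a Core ML classifier (so Vision returns VNClassificationObservations) are all assumptions, not part of your existing project:

```swift
import CoreGraphics
import CoreML
import Vision

// Sketch only: assumes a multi-label classifier trained outside Create ML and
// converted with coremltools, so that Vision reports one independent
// probability per object label ("a", "b", "c").
let presenceThreshold: VNConfidence = 0.5   // assumed cut-off, tune as needed
let priority = ["c", "a", "b"]              // C beats A, A beats B (as in the question)

func finalLabel(in cgImage: CGImage, using multiLabelModel: MLModel) throws -> String? {
    let vnModel = try VNCoreMLModel(for: multiLabelModel)
    let request = VNCoreMLRequest(model: vnModel)
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])

    // Each observation carries an independent per-object probability.
    guard let observations = request.results as? [VNClassificationObservation] else {
        return nil
    }
    let present = Set(observations
        .filter { $0.confidence >= presenceThreshold }
        .map { $0.identifier })

    // Collapse the independent detections back to a single label using the
    // same priority rule the existing data set already encodes.
    return priority.first { present.contains($0) }
}
```

The point is that the per-object probabilities stay independent, so images containing A + B or A + B + C no longer compete for a single softmax slot; the priority rule is applied explicitly at inference time instead of being baked into the labels.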