Tags: swift, image, coreml, createml

Core ML: model created with Create ML works fine, but how do I handle inputs it wasn't trained on?


I am creating a model with Create ML. I added 1000 pictures of 2 classes: 500 cats, 500 dogs. The model works pretty well, but when I give it something non-dog/cat related, a waterfall image for example, it returns 100% Dog. Any idea how to handle this issue?

1) I read that some image classifiers let you supply a negative class, meaning images that are not relevant to the ones I am looking for. Any idea how to do this with Create ML or with some different tool?

2) Is it better, instead of making my own model, to re-train an existing model by adding my images? Is that possible with Create ML? From what I read, you can't. Any recommendations?

Since I am new to Core ML, any direction you can point me in would be appreciated.

Thanks


Solution

  • If your classifier is only trained on two types of images such as cats and dogs, then you should only use it on pictures of cats and dogs. If you use it on any other picture, it will still predict cat or dog.

    If you want to make a classifier for cat / dog / anything else, then you need to add a third category with pictures of things that are not cats or dogs.

    Usually this category will have many more pictures in it than the other two categories (since there are lots of things that are not cats or dogs), causing a class imbalance. I'm not sure if Create ML can compensate for that imbalance.
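    The three-category setup above can be sketched with Create ML's `MLImageClassifier` API. This is a minimal sketch, not the asker's actual code: it assumes the training images have been sorted into three subfolders whose names become the labels, where `other` is the hypothetical negative class filled with unrelated images (landscapes, objects, etc.). The paths are placeholders.

    ```swift
    import CreateML
    import Foundation

    // Assumed layout (folder names become class labels):
    //   Training/cat/    – 500 cat images
    //   Training/dog/    – 500 dog images
    //   Training/other/  – many images that are neither cats nor dogs
    let trainingDir = URL(fileURLWithPath: "/path/to/Training")

    // Train a three-class classifier; Create ML treats each
    // labeled subfolder as one class.
    let classifier = try MLImageClassifier(
        trainingData: .labeledDirectories(at: trainingDir)
    )

    // Export the trained model for use with Core ML / Vision.
    try classifier.write(to: URL(fileURLWithPath: "/path/to/CatDogOther.mlmodel"))
    ```

    At prediction time the model can then answer "other" for a waterfall image instead of being forced to choose between cat and dog. Inspecting the predicted probabilities and rejecting low-confidence results is a complementary safeguard, though a two-class model will often be confidently wrong on out-of-domain images, which is why the negative class matters.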