I'm fairly new to Core ML, but I've had heaps of fun playing around with it so far. I'm currently learning how to train models for facial recognition by creating the model in a playground and validating its results. I then save the .mlmodel and implement it in my app.
My issue is that when I test it in the playground it seems to have a very high degree of accuracy, but when I implement the same model with the same pictures in my app environment I get completely different results, and it's pretty much unusable.
Here's some of the output I'm getting from the debug console.
[<VNClassificationObservation: 0x282deff00> BFB8D19B-40AE-45F9-B979-19C11A919DBE, revision 1, 0.778162 "Others", <VNClassificationObservation: 0x282dede60> 9E9B2AC8-3969-4086-B3B0-6ED6BEDFFE71, revision 1, 0.221838 "Me"]
Here it wrongly classifies an image of me as someone else, even though it correctly classified the same image in the playground during testing. The app itself seems to be working fine; it's just the model that's suddenly off.
What am I missing here?
Thanks
Usually this happens when you load your images differently in the playground than in your app. I would make sure the images you use are exactly the same in both cases: not just the image content, but also how they get loaded and preprocessed before you hand them to the model. Orientation metadata and Vision's crop/scale behavior are common culprits.
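As a rough sketch of what "loading the same way" means in practice, here's one way to run the classification through Vision with an explicit orientation and crop/scale option. FaceClassifier is a hypothetical name for your generated model class; substitute your own.

import Vision
import CoreML
import ImageIO

// FaceClassifier is a placeholder for the class Xcode generates from your .mlmodel.
func classify(cgImage: CGImage, orientation: CGImagePropertyOrientation) throws {
    let coreMLModel = try FaceClassifier(configuration: MLModelConfiguration()).model
    let vnModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: vnModel)
    // Make this match whatever cropping/scaling the playground test used.
    request.imageCropAndScaleOption = .centerCrop

    // Passing the correct orientation matters: an image from the camera often
    // carries EXIF orientation that Vision cannot infer from the raw pixels.
    let handler = VNImageRequestHandler(cgImage: cgImage, orientation: orientation)
    try handler.perform([request])

    if let results = request.results as? [VNClassificationObservation] {
        for observation in results {
            print(observation.identifier, observation.confidence)
        }
    }
}

One thing to watch out for: if you start from a UIImage, map its imageOrientation to a CGImagePropertyOrientation explicitly rather than casting the raw value, because the two enums don't use the same raw values. If the playground loaded images from disk (already correctly oriented) while the app feeds camera frames, that alone can flip your classifications the way you're seeing.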