Tags: macos, frameworks, coreml, apple-vision

Apple Vision Framework: detect smiles or happy faces with observations?


I'm working on a project that uses the Vision Framework to detect faces in images and then uses a CoreML model to detect if the face is smiling. The problem is that the CoreML model file is nearly 500 MB. I don't want to bloat my app that much.

Since I'm already getting the VNFaceLandmarks2D observation data from the Vision framework, I thought I would try to use that to detect smiles.
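Something along these lines is what I had in mind — a rough, untested heuristic of my own (not anything Vision provides) that compares the height of the mouth corners against the average height of the outer-lip contour. The function name, the threshold, and the heuristic itself are all just my guesses; only outerLips and normalizedPoints come from the framework:

    import Vision

    // Rough, untested heuristic: in Vision's normalized coordinates (origin at the
    // bottom-left, y increasing upward), the mouth corners of a smiling face tend
    // to sit higher than the average of the outer-lip contour.
    func looksLikeSmile(_ face: VNFaceObservation) -> Bool {
        guard let lips = face.landmarks?.outerLips else { return false }
        let pts = lips.normalizedPoints
        guard pts.count > 2,
              let leftCorner  = pts.min(by: { $0.x < $1.x }),   // leftmost point of the contour
              let rightCorner = pts.max(by: { $0.x < $1.x })     // rightmost point of the contour
        else { return false }

        // Average y of the whole outer-lip contour as a crude reference height.
        let centerY = pts.map { $0.y }.reduce(0, +) / CGFloat(pts.count)

        // Corners noticeably above the reference height -> call it a smile.
        let threshold: CGFloat = 0.02   // would need to be tuned empirically
        return (leftCorner.y + rightCorner.y) / 2 > centerY + threshold
    }

I haven't tried this on real images yet, so I have no idea how robust it would be.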

Has anyone tried using the VNFaceLandmarks2D data from the Vision framework to determine whether a face is happy or smiling?

If so, how did you do it and how well did it work?

Thanks!


Solution

  • One solution is to use a smaller Core ML model. It sounds like you're using a model based on VGGNet; there are several much smaller architectures (in the 4–16 MB range) that achieve comparable accuracy and are therefore much better suited to mobile apps.
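
    Once you have a compact model converted to Core ML, you can run it on the face crop through Vision with a VNCoreMLRequest. As a sketch (SmileClassifier is a placeholder for whatever Xcode-generated model class you end up with, and the face crop is assumed to already be a CGImage):

        import Vision
        import CoreML

        // "SmileClassifier" stands in for the Xcode-generated class of whatever
        // compact model you convert (e.g. a MobileNet/SqueezeNet-style classifier).
        func classifySmile(in faceImage: CGImage) {
            guard let mlModel = try? SmileClassifier(configuration: MLModelConfiguration()).model,
                  let visionModel = try? VNCoreMLModel(for: mlModel) else { return }

            let request = VNCoreMLRequest(model: visionModel) { request, _ in
                guard let results = request.results as? [VNClassificationObservation],
                      let top = results.first else { return }
                print("Top label: \(top.identifier), confidence: \(top.confidence)")
            }
            // Crop/scale the input the same way the model was trained.
            request.imageCropAndScaleOption = .centerCrop

            let handler = VNImageRequestHandler(cgImage: faceImage, options: [:])
            try? handler.perform([request])
        }

    You'd feed this the face rectangle you already get from your VNDetectFaceRectanglesRequest, cropped out of the original image.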