
How to store multiple features for face and find distance?


I am working on a project based on facial recognition and verification. I am using a Siamese network to get a 128-dimensional embedding vector for each face.

I store the encodings/embeddings of each person's face in a database, and then match an incoming face's encoding against the previously stored ones to recognize the person.

To make the system robust, I want to store more than one encoding of the same person. So far I have stored only a single encoding vector per person and matched it with:

From the face_recognition library (note that compare_faces returns a list of True/False matches, one per stored encoding; face_recognition.face_distance returns the actual distances):

face_recognition.compare_faces( stored_list_of_encodings, checking_image_encodings )

That doesn't work all the time, because each person is represented by only a single encoding. To make the system sufficient for most cases, I want to store at least 3 encodings of the same person and compare the new face against all of them.

Now the question: how do I store multiple embeddings of the same person and then compare the distances?

I am using the face_recognition library and a Siamese network for feature extraction.
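One straightforward way to handle multiple embeddings per person is to keep a list of vectors for each identity and take the minimum Euclidean distance to any of them (this is the same distance that face_recognition.face_distance computes). A minimal sketch with NumPy, using made-up 128-d vectors in place of real network output (the names, vectors, and 0.6 tolerance are illustrative assumptions; 0.6 is face_recognition's default tolerance):

```python
import numpy as np

# Hypothetical stored data: several 128-d embeddings per person.
# In practice these would come from your Siamese network (or
# face_recognition.face_encodings()).
known_faces = {
    "alice": [np.zeros(128), np.full(128, 0.01)],
    "bob": [np.full(128, 0.05)],
}

def best_match(new_encoding, known_faces, tolerance=0.6):
    """Return (name, distance) of the closest stored person,
    or (None, distance) if nobody is within the tolerance."""
    best_name, best_dist = None, float("inf")
    for name, encodings in known_faces.items():
        # Minimum Euclidean distance over all stored encodings of this person
        dist = min(np.linalg.norm(np.asarray(e) - new_encoding)
                   for e in encodings)
        if dist < best_dist:
            best_name, best_dist = name, dist
    if best_dist <= tolerance:
        return best_name, best_dist
    return None, best_dist

# A new embedding that happens to lie close to Alice's stored vectors
new_face = np.full(128, 0.012)
name, dist = best_match(new_face, known_faces)
print(name)  # "alice"
```

Storing the encodings as one list per person (e.g. serialized per database row) keeps the lookup simple, and taking the minimum distance means any one of the stored views of a person can produce a match.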


Solution

  • Have you considered using an SVM classifier to classify the faces? The input to the SVM would be the 128-dimensional embedding vector. You can then collect a few of the vectors belonging to a single person's face (3 in your case) and fit them to the SVM as one class, and do the same for the other faces (classes).

    Then, when predicting a face, simply feed in the new vector and run

    svm.predict([..])
    

    I had a similar use case in my project, but I used Facenet as the feature extractor instead. It works perfectly.
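    The approach above can be sketched with scikit-learn's SVC. The embeddings here are synthetic stand-ins (random 128-d vectors clustered per person); in a real system each row of X would be an embedding from the Siamese network or Facenet:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Hypothetical data: 3 stored 128-d embeddings per person, as suggested.
    # Real embeddings would come from your feature extractor.
    def fake_embeddings(center, n=3):
        return center + 0.01 * rng.standard_normal((n, 128))

    alice_center = rng.standard_normal(128)
    bob_center = rng.standard_normal(128)

    X = np.vstack([fake_embeddings(alice_center), fake_embeddings(bob_center)])
    y = ["alice"] * 3 + ["bob"] * 3

    clf = SVC(kernel="linear")
    clf.fit(X, y)

    # Classify a new embedding that lies near Alice's cluster
    new_vec = alice_center + 0.01 * rng.standard_normal(128)
    print(clf.predict([new_vec])[0])  # "alice"
    ```

    One caveat: a plain SVM always assigns one of the known classes, so an unknown face still gets a name. If you need to reject strangers, combine the classifier with a distance threshold on the embeddings or use SVC's decision scores.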