Tags: azure, azure-cognitive-services, face-api

How much of the picture should I transfer to add a face to a person?


When using the Face API of Microsoft Azure, I can call the detect function to detect faces in an image. I get back a face ID, which I can use to have Azure identify the face against my existing persons.

However, when I want to add that face to a person so it can be identified in the future, I have to send the image again; it is not enough to reference the previously detected face. I assume the neural network trains specifically on that face and needs more information than the ID provides.

I would like to reduce the amount of data I send over the network. My idea is to send only the part of the picture where the face is. I know where it is, because the detect call returns a rectangle around it.

Now, from my own experience of how I as a human look at pictures, I assume it would help the neural network to see a bit more around the face: to see the neck, and to see that something at the edge of the face rectangle is not actually part of the face but continues outside of the crop. It would help my human brain, so it might help the neural network.

How much more than the face rectangle itself should I send with the request? An extra 10% in each dimension? Does anyone have experience with this?


Solution

  • Based on my test, you can simply crop out the face rectangle that the Azure Face API returned for your image and use that crop to add a face to a person. This is my test picture, with the face rectangle Azure returned marked on it:

    *(image: test photo with the detected face rectangle marked)*

    I cropped that face rectangle out as a separate picture:

    *(image: the cropped face rectangle)*

    And I used this cropped picture to add a face to a person successfully: *(image: successful Add Face response)*

    Side photo: *(image: side-view test photo)*
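    A sketch of that workflow in Python follows. The endpoint, subscription key, and IDs are placeholders, and the default 10% margin is the asker's suggestion, not an official recommendation; Pillow is assumed for the cropping step. The `faceRectangle` dict shape and the Add Face REST route match the Face API's documented v1.0 interface.

    ```python
    import io
    import json
    import urllib.request

    ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"  # placeholder
    KEY = "<your-subscription-key>"                                 # placeholder

    def expand_rect(rect, img_w, img_h, margin=0.10):
        """Grow Azure's faceRectangle ({"left", "top", "width", "height"})
        by `margin` of each dimension, clamped to the image bounds."""
        dx = int(rect["width"] * margin)
        dy = int(rect["height"] * margin)
        left = max(0, rect["left"] - dx)
        top = max(0, rect["top"] - dy)
        right = min(img_w, rect["left"] + rect["width"] + dx)
        bottom = min(img_h, rect["top"] + rect["height"] + dy)
        return (left, top, right, bottom)

    def crop_face(image_path, rect, margin=0.10):
        """Return the cropped face as JPEG bytes, ready for upload."""
        from PIL import Image  # third-party (Pillow); imported lazily
        img = Image.open(image_path)
        box = expand_rect(rect, img.width, img.height, margin)
        buf = io.BytesIO()
        img.crop(box).save(buf, format="JPEG")
        return buf.getvalue()

    def add_face(person_group_id, person_id, face_bytes):
        """POST the cropped bytes to the PersonGroup Person - Add Face endpoint."""
        url = (f"{ENDPOINT}/face/v1.0/persongroups/{person_group_id}"
               f"/persons/{person_id}/persistedFaces")
        req = urllib.request.Request(
            url, data=face_bytes, method="POST",
            headers={"Ocp-Apim-Subscription-Key": KEY,
                     "Content-Type": "application/octet-stream"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["persistedFaceId"]
    ```

    Note that the Add Face call also accepts a `targetFace` query parameter that takes a face rectangle, so if you do send a crop with extra margin, you can still tell Azure exactly which region is the face.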