I am writing a face recognition program using OpenCV.
When generating the eigenfaces (the "learning" step): do the training photos need to be of the people I want to recognize later, or can they be photos of any faces?
And how many photos do I need to use to get decent accuracy? More like 20, or 2,000?
Thanks
Eigenfaces works by projecting faces into a particular "face basis" computed with principal component analysis (PCA). That basis does not have to be built from photos of the people you want to recognize.
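For concreteness, here is a minimal sketch of that projection step using OpenCV's PCA helpers in Python. The file paths, image size, and number of components are placeholders, and the images are assumed to be pre-cropped grayscale faces of identical size:

```python
import cv2
import numpy as np

# Hypothetical paths to grayscale, cropped, same-size face images.
# These do NOT need to be the people you will recognize later.
basis_paths = ["faces/%04d.png" % i for i in range(1000)]
imgs = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in basis_paths]
data = np.asarray([im.flatten() for im in imgs], dtype=np.float32)  # one row per face

# "Learning" step: mean face plus the top principal components (the eigenfaces).
mean, eigenvectors = cv2.PCACompute(data, mean=None, maxComponents=100)

# Project any new face into that basis; the resulting coefficient vector is
# what you store per person and compare (e.g. nearest neighbour) at test time.
probe = cv2.imread("probe.png", cv2.IMREAD_GRAYSCALE).astype(np.float32).reshape(1, -1)
weights = cv2.PCAProject(probe, mean, eigenvectors)
print(weights.shape)  # (1, 100)
```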
Instead, I would encourage you to build the basis from a big database (at least 10k faces) that is well registered (eigenfaces doesn't work well with images that are shifted). The original paper by Turk and Pentland was remarkable partly because of the large pin-registered face database they released. I would also normalize the lighting so it is consistent between the database and your test inputs.
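As a rough illustration of that preprocessing, assuming the faces are already roughly aligned, something simple like resizing to a common size plus histogram equalization is a starting point; the canonical size here is arbitrary:

```python
import cv2

# Hypothetical canonical face size; every image in the database and every
# test input should go through the same normalization.
FACE_SIZE = (100, 100)

def preprocess(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, FACE_SIZE, interpolation=cv2.INTER_AREA)
    # Crude lighting normalization; strong shadows or very different
    # light directions will still hurt eigenfaces.
    return cv2.equalizeHist(img)
```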
In terms of testing, the first 20 components should be sufficient to reconstruct a human-recognizable face, and the first 100 components should be enough to discriminate between any two faces on an essentially arbitrarily large dataset.
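If you want to sanity-check that on your own data, you can reconstruct a face from its first k components and inspect the result. This sketch reuses the `mean`, `eigenvectors`, and `probe` arrays from the PCA example above:

```python
import cv2
import numpy as np

for k in (20, 100):
    w = cv2.PCAProject(probe, mean, eigenvectors[:k])      # keep only the first k components
    recon = cv2.PCABackProject(w, mean, eigenvectors[:k])  # back to pixel space
    err = np.linalg.norm(recon - probe)
    print("k=%d  reconstruction error: %.1f" % (k, err))
```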