Tags: computer-vision, face-detection, face-recognition, eigenvector

What are Eigenfaces generated from?


I'm working with eigenfaces for a facial recognition program I am writing, and I have a couple of questions about how eigenfaces are actually generated:

  1. Are they generated from a lot of pictures of different people, or a lot of pictures of the same person?

  2. Do these people need to include the people you want to recognize? If not, then how would any type of comparison be made?

  3. Is an eigenface determined for every image you provide, or do multiple pictures go towards creating one eigenface?

This is all about the generation or learning phase of the eigenfaces. Thanks for any help or pointing me in the right direction!


Solution

    1. Many pictures of many different people are needed, so that the eigenface basis has enough support to cover the range of faces you want to represent.
    2. No, the training set does not have to include the people you want to recognize, but it does need to span all the relevant dimensions of variation. A good analogy is barycentric coordinates for describing the location of a point in a triangle: you are taking a weighted average of the vertices. If you don't have sufficient vector support (for example, only two points), then you can't describe points that lie off the line, no matter how you play with the weights. This is essentially bjoernz's point about Caucasian vs. Asian faces. Note that this analogy is a gross simplification; the weights in eigenfaces are actually more like PCA or Fourier coefficients.
    3. The eigenfaces themselves are computed from the whole training set at once: they are the principal components of the full set of images. Each individual image is then represented as a vector of coefficients, i.e. its projection onto those eigenfaces (see the sketch below).
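
To make the three points above concrete, here is a minimal NumPy sketch of the training and recognition steps. The function and variable names (`build_eigenfaces`, `train_images`, and so on) are purely illustrative, and the images are assumed to be grayscale and already registered:

```python
import numpy as np

def build_eigenfaces(train_images, n_components=20):
    # Eigenfaces are computed from the *whole* training set at once.
    n, h, w = train_images.shape
    X = train_images.reshape(n, h * w).astype(np.float64)

    # Subtract the "average face" so PCA captures variation, not brightness.
    mean_face = X.mean(axis=0)
    X_centered = X - mean_face

    # The principal components of the centered image set are the eigenfaces.
    # SVD here is equivalent to eigendecomposition of the covariance matrix.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    eigenfaces = Vt[:n_components]   # each row is one eigenface
    return mean_face, eigenfaces

def project(image, mean_face, eigenfaces):
    # Represent one face as a vector of coefficients (weights) in the
    # eigenface basis -- the "weighted average" described above.
    return eigenfaces @ (image.ravel().astype(np.float64) - mean_face)

def recognize(query, gallery_weights, mean_face, eigenfaces):
    # Nearest neighbour in coefficient space; gallery_weights holds the
    # projected faces of the people you have enrolled.
    q = project(query, mean_face, eigenfaces)
    distances = np.linalg.norm(gallery_weights - q, axis=1)
    return int(np.argmin(distances))
```

Recognition then happens entirely in coefficient space: you enroll known people by projecting their images, and compare a new face against those stored coefficient vectors.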

    Nota bene: you need very good registration of the faces. Eigenfaces is notoriously sensitive to translation and rotation, so your results are likely to be poor unless you register well. The original Turk and Pentland paper was groundbreaking not just because of the technique, but because of the scale and quality of the data set they gathered, which is what made the technique work.
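
In practice, registration usually means warping every image so that a few landmarks (for example, the eye centres and the mouth) land on fixed positions before the eigenfaces are computed. A rough sketch with OpenCV, assuming you already have landmark coordinates from some detector (the canonical positions below are made-up values for illustration):

```python
import cv2
import numpy as np

# Hypothetical canonical positions for left eye, right eye, and mouth centre
# in a 100x100 registered crop.
CANONICAL = np.float32([[30, 40], [70, 40], [50, 75]])

def register_face(image, left_eye, right_eye, mouth):
    # Affine warp mapping the detected landmarks onto the canonical ones,
    # so every face is presented to the eigenface model the same way.
    src = np.float32([left_eye, right_eye, mouth])
    M = cv2.getAffineTransform(src, CANONICAL)
    return cv2.warpAffine(image, M, (100, 100))
```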