Tags: c++, opencv, face-recognition

OpenCV get faces from image and predict with model


This piece of code, which retrieves faces from a grayscale image (already converted to cv::Mat), behaves oddly. What am I doing wrong?

// in the initializer list
m_model(cv::face::FisherFaceRecognizer::create())
// ....
const cv::Mat grayscale = cv::imread("photo_15.jpeg", cv::IMREAD_GRAYSCALE);

std::vector<cv::Rect> faceCandidates;
m_cascade.detectMultiScale(grayscale, faceCandidates);

int label = -1;            // FaceRecognizer::predict expects a signed int label
double confidence = 0.0;
// this line is for testing purposes only: predict on the whole frame
m_model->predict(grayscale, label, confidence);

This works fine: the label refers to the correct person and the confidence is within 10. But let's continue with the function:

for (auto &faceCandidateRegion : faceCandidates) {
    cv::Mat faceResized;
    // m_size is a member holding 1280x720 in my case, equal to the size of the model's training photos.
    cv::resize(cv::Mat(grayscale, faceCandidateRegion), faceResized,
               cv::Size(m_size.width(), m_size.height()));

    // Recognize the current face.
    m_model->predict(faceResized, label, confidence);
    // ... other processing
}

This piece of code behaves completely wrong: it always produces an incorrect label, and the confidence is around 45,000-46,000, even when I use a photo from the training set for recognition.

Any idea what I'm doing wrong here? For testing, I've tried this with the Fisher, Eigen, and LBPH recognizers, all with the same wrong result.
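
For reference, a minimal sketch of how the three algorithms can be swapped (all of them implement the cv::face::FaceRecognizer interface from the opencv_contrib face module, so the train/predict code stays identical):

#include <opencv2/face.hpp>

// All three recognizers share the cv::face::FaceRecognizer interface,
// so swapping the algorithm does not change the train/predict calls.
cv::Ptr<cv::face::FaceRecognizer> m_model =
    cv::face::FisherFaceRecognizer::create();
// cv::Ptr<cv::face::FaceRecognizer> m_model =
//     cv::face::EigenFaceRecognizer::create();
// cv::Ptr<cv::face::FaceRecognizer> m_model =
//     cv::face::LBPHFaceRecognizer::create();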

Update: each model in the app represents a group of a few users, where each user is represented by 2-6 photos; this is why I train several users into one model.

Here is the code that trains the models:

std::size_t
Recognizer::extractFacesAndConvertGrayscale(const QByteArray &rgb888, std::vector<cv::Mat> &faces)
{
    // Decode the encoded image bytes straight to a grayscale cv::Mat.
    cv::Mat frame = cv::imdecode(std::vector<uchar>{rgb888.cbegin(), rgb888.cend()},
                                 cv::IMREAD_GRAYSCALE);
    std::vector<cv::Rect> faceCandidates;
    m_cascade.detectMultiScale(frame, faceCandidates);
    for (const auto &face : faceCandidates) {
        // Crop the detected face and resize it to the model's input size.
        cv::Mat faceResized;
        cv::resize(cv::Mat{frame, face}, faceResized,
                   cv::Size(this->m_size.width(), this->m_size.height()));

        faces.push_back(faceResized);
    }

    return faceCandidates.size();
}

bool Recognizer::train(const std::vector<qint32> &labels, const std::vector<QByteArray> &rgb888s)
{
    if (labels.empty() || rgb888s.empty() || labels.size() != rgb888s.size())
        return false;

    std::vector<cv::Mat> mats;
    std::vector<int32_t> processedLabels;
    std::size_t i = 0;
    for (const QByteArray &data : rgb888s)
    {
        // Each image may yield several face crops; repeat its label once per
        // crop. Advance the label index for every image, even when no face is
        // detected, so that labels stay aligned with their images.
        const std::size_t count = this->extractFacesAndConvertGrayscale(data, mats);
        const qint32 label = labels[i++];
        if (count)
            std::fill_n(std::back_inserter(processedLabels), count, label);
    }
    m_model->train(mats, processedLabels);

    return true;
}
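
For illustration, a hypothetical call of train(); the recognizer instance and the photo1..photo5 buffers are made-up placeholders, and each label repeats once per photo of the same user:

// Hypothetical usage: user 1 has three photos, user 2 has two.
// photo1..photo5 are QByteArrays holding encoded image data (e.g. JPEG bytes).
const std::vector<qint32> labels = {1, 1, 1, 2, 2};
const std::vector<QByteArray> photos = {photo1, photo2, photo3, photo4, photo5};

Recognizer recognizer;
if (!recognizer.train(labels, photos))
    qWarning() << "training failed"; // needs <QDebug>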

Solution

  • We resolved this in the comments, but for future reference:

    The fact that this line

    // this line is for testing purposes only: predict on the whole frame
    m_model->predict(grayscale, label, confidence);
    

    had better confidence than

    // Recognize the current face.
    m_model->predict(faceResized, label, confidence);
    

    occurred because the model was trained with non-cropped images, while the detector crops the faces.

    Rather than passing the whole image to prediction, the model should be trained with cropped faces so that the training and prediction inputs match (see the sketch after this list):

    • The classifier performs independently of the size of the faces in the original image, thanks to the multiscale detection; i.e. the size and position of the faces in the image become invariant.
    • The background does not interfere with classification. The original input had a 16:9 aspect ratio, so at the very least the sides of the image would add noise to the descriptors.
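
    To summarize, here is a minimal sketch of the matching prediction pipeline. It applies the same detect-crop-resize steps as training; trainedSize stands for whatever crop size the training used, and cascade/model are illustrative names:

    // Apply the same detect -> crop -> resize steps as during training.
    const cv::Mat gray = cv::imread("photo_15.jpeg", cv::IMREAD_GRAYSCALE);

    std::vector<cv::Rect> faces;
    cascade.detectMultiScale(gray, faces);

    for (const cv::Rect &face : faces) {
        cv::Mat crop;
        cv::resize(gray(face), crop, trainedSize); // same size as the training crops

        int label = -1;
        double confidence = 0.0;
        model->predict(crop, label, confidence);
        // label/confidence now describe a cropped face, not the whole frame
    }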