Hi, I am using OpenCV and C++. I have a left profile and a frontal face image of the same individual. After determining the transformation matrix between the two face images and applying that transformation matrix to the left profile image, when I superimpose the result on the original frontal face, shouldn't it give me a face like this, with everything aligned? I am obviously doing something wrong and getting this result. Can anyone help with this, please? Here is the link to the research paper: http://www.utdallas.edu/~herve/abdi-ypaa-jmm2006.pdf
Taking the images from one of your previous questions, manually finding pixel correspondences for the green dots only (so 3 correspondences), and using this code:
// firstFacePositions / secondFacePositions: three corresponding pixel
// positions (the green dots), picked by hand in face1 (profile) and face2 (frontal).
//cv::Mat perspectiveTransform = cv::getPerspectiveTransform(firstFacePositions, secondFacePositions);
cv::Mat affineTransform = cv::getAffineTransform(firstFacePositions, secondFacePositions);
std::cout << affineTransform << std::endl;

// getAffineTransform returns a 2x3 matrix; embed it in a 3x3 identity
// so it can be used with warpPerspective.
cv::Mat perspectiveTransform = cv::Mat::eye(3, 3, CV_64FC1);
for (unsigned int y = 0; y < 2; ++y)
    for (unsigned int x = 0; x < 3; ++x)
    {
        perspectiveTransform.at<double>(y, x) = affineTransform.at<double>(y, x);
    }
std::cout << perspectiveTransform << std::endl;

cv::Mat warped1;
cv::warpPerspective(face1, warped1, perspectiveTransform, face2.size());
// Blend the warped profile and the frontal face 50/50 to check the alignment.
cv::imshow("combined", warped1 / 2 + face2 / 2);
I get the following result:
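For reference, the inputs assumed by the code above could be set up roughly like this; it is only a sketch, and the file names and coordinates are placeholders for the values that were actually picked by hand:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

// Profile and frontal images (placeholder file names).
cv::Mat face1 = cv::imread("left_profile.png");
cv::Mat face2 = cv::imread("frontal.png");

// Three corresponding pixel positions (the green dots), picked by hand.
// The coordinates below are placeholders, not the measured values.
std::vector<cv::Point2f> firstFacePositions  = { {120.f, 180.f}, {160.f, 240.f}, {130.f, 300.f} };
std::vector<cv::Point2f> secondFacePositions = { {140.f, 180.f}, {200.f, 245.f}, {150.f, 305.f} };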
Using the line cv::Mat perspectiveTransform = cv::getPerspectiveTransform(firstFacePositions, secondFacePositions); instead, and using the blue marker as a fourth correspondence, I get:
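In case it helps, here is a minimal sketch of that perspective variant, assuming the same hand-picked points plus the blue marker as a fourth correspondence (again, the coordinates are placeholders):

// getPerspectiveTransform requires exactly four corresponding points;
// the coordinates below are placeholders for the hand-picked positions.
std::vector<cv::Point2f> firstFacePositions  = { {120.f, 180.f}, {160.f, 240.f}, {130.f, 300.f}, {200.f, 220.f} };
std::vector<cv::Point2f> secondFacePositions = { {140.f, 180.f}, {200.f, 245.f}, {150.f, 305.f}, {260.f, 225.f} };

// getPerspectiveTransform already returns a full 3x3 homography,
// so there is no need to embed it in an identity matrix.
cv::Mat perspectiveTransform = cv::getPerspectiveTransform(firstFacePositions, secondFacePositions);

cv::Mat warped1;
cv::warpPerspective(face1, warped1, perspectiveTransform, face2.size());
cv::imshow("combined", warped1 / 2 + face2 / 2);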
Edit: the code is C++ syntax, but I guess it works similarly in C# and Java.