I am working with the OpenCV Contrib framework to perform facial recognition. My problem is that when isolating the faces (in a for loop), OpenCV crops the test image down to just the face (roughly a 40x40 box). My training images are 3000x4000, so I need to resize that cropped image to match. My question is: how do I resize the (usually 40x40) crop to 3000x4000?
Here are a few resizing functions I have tried and their corresponding error messages.
1.
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    // Draw the image into a bitmap context of the requested size.
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
The error message...
OpenCV(3.4.0-dev) Error: Bad argument (Wrong shapes for given matrices. Was size(src) = (1,48000000), size(W) = (12000000,1).) in subspaceProject, file /Users/mustafa/Desktop/OpenCVBuild/opencv/modules/core/src/lda.cpp, line 183
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(3.4.0-dev) /Users/mustafa/Desktop/OpenCVBuild/opencv/modules/core/src/lda.cpp:183: error: (-5) Wrong shapes for given matrices. Was size(src) = (1,48000000), size(W) = (12000000,1). in function subspaceProject
(lldb)
I set a breakpoint right before the predict call (where the crash occurs), and the image I am passing into the function appears to be 3000x4000.
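Since 48,000,000 is exactly 3000 x 4000 x 4, my guess is that the UIImage-to-cv::Mat conversion is handing predict a 4-channel RGBA matrix rather than the single-channel greyscale the model was trained on (12,000,000 elements). Here is a rough sketch of the conversion I would try before predicting; testMat and model are placeholder names, not the exact ones in my project.

// Sketch: collapse the (assumed) RGBA test Mat to a single grey channel before predicting.
// `testMat` and `model` are placeholders for my actual variables.
cv::Mat grey;
if (testMat.channels() == 4) {
    cv::cvtColor(testMat, grey, cv::COLOR_RGBA2GRAY);   // drop colour + alpha
} else if (testMat.channels() == 3) {
    cv::cvtColor(testMat, grey, cv::COLOR_BGR2GRAY);
} else {
    grey = testMat;                                     // already single-channel
}
int label = -1;
double confidence = 0.0;
model->predict(grey, label, confidence);                // model: cv::Ptr<cv::face::FisherFaceRecognizer>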
2.
- (UIImage *)squareImageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    double ratio;
    double delta;
    CGPoint offset;

    //make a new square size, that is the resized image's width
    CGSize sz = CGSizeMake(newSize.width, newSize.width);

    //figure out if the picture is landscape or portrait, then
    //calculate scale factor and offset
    if (image.size.width > image.size.height) {
        ratio = newSize.width / image.size.width;
        delta = (ratio * image.size.width - ratio * image.size.height);
        offset = CGPointMake(delta / 2, 0);
    } else {
        ratio = newSize.width / image.size.height;
        delta = (ratio * image.size.height - ratio * image.size.width);
        offset = CGPointMake(0, delta / 2);
    }

    //make the final clipping rect based on the calculated values
    CGRect clipRect = CGRectMake(-offset.x, -offset.y,
                                 (ratio * image.size.width) + delta,
                                 (ratio * image.size.height) + delta);

    //start a new context, with scale factor 0.0 so retina displays get
    //high quality image
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        //0.0
        UIGraphicsBeginImageContextWithOptions(sz, YES, 1.0);
    } else {
        UIGraphicsBeginImageContext(sz);
    }

    UIRectClip(clipRect);
    [image drawInRect:clipRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Error Message...
OpenCV(3.4.0-dev) Error: Bad argument (Wrong input image size. Reason: Training and Test images must be of equal size! Expected an image with 12000000 elements, but got 9000000.) in predict, file /Users/mustafa/Desktop/OpenCVBuild/opencv_contrib/modules/face/src/fisher_faces.cpp, line 140
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(3.4.0-dev) /Users/mustafa/Desktop/OpenCVBuild/opencv_contrib/modules/face/src/fisher_faces.cpp:140: error: (-5) Wrong input image size. Reason: Training and Test images must be of equal size! Expected an image with 12000000 elements, but got 9000000. in function predict
(lldb)
This gives me a 3000x3000 image, which is why predict sees 9,000,000 elements instead of the expected 12,000,000.
3.
- (UIImage *)scaleImageToSize:(CGSize)newSize image:(UIImage *)image {
    CGRect scaledImageRect = CGRectZero;

    CGFloat aspectWidth = newSize.width / image.size.width;
    CGFloat aspectHeight = newSize.height / image.size.height;
    CGFloat aspectRatio = MIN(aspectWidth, aspectHeight);

    scaledImageRect.size.width = image.size.width * aspectRatio;
    scaledImageRect.size.height = image.size.height * aspectRatio;
    scaledImageRect.origin.x = (newSize.width - scaledImageRect.size.width) / 2.0f;
    scaledImageRect.origin.y = (newSize.height - scaledImageRect.size.height) / 2.0f;

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    [image drawInRect:scaledImageRect];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
Error Message...
Error: Bad argument (Wrong input image size. Reason: Training and Test images must be of equal size! Expected an image with 12000000 elements, but got 108000000.) in predict, file /Users/mustafa/Desktop/OpenCVBuild/opencv_contrib/modules/face/src/fisher_faces.cpp, line 140
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(3.4.0-dev) /Users/mustafa/Desktop/OpenCVBuild/opencv_contrib/modules/face/src/fisher_faces.cpp:140: error: (-5) Wrong input image size. Reason: Training and Test images must be of equal size! Expected an image with 12000000 elements, but got 108000000. in function predict
(lldb)
This gives me a 9000x12000 image (everything is scaled up by 3, presumably because the scale factor of 0 passed to UIGraphicsBeginImageContextWithOptions uses the device's 3x screen scale).
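If I stay with UIKit for the scaling, forcing the bitmap scale to 1.0 should keep the context sized in pixels rather than screen points. An untested sketch of that one change inside the method above:

// Sketch: an explicit scale of 1.0 keeps the 3000x4000 request at 3000x4000 pixels,
// even on a 3x retina screen.
UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
[image drawInRect:scaledImageRect];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();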
Many thanks in advance for any advice/help. This issue has been bothering me for days!
I solved this problem by using OpenCV's native resize function:
cv::resize(input, output, cv::Size(3000, 4000));
input and output are both of type cv::Mat.
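For completeness, here is the rough shape of the detection/recognition loop this slots into; faceCascade, greyFrame, and model are illustrative names rather than the exact ones in my project.

// Sketch of the end-to-end flow; variable names are illustrative only.
std::vector<cv::Rect> faces;
faceCascade.detectMultiScale(greyFrame, faces);          // cv::CascadeClassifier on a grey cv::Mat

for (const cv::Rect &faceRect : faces) {
    cv::Mat face = greyFrame(faceRect);                  // ~40x40 crop around the face
    cv::Mat resized;
    cv::resize(face, resized, cv::Size(3000, 4000));     // match the 3000x4000 training size
    int label = -1;
    double confidence = 0.0;
    model->predict(resized, label, confidence);          // cv::Ptr<cv::face::FisherFaceRecognizer>
}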