I am currently having a lot of trouble getting what I want out of AVCaptureSession, AVCaptureVideoPreviewLayer, etc.
I am creating an app (targeting iPhone, though ideally it would also work on iPad) where I want to put a small preview of the camera in the middle of my view, as shown in this picture:
To do that, I want to keep the camera's aspect ratio, so I used this configuration:
rgbaImage = nil;

NSArray *possibleDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *device = [possibleDevices firstObject];
if (!device) return;

AVCaptureSession *session = [[AVCaptureSession alloc] init];
self.captureSession = session;
self.captureDevice = device;

NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input)
{
    [[[UIAlertView alloc] initWithTitle:NSLocalizedString(@"NoCameraAuthorizationTitle", nil)
                                message:NSLocalizedString(@"NoCameraAuthorizationMsg", nil)
                               delegate:self
                      cancelButtonTitle:NSLocalizedString(@"OK", nil)
                      otherButtonTitles:nil] show];
    return;
}

[session beginConfiguration];
session.sessionPreset = AVCaptureSessionPresetPhoto;
[session addInput:input];

AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
[dataOutput setAlwaysDiscardsLateVideoFrames:YES];
[dataOutput setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)}];
[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[session addOutput:dataOutput];

self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[session addOutput:self.stillImageOutput];

connection = [dataOutput.connections firstObject];
[self setupCameraOrientation];

NSError *errorLock;
if ([device lockForConfiguration:&errorLock])
{
    // Frame rate
    device.activeVideoMinFrameDuration = CMTimeMake((int64_t)1, (int32_t)FPS);
    device.activeVideoMaxFrameDuration = CMTimeMake((int64_t)1, (int32_t)FPS);

    AVCaptureFocusMode focusMode = AVCaptureFocusModeContinuousAutoFocus;
    AVCaptureExposureMode exposureMode = AVCaptureExposureModeContinuousAutoExposure;
    CGPoint point = CGPointMake(0.5, 0.5);

    if ([device isAutoFocusRangeRestrictionSupported])
    {
        device.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionNear;
    }
    if ([device isFocusPointOfInterestSupported] && [device isFocusModeSupported:focusMode])
    {
        [device setFocusPointOfInterest:point];
        [device setFocusMode:focusMode];
    }
    if ([device isExposurePointOfInterestSupported] && [device isExposureModeSupported:exposureMode])
    {
        [device setExposurePointOfInterest:point];
        [device setExposureMode:exposureMode];
    }
    if ([device isLowLightBoostSupported])
    {
        device.automaticallyEnablesLowLightBoostWhenAvailable = YES;
    }
    [device unlockForConfiguration];
}

if (device.isFlashAvailable)
{
    [device lockForConfiguration:nil];
    [device setFlashMode:AVCaptureFlashModeOff];
    [device unlockForConfiguration];

    if ([device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
    {
        [device lockForConfiguration:nil];
        [device setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
        [device unlockForConfiguration];
    }
}

previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = self.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.layer insertSublayer:previewLayer atIndex:0];

[session commitConfiguration];
As you can see, I am using the AVLayerVideoGravityResizeAspectFill video gravity to make sure I keep the proper aspect ratio.
My trouble starts here. I have tried many things but never really succeeded. My goal is to get a picture equivalent to what the user can see in the previewLayer, knowing that the video frame delivers a bigger image than the one visible in the preview.
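To give a rough idea of the difference (illustrative numbers only, assuming a 1936x2592 portrait-oriented photo buffer and a 320x568-point full-screen layer): aspect-fill scales the frame by max(320/1936, 568/2592) ≈ 0.219, which displays it at about 424x568 points, so roughly 52 points on each side of the frame are cropped away and never shown in the preview.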
I tried three methods:
1) Using my own computation: since I know the video frame size, the screen size, and the layer's size and position, I tried to compute the ratio between them and use it to find the equivalent position in the video frame. I found out that the video frame (sampleBuffer) is in pixels, while the position I get from the main screen bounds is in Apple's points and has to be multiplied by a ratio to get pixels; that is my ratio, assuming the video frame corresponds to the full device screen.
--> This actually gave me a really good result on my iPad: both height and width are good, but the (x,y) origin is shifted a bit from where it should be... (detail: if I subtract 72 pixels from the position I compute, I get the correct output).
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)avConnection
{
    if (self.forceStop) return;
    if (_isStopped || _isCapturing || !CMSampleBufferIsValid(sampleBuffer)) return;

    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    __block CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    CGRect rect = image.extent;
    CGRect screenRect = [[UIScreen mainScreen] bounds];
    CGFloat screenWidth = screenRect.size.width/* * [UIScreen mainScreen].scale*/;
    CGFloat screenHeight = screenRect.size.height/* * [UIScreen mainScreen].scale*/;
    NSLog(@"%f, %f ---", screenWidth, screenHeight);

    float myRatio = (rect.size.height / screenHeight);
    float myRatioW = (rect.size.width / screenWidth);
    NSLog(@"Ratio w: %f h: %f ---", myRatioW, myRatio);

    CGPoint p = [captureViewControler.view convertPoint:previewLayer.frame.origin toView:nil];
    NSLog(@"-Av-> %f, %f --> %f, %f", p.x, p.y, self.bounds.size.height, self.bounds.size.width);

    rect.origin = CGPointMake(p.x * myRatioW, p.y * myRatio);
    NSLog(@"%f, %f ----> %f %f", rect.origin.x, rect.origin.y, rect.size.width, rect.size.height);
    NSLog(@"%f", previewLayer.frame.size.height * (rect.size.height / screenHeight));

    rect.size = CGSizeMake(rect.size.width, previewLayer.frame.size.height * myRatio);
    image = [image imageByCroppingToRect:rect];
    its = [ImageUtils cropImageToRect:uiImage(sampleBuffer) toRect:rect];
    NSLog(@"--------------------------------------------");
    [captureViewControler sendToPreview:its];
}
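A sketch of what I suspect I should have done instead of the manual ratio math (assuming the previewLayer configured above): AVCaptureVideoPreviewLayer can convert its own coordinates, and it accounts for the videoGravity, which my hand-made ratio does not; that is probably where the constant offset comes from.

// Let the preview layer do the conversion instead of computing ratios by hand.
CGRect extent = image.extent; // full video frame, in pixels
// Normalized (0..1) rect of the visible preview relative to the capture output.
// Note it is expressed in the unrotated sensor frame, so the axes may need
// swapping in portrait (see my final solution at the bottom).
CGRect normalized = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
CGRect bufferCrop = CGRectMake(normalized.origin.x * extent.size.width,
                               normalized.origin.y * extent.size.height,
                               normalized.size.width * extent.size.width,
                               normalized.size.height * extent.size.height);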
2) Using still image capture: this method was actually working as long as I was on an iPad. But the real trouble is that I am using these cropped frames to feed an image library, and captureStillImageAsynchronouslyFromConnection plays the system shutter sound for every picture (I read a lot about "solutions", like playing another sound to cover it, etc., but none of them work, and they do not fix the freeze that comes with it on the iPhone 6), so this method seems inappropriate.
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connect in self.stillImageOutput.connections)
{
    for (AVCaptureInputPort *port in [connect inputPorts])
    {
        if ([[port mediaType] isEqual:AVMediaTypeVideo])
        {
            videoConnection = connect;
            break;
        }
    }
    if (videoConnection) { break; }
}

[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
                                                   completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
{
    if (error)
    {
        NSLog(@"Take picture failed");
    }
    else
    {
        NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *takenImage = [UIImage imageWithData:jpegData];

        CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
        NSLog(@"image cropped : %@", NSStringFromCGRect(outputRect));

        CGImageRef takenCGImage = takenImage.CGImage;
        size_t width = CGImageGetWidth(takenCGImage);
        size_t height = CGImageGetHeight(takenCGImage);
        NSLog(@"Size cropped : w: %zu h: %zu", width, height);

        CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height,
                                     outputRect.size.width * width, outputRect.size.height * height);
        NSLog(@"final cropped : %@", NSStringFromCGRect(cropRect));

        CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
        takenImage = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
        CGImageRelease(cropCGImage);

        its = [ImageUtils rotateUIImage:takenImage];
        image = [[CIImage alloc] initWithImage:its];
    }
}];
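As a side note, I believe the nested loop to find the video connection can be replaced by AVCaptureOutput's built-in lookup by media type (a small simplification, if I am not mistaken):

// Equivalent, shorter lookup of the still image output's video connection.
AVCaptureConnection *videoConnection =
    [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];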
3) Using metadata output with a ratio: this is actually not working at all, but I thought it would help me the most, since it works in the still image process (using the metadataOutputRectOfInterestForRect result to get the percentage and then combining it with the ratio). I wanted to use this and add the ratio difference between the pictures to get the correct output.
CGRect rect = image.extent;
CGSize size = CGSizeMake(1936.0, 2592.0);
float rh = (size.height / rect.size.height);
float rw = (size.width / rect.size.width);

CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
NSLog(@"before cropped : %@", NSStringFromCGRect(outputRect));
outputRect.origin.x = MIN(1.0, outputRect.origin.x * rw);
outputRect.origin.y = MIN(1.0, outputRect.origin.y * rh);
outputRect.size.width = MIN(1.0, outputRect.size.width * rw);
outputRect.size.height = MIN(1.0, outputRect.size.height * rh);
NSLog(@"final cropped : %@", NSStringFromCGRect(outputRect));

UIImage *takenImage = [[UIImage alloc] initWithCIImage:image];
NSLog(@"takenImage : %@", NSStringFromCGSize(takenImage.size));

CGImageRef takenCGImage = [[CIContext contextWithOptions:nil] createCGImage:image fromRect:[image extent]];
size_t width = CGImageGetWidth(takenCGImage);
size_t height = CGImageGetHeight(takenCGImage);
NSLog(@"Size cropped : w: %zu h: %zu", width, height);

CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height,
                             outputRect.size.width * width, outputRect.size.height * height);
CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
its = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
CGImageRelease(cropCGImage);    // both CGImageRefs are created here, so release them
CGImageRelease(takenCGImage);
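In hindsight, a quick check with numbers shows why this cannot work: if metadataOutputRectOfInterestForRect returns, for example, {0.0, 0.2, 1.0, 0.6}, those values are already fractions of whatever image they are applied to, so multiplying them by rw and rh applies the resolution difference a second time, and the MIN(1.0, ...) clamping then distorts the rect even further.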
I hope someone will be able to help me with this. Thanks a lot.
EDIT: I finally found the solution using the code below. My mistake was trying to use a ratio between the two images, instead of recognizing that metadataOutputRectOfInterestForRect returns a percentage (normalized) rect, which can be applied to the other image as-is, without any conversion.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)avConnection
{
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    __block CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // Normalized (0..1) rect of the visible preview, relative to the capture output.
    CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
    // The video frame is rotated relative to the portrait preview, so swap the
    // axes; the preview spans the frame's full width, hence x = 0 and width = 1.
    outputRect.origin.y = outputRect.origin.x;
    outputRect.origin.x = 0;
    outputRect.size.height = outputRect.size.width;
    outputRect.size.width = 1;

    UIImage *takenImage = [[UIImage alloc] initWithCIImage:image];
    CGImageRef takenCGImage = [cicontext createCGImage:image fromRect:[image extent]];
    size_t width = CGImageGetWidth(takenCGImage);
    size_t height = CGImageGetHeight(takenCGImage);

    // Scale the normalized rect by the frame's pixel size to get the crop rect.
    CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height,
                                 outputRect.size.width * width, outputRect.size.height * height);
    CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
    UIImage *its = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
    CGImageRelease(cropCGImage);    // release both created CGImageRefs to avoid leaking per frame
    CGImageRelease(takenCGImage);
}
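For reference, here is the same idea wrapped into a small reusable helper (hypothetical name; it assumes the cicontext from above, and a normalized rect already mapped to the buffer's orientation as shown in the delegate):

// Crop a CIImage to a normalized (0..1) rect, e.g. the one returned by
// -[AVCaptureVideoPreviewLayer metadataOutputRectOfInterestForRect:].
- (UIImage *)imageFromCIImage:(CIImage *)image croppedToNormalizedRect:(CGRect)normalized
{
    CGImageRef fullImage = [cicontext createCGImage:image fromRect:[image extent]];
    size_t width = CGImageGetWidth(fullImage);
    size_t height = CGImageGetHeight(fullImage);

    CGRect cropRect = CGRectMake(normalized.origin.x * width,
                                 normalized.origin.y * height,
                                 normalized.size.width * width,
                                 normalized.size.height * height);
    CGImageRef cropped = CGImageCreateWithImageInRect(fullImage, cropRect);
    UIImage *result = [UIImage imageWithCGImage:cropped scale:1 orientation:UIImageOrientationUp];

    CGImageRelease(cropped);
    CGImageRelease(fullImage);
    return result;
}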