Trying to get a simple proof of concept going with Apple's face detection API. I've looked at a couple of other examples, including Apple's SquareCam and this one: https://github.com/jeroentrappers/FaceDetectionPOC
Based on these, it seems like I am following the correct pattern to get the API going, but I am stuck. No matter what I do, the CIDetector for my face detector is always nil!
I would seriously appreciate any help, clues, hints, or suggestions!
- (void)initCamera {
    session = [[AVCaptureSession alloc] init];

    AVCaptureDevice *device;
    /*
    if ([self frontCameraAvailable]) {
        device = [self frontCamera];
    } else {
        device = [self backCamera];
    }
    */
    device = [self frontCamera];
    isUsingFrontFacingCamera = YES;

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (input && [session canAddInput:input]) {
        [session addInput:input];
    } else {
        NSLog(@"Error %@", error);
        // make this DLog...
    }

    videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    NSDictionary *rgbOutputSettings = [NSDictionary dictionaryWithObject:
        [NSNumber numberWithInt:kCMPixelFormat_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    [videoDataOutput setVideoSettings:rgbOutputSettings];
    [videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];

    videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
    [videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];
    [[videoDataOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:YES];

    if ([session canAddOutput:videoDataOutput]) {
        [session addOutput:videoDataOutput];
    }

    [self embedPreviewInView:self.theImageView];
    [session startRunning];
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
    CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(__bridge NSDictionary *)attachments];
    if (attachments) {
        CFRelease(attachments);
    }

    UIDeviceOrientation curDeviceOrientation = [[UIDevice currentDevice] orientation];
    NSDictionary *imageOptions = @{CIDetectorImageOrientation: [self exifOrientation:curDeviceOrientation]};
    NSDictionary *detectorOptions = @{CIDetectorAccuracy: CIDetectorAccuracyLow};

    CIDetector *faceDetector = [CIDetector detectorOfType:CIFeatureTypeFace context:nil options:detectorOptions];
    NSArray *faceFeatures = [faceDetector featuresInImage:ciImage options:imageOptions];
    if ([faceFeatures count] > 0) {
        NSLog(@"GOT a face!");
        NSLog(@"%@", faceFeatures);
    }

    dispatch_async(dispatch_get_main_queue(), ^(void) {
        //NSLog(@"updating main thread");
    });
}
I'm assuming you're using this article, because I was too and had the same problem. There's actually a bug in his code. The CIDetector instantiation should look like:
CIDetector *smileDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                               context:context
                                               options:@{CIDetectorTracking: @YES,
                                                         CIDetectorAccuracy: CIDetectorAccuracyLow}];
Note that the detector type is CIDetectorTypeFace, rather than CIDetectorSmile. CIDetectorSmile is a feature option rather than a detector type, so to extract just the smiles (and not all the faces), use this code:
NSArray *features = [smileDetector featuresInImage:image options:@{CIDetectorSmile: @YES}];
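Your code in the question hits the same kind of issue: it passes CIFeatureTypeFace (a feature type constant) to detectorOfType:, and an unrecognized detector type gives you back nil. A minimal sketch of a fix (untested, and assuming a hypothetical _faceDetector ivar so the detector is built once rather than on every frame, since CIDetector creation is expensive):

// Lazily create the detector once with the correct type, CIDetectorTypeFace.
- (CIDetector *)faceDetector {
    if (!_faceDetector) {
        _faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                           context:nil
                                           options:@{CIDetectorAccuracy: CIDetectorAccuracyLow}];
    }
    return _faceDetector;
}

Then in captureOutput:didOutputSampleBuffer:fromConnection:, call [self faceDetector] featuresInImage:ciImage options:imageOptions] instead of creating a new detector on each frame.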