We are using AVCaptureDevice on iOS to scan QR codes. We pass the camera output to the QR-code recognizer via AVCaptureMetadataOutput, and we currently display the camera preview as a separate view over our OpenGL view. However, we now want other graphics to appear over the camera preview, so we would like to load the camera data onto one of our OpenGL textures.
So, is there a way to get the raw RGB data from the camera?
Below is the code we're using to initialise the capture device and views. How could we modify it to access the RGB data so we can load it onto one of our GL textures? We're using C++/Objective-C.
Thanks
Shaun Southern
self.captureSession = [[AVCaptureSession alloc] init];
NSError *error;
// Set camera capture device to default and the media type to video.
AVCaptureDevice *captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
// Set the video capture input: if there is a problem initialising the camera, error will be populated.
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
if (!input)
{
NSLog(@"Error Getting Camera Input");
return;
}
// Add the input source (i.e. the camera) to the capture session.
[self.captureSession addInput:input];
AVCaptureMetadataOutput *captureMetadataOutput = [[AVCaptureMetadataOutput alloc] init];
// Add the metadata output to the capture session; we configure this output object below.
[self.captureSession addOutput:captureMetadataOutput];
// Create a new queue and set delegate for metadata objects scanned.
dispatch_queue_t dispatchQueue;
dispatchQueue = dispatch_queue_create("scanQueue", NULL);
[captureMetadataOutput setMetadataObjectsDelegate:self queue:dispatchQueue];
// Delegate should implement captureOutput:didOutputMetadataObjects:fromConnection: to get callbacks on detected metadata.
[captureMetadataOutput setMetadataObjectTypes:[captureMetadataOutput availableMetadataObjectTypes]];
// Layer that will display what the camera is capturing.
self.captureLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
[self.captureLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
gCameraPreviewView = [[[UIView alloc] initWithFrame:CGRectMake(gCamX1, gCamY1, gCamX2-gCamX1, gCamY2-gCamY1)] retain];
[self.captureLayer setFrame:gCameraPreviewView.layer.bounds];
[gCameraPreviewView.layer addSublayer:self.captureLayer];
UIViewController * lVC = [[[UIApplication sharedApplication] keyWindow] rootViewController];
[lVC.view addSubview:gCameraPreviewView];
You don't need to access the raw RGB camera frames directly to turn them into a texture, because iOS provides an OpenGL ES texture cache that is faster than copying the data yourself.
- (void) writeSampleBuffer:(CMSampleBufferRef)sampleBuffer ofType:(NSString *)mediaType pixel:(CVImageBufferRef)cameraFrame time:(CMTime)frameTime;
In the callback method you can generate a texture from those parameters using the functions below. (To receive per-frame sample buffers in the first place, add an AVCaptureVideoDataOutput to the session alongside your metadata output and implement its captureOutput:didOutputSampleBuffer:fromConnection: delegate method.)
CVOpenGLESTextureCacheCreate(...)
CVOpenGLESTextureCacheCreateTextureFromImage(...)
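For example, here is a minimal sketch of how the pieces can fit together, assuming BGRA output and that your GL view's EAGLContext is reachable as _glContext (the ivar names and the helper method are placeholders, not from the code above):

#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

// Assumed ivars (placeholders):
//   EAGLContext *_glContext;                 // the context your GL view renders with
//   CVOpenGLESTextureCacheRef _textureCache; // created once, reused every frame

- (void)addVideoDataOutputOnQueue:(dispatch_queue_t)queue
{
    // Deliver frames as BGRA so each pixel buffer maps directly onto an RGBA texture.
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                                   @(kCVPixelFormatType_32BGRA) };
    [videoOutput setSampleBufferDelegate:self queue:queue];
    if ([self.captureSession canAddOutput:videoOutput])
        [self.captureSession addOutput:videoOutput];

    // Create the texture cache once, tied to the GL context.
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, _glContext, NULL,
                                 &_textureCache);
}

// AVCaptureVideoDataOutputSampleBufferDelegate: called once per camera frame.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    GLsizei width  = (GLsizei)CVPixelBufferGetWidth(cameraFrame);
    GLsizei height = (GLsizei)CVPixelBufferGetHeight(cameraFrame);

    // Wrap the pixel buffer in a GL texture without a CPU copy of the pixels.
    CVOpenGLESTextureRef texture = NULL;
    CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, _textureCache, cameraFrame, NULL,
        GL_TEXTURE_2D, GL_RGBA, width, height,
        GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

    if (err == kCVReturnSuccess && texture)
    {
        // NOTE: GL calls require _glContext (or a shared context) to be current
        // on this queue, or hand the texture over to your render thread instead.
        glBindTexture(CVOpenGLESTextureGetTarget(texture),
                      CVOpenGLESTextureGetName(texture));
        // ... draw your quad with this texture ...
        CFRelease(texture);
    }
    CVOpenGLESTextureCacheFlush(_textureCache, 0);
}

The texture cache hands the capture pixel buffer straight to GL, so there is no per-frame memcpy; just make sure the context you pass to CVOpenGLESTextureCacheCreate is the one your GL view renders with (or shares a sharegroup with it). And if you ever do need the raw bytes, CVPixelBufferLockBaseAddress / CVPixelBufferGetBaseAddress on the same CVImageBufferRef will give you the BGRA data for a plain glTexImage2D upload.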