I have this code, in an extension that conforms to AVCaptureMetadataOutputObjectsDelegate:
internal func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
    guard let captureSession = captureSession else { return }
    captureSession.stopRunning()

    if let metadataObject = metadataObjects.first {
        guard let readableObject = metadataObject as? AVMetadataMachineReadableCodeObject else { return }
        guard let stringValue = readableObject.stringValue else { return }
        AudioServicesPlaySystemSound(SystemSoundID(kSystemSoundID_Vibrate))
        found(code: stringValue)
    }
}
And it is called when the camera 'sees' a QR code:
let metadataOutput = AVCaptureMetadataOutput()
if captureSession.canAddOutput(metadataOutput) {
    captureSession.addOutput(metadataOutput)
    metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
    metadataOutput.metadataObjectTypes = [.qr]
}
What I want to do is add a new piece of functionality: as soon as I open the camera, I want to know the luminosity (light level) seen by the back camera.
Everywhere I look, people seem to use this:
func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection)
But it seems to me that it is no longer in AVFoundation.
I assume the "luminosity" of the camera you mention is some kind of light-level metric. There are several ways to measure it.
I imagine there is already a videoDevice defined somewhere in your code (let videoDevice: AVCaptureDevice). If you don't store it separately, you can get it from your AVCaptureDeviceInput's device property.
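If you need to get hold of that device, here is a minimal sketch (the videoInput name is just an assumption about how your session is set up):

// Assuming your session was configured with an AVCaptureDeviceInput
// called `videoInput` (illustrative name):
let videoDevice = videoInput.device

// Or discover the back camera directly:
let backCamera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                         for: .video,
                                         position: .back)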
Check videoDevice.iso. The lower the value, the brighter the lighting conditions. It's a KVO-observable property, so you can watch its changes in real time.
Check videoDevice.exposureDuration. Same idea: a lower value means brighter lighting conditions. Exposure duration is essentially what the iOS system camera adjusts to get better night-mode shots.
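A minimal sketch of observing both properties with block-based KVO, assuming videoDevice is the back camera's AVCaptureDevice:

import AVFoundation

final class LightLevelObserver {
    private var observations: [NSKeyValueObservation] = []

    func startObserving(_ videoDevice: AVCaptureDevice) {
        // Lower ISO generally indicates brighter lighting conditions.
        observations.append(videoDevice.observe(\.iso, options: [.initial, .new]) { device, _ in
            print("ISO: \(device.iso)")
        })

        // A longer exposure means the camera is compensating for darker conditions.
        observations.append(videoDevice.observe(\.exposureDuration, options: [.initial, .new]) { device, _ in
            print("Exposure: \(CMTimeGetSeconds(device.exposureDuration)) s")
        })
    }

    func stopObserving() {
        observations.removeAll()
    }
}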
As you mentioned, you could also get a real-time pixel buffer from your camera to analyze, e.g. to build a histogram and compare light pixels to dark ones.
In your camera class:
/// You already have the session
private let session = AVCaptureSession()
/// Define a video data output (this is separate from the
/// AVCaptureMetadataOutput you already use for QR scanning)
private let videoOutput = AVCaptureVideoDataOutput()
/// Define a queue for sample buffer
private let videoSampleBufferQueue = DispatchQueue(label: "videoSampleBufferQueue")
Then add output to session:
if session.canAddOutput(videoOutput) {
    session.addOutput(videoOutput)
}
videoOutput.setSampleBufferDelegate(self, queue: videoSampleBufferQueue)
And implement AVCaptureVideoDataOutputSampleBufferDelegate:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    // Handle the pixelBuffer the way you like
}
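From there, one way to get a single "luminosity" number per frame is to average the pixels with Core Image's CIAreaAverage filter and weight the channels with the Rec. 601 luma coefficients. This is only a sketch of that idea, not something from your code:

import CoreImage

private let ciContext = CIContext()

/// Returns the average perceived brightness of the frame in 0...1.
func averageLuminance(of pixelBuffer: CVPixelBuffer) -> Double? {
    let inputImage = CIImage(cvPixelBuffer: pixelBuffer)
    let extent = inputImage.extent

    guard let filter = CIFilter(name: "CIAreaAverage",
                                parameters: [kCIInputImageKey: inputImage,
                                             kCIInputExtentKey: CIVector(cgRect: extent)]),
          let outputImage = filter.outputImage else { return nil }

    // CIAreaAverage produces a 1x1 image whose single pixel is the average color.
    var pixel = [UInt8](repeating: 0, count: 4)
    ciContext.render(outputImage,
                     toBitmap: &pixel,
                     rowBytes: 4,
                     bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                     format: .RGBA8,
                     colorSpace: CGColorSpaceCreateDeviceRGB())

    // Rec. 601 luma weights on the averaged RGB values.
    return (0.299 * Double(pixel[0]) + 0.587 * Double(pixel[1]) + 0.114 * Double(pixel[2])) / 255.0
}

You could call this from captureOutput(_:didOutput:from:) and, for example, only react when the value crosses a threshold, so you are not doing work for every frame at 30/60 fps.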