I'm trying to run a CoreML model on frames captured with an AVCaptureSession.
When I feed the same still image into my CoreML model, it gives me the same result every time. But when I use the image provided by this delegate method:
- (void)captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection {
    // Wrap the camera frame's pixel buffer in a CIImage for Vision.
    CIImage* ciimage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
    dispatch_sync(dispatch_get_main_queue(), ^{
        // Run the CoreML request on this frame.
        VNImageRequestHandler* handler = [[VNImageRequestHandler alloc] initWithCIImage:ciimage options:@{}];
        [handler performRequests:@[self.coreRequest] error:nil];
    });
}
It doesn't give me exactly the same result, even though I don't move my phone and the background stays the same. (To be clear, my phone is lying on my table with the camera pointing at the floor of my room, and nothing is moving.)
I have tried comparing two consecutive frames pixel by pixel (the previous and the new image), and they are different.
I want to understand why these images are different.
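For reference, self.coreRequest is a VNCoreMLRequest created beforehand, set up roughly like this (MyModel here is just a placeholder for the Xcode-generated CoreML model class):

#import <Vision/Vision.h>

- (void)setupCoreRequest {
    NSError* error = nil;
    // MyModel stands in for the generated CoreML model class.
    VNCoreMLModel* visionModel = [VNCoreMLModel modelForMLModel:[[[MyModel alloc] init] model]
                                                          error:&error];
    if (visionModel == nil) {
        NSLog(@"Failed to create VNCoreMLModel: %@", error);
        return;
    }
    self.coreRequest = [[VNCoreMLRequest alloc] initWithModel:visionModel
                                            completionHandler:^(VNRequest* request, NSError* err) {
        // Log the top classification for each processed frame.
        VNClassificationObservation* top = (VNClassificationObservation*)request.results.firstObject;
        if (top != nil) {
            NSLog(@"%@ (%.3f)", top.identifier, top.confidence);
        }
    }];
}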
Thanks,
Camera noise, most likely. The picture you get from a camera is never completely stable. The noise creates small differences in pixel values, even if the camera points at the same thing. These small differences can have a big influence on the predictions.
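If you want to see the noise directly, you can diff two consecutive frames yourself. The helper below is a rough sketch (the function name is mine, and it assumes the session delivers BGRA pixel buffers of identical dimensions); it computes the mean absolute per-byte difference between two CVPixelBuffers:

#import <CoreVideo/CoreVideo.h>

// Mean absolute per-byte difference between two BGRA pixel buffers
// of the same size (assumed).
static double MeanAbsoluteDifference(CVPixelBufferRef a, CVPixelBufferRef b) {
    CVPixelBufferLockBaseAddress(a, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferLockBaseAddress(b, kCVPixelBufferLock_ReadOnly);

    size_t width  = CVPixelBufferGetWidth(a);
    size_t height = CVPixelBufferGetHeight(a);
    size_t rowA   = CVPixelBufferGetBytesPerRow(a);
    size_t rowB   = CVPixelBufferGetBytesPerRow(b);
    const uint8_t* baseA = CVPixelBufferGetBaseAddress(a);
    const uint8_t* baseB = CVPixelBufferGetBaseAddress(b);

    double total = 0;
    for (size_t y = 0; y < height; y++) {
        const uint8_t* pa = baseA + y * rowA;
        const uint8_t* pb = baseB + y * rowB;
        for (size_t x = 0; x < width * 4; x++) {  // 4 bytes per BGRA pixel
            int diff = (int)pa[x] - (int)pb[x];
            total += diff < 0 ? -diff : diff;
        }
    }

    CVPixelBufferUnlockBaseAddress(b, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferUnlockBaseAddress(a, kCVPixelBufferLock_ReadOnly);
    return total / (double)(width * height * 4);
}

Run that on two frames from your static scene and you will typically get a small but non-zero value; that residual difference is the sensor noise the model is reacting to.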