On an iPad with a Retina display (the actual device, not the Simulator), I first used Apple's AVFoundation to take still pictures, but I switched to GPUImage because I wanted the ChromaKeyBlend feature. I got that running. BUT, the issue is that when I tap my Camera button, the camera appeared immediately with AVFoundation, whereas with GPUImage it takes FIVE seconds to show up!
Is that loading time to be expected? I understand the setup has to be synchronous and can't be done in the background.
So, what are others doing to speed that up, or are they just putting an activity indicator on the screen and making the user wait those five seconds?
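For what it's worth, this is roughly the "activity indicator" fallback I have in mind; it's only a sketch, and the names self.activityIndicator and -setUpGPUImagePipeline are placeholders for my own outlet and setup method: show the spinner first, let the run loop draw it, then do the slow GPUImage setup.

- (IBAction)cameraButtonTapped:(id)sender {
    // Placeholder names: self.activityIndicator and -setUpGPUImagePipeline are my own.
    [self.activityIndicator startAnimating];

    // Defer the expensive GPUImage setup by one run-loop turn so the
    // indicator is actually drawn before the ~5 second stall begins.
    dispatch_async(dispatch_get_main_queue(), ^{
        [self setUpGPUImagePipeline]; // the setup code shown below
        [self.activityIndicator stopAnimating];
    });
}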
Any tips would be appreciated. Thanks!
Well, I am loading an image into a GPUImagePicture, but I think I have the pipeline right, and I really like the real-time adjustment of the sensitivity (with a slider; I've included how I wire that up after the setup code below). I tried preprocessing the image in the background and shaved off some time, but it still takes about 5 seconds, even if I use a completely transparent image of the same size. Hope there is some secret sauce ;)
stillCamera = [[GPUImageStillCamera alloc] init]; // rear camera, photo preset by default
stillCamera.outputImageOrientation = UIInterfaceOrientationLandscapeLeft;

UIImage *inputImage = [UIImage imageNamed:@"RedCurtain-60-8x10.jpg"]; // 346 KB
self.sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
[self.sourcePicture processImage];

// Replaces green in the first input (the camera feed) with the second input (the curtain image)
self.chromaKeyBlendFilter = [[GPUImageChromaKeyBlendFilter alloc] init];
[self.chromaKeyBlendFilter setColorToReplaceRed:0.0 green:1.0 blue:0.0];
[self.chromaKeyBlendFilter setThresholdSensitivity:0.35f];

// Camera and picture feed the blend filter, which renders into the on-screen GPUImageView
[stillCamera addTarget:self.chromaKeyBlendFilter];
[self.sourcePicture addTarget:self.chromaKeyBlendFilter];
[self.chromaKeyBlendFilter addTarget:(GPUImageView *)self.videoPreviewView];

[stillCamera startCameraCapture]; // start the live preview
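And this is how I drive the real-time sensitivity adjustment I mentioned; the action name and the slider's 0.0–1.0 range are just my own choices:

- (IBAction)sensitivitySliderChanged:(UISlider *)sender {
    // Slider configured for 0.0 – 1.0; updates the chroma-key sensitivity live.
    [self.chromaKeyBlendFilter setThresholdSensitivity:sender.value];
}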