Tags: ios, iphone, video, avfoundation, video-processing

How to add a camera preview view to three custom UIViews in iOS Swift


I need to create an app with a video-processing feature.

My requirement is to create three views, each with a camera preview layer: the first view should display the original captured video, the second should display a flipped copy of it, and the last should display it with inverted colors.

I started developing against this requirement. First I created the three views and the camera-capture properties:

    @IBOutlet weak var captureView: UIView!
    @IBOutlet weak var flipView: UIView!
    @IBOutlet weak var InvertView: UIView!
    
    //Camera capture required properties
    var videoDataOutput: AVCaptureVideoDataOutput!
    var videoDataOutputQueue: DispatchQueue!
    var previewLayer:AVCaptureVideoPreviewLayer!
    var captureDevice : AVCaptureDevice!
    let session = AVCaptureSession()
    var replicationLayer: CAReplicatorLayer!


Then I conformed to AVCaptureVideoDataOutputSampleBufferDelegate and set up the camera session:

extension ViewController:  AVCaptureVideoDataOutputSampleBufferDelegate{
    func setupAVCapture(){
        session.sessionPreset = AVCaptureSessionPreset640x480
        guard let device = AVCaptureDevice
            .defaultDevice(withDeviceType: .builtInWideAngleCamera,
                           mediaType: AVMediaTypeVideo,
                           position: .back) else{
                            return
        }
        captureDevice = device
        beginSession()
    }
    
    func beginSession(){
        let deviceInput: AVCaptureDeviceInput
        do {
            deviceInput = try AVCaptureDeviceInput(device: captureDevice)
        } catch {
            print("error: \(error.localizedDescription)")
            return
        }
        if self.session.canAddInput(deviceInput) {
            self.session.addInput(deviceInput)
        }
        
        videoDataOutput = AVCaptureVideoDataOutput()
        videoDataOutput.alwaysDiscardsLateVideoFrames = true
        videoDataOutputQueue = DispatchQueue(label: "VideoDataOutputQueue")
        videoDataOutput.setSampleBufferDelegate(self, queue:self.videoDataOutputQueue)
        if session.canAddOutput(self.videoDataOutput){
            session.addOutput(self.videoDataOutput)
        }
        videoDataOutput.connection(withMediaType: AVMediaTypeVideo).isEnabled = true
        
        self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session)
        self.previewLayer.frame = self.captureView.bounds
        self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspect
        
        self.replicationLayer = CAReplicatorLayer()
        self.replicationLayer.frame = self.captureView.bounds
        self.replicationLayer.instanceCount = 1
        self.replicationLayer.instanceTransform = CATransform3DMakeTranslation(0.0, self.captureView.bounds.size.height, 0.0)
        
        self.replicationLayer.addSublayer(self.previewLayer)
        self.captureView.layer.addSublayer(self.replicationLayer)
        self.flipView.layer.addSublayer(self.replicationLayer)
        self.InvertView.layer.addSublayer(self.replicationLayer)
        
        session.startRunning()
    }
    
    func captureOutput(_ captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       from connection: AVCaptureConnection!) {
        // do stuff here
    }
    
    // clean up AVCapture
    func stopCamera(){
        session.stopRunning()
    }
    
}

Here I used a CAReplicatorLayer to show the captured video in all three views. With self.replicationLayer.instanceCount set to 1, I got this output:

[Screenshot: output with instanceCount = 1]

With self.replicationLayer.instanceCount set to 3, I got this output:

[Screenshot: output with instanceCount = 3]

So please guide me on how to show the captured video in three different views, and give me some ideas for flipping the video and inverting its colors. Thanks in advance.


Solution

  • Finally, I found the answer with the help of the JohnnySlagle/Multiple-Camera-Feeds sample code. (The CAReplicatorLayer approach above cannot work as intended: a CALayer can have only one superlayer, so adding the same layer to three views leaves it attached only to the last one.)

    I created three views:

    @property (weak, nonatomic) IBOutlet UIView *video1;
    @property (weak, nonatomic) IBOutlet UIView *video2;
    @property (weak, nonatomic) IBOutlet UIView *video3;
    

    Then I slightly changed setupFeedViews:

    - (void)setupFeedViews {
        NSUInteger numberOfFeedViews = 3;
    
        for (NSUInteger i = 0; i < numberOfFeedViews; i++) {
            VideoFeedView *feedView = [self setupFeedViewWithFrame:CGRectMake(0, 0, self.video1.frame.size.width, self.video1.frame.size.height)];
            feedView.tag = i+1;
            switch (i) {
                case 0:
                    [self.video1 addSubview:feedView];
                    break;
                case 1:
                    [self.video2 addSubview:feedView];
                    break;
                case 2:
                    [self.video3 addSubview:feedView];
                    break;
                default:
                    break;
            }
            [self.feedViews addObject:feedView];
        }
    }
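
    For reference, setupFeedViewWithFrame: comes from the sample project and isn't shown above. Here is a minimal sketch of it, assuming (as in the sample) that VideoFeedView is a GLKView subclass with a CGRect viewBounds property and that _eaglContext is a shared EAGLContext:

    - (VideoFeedView *)setupFeedViewWithFrame:(CGRect)frame {
        // VideoFeedView is assumed to be a GLKView subclass sharing one EAGLContext.
        VideoFeedView *feedView = [[VideoFeedView alloc] initWithFrame:frame context:_eaglContext];
        feedView.enableSetNeedsDisplay = NO;  // drawing is driven from the capture callback
        [feedView bindDrawable];              // create the drawable so its pixel size is known
        feedView.viewBounds = CGRectMake(0, 0, feedView.drawableWidth, feedView.drawableHeight);
        return feedView;
    }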
    

    Then I applied the filters in the AVCaptureVideoDataOutputSampleBufferDelegate callback:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
        CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);
    
        // update the video dimensions information
        _currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDesc);
    
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    
        CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:nil];
    
        CGRect sourceExtent = sourceImage.extent;
    
        CGFloat sourceAspect = sourceExtent.size.width / sourceExtent.size.height;
    
    
        for (VideoFeedView *feedView in self.feedViews) {
            CGFloat previewAspect = feedView.viewBounds.size.width  / feedView.viewBounds.size.height;
            // we want to maintain the aspect ratio of the screen size, so we clip the video image
            CGRect drawRect = sourceExtent;
            if (sourceAspect > previewAspect) {
                // use full height of the video image, and center crop the width
                drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0;
                drawRect.size.width = drawRect.size.height * previewAspect;
            } else {
                // use full width of the video image, and center crop the height
                drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0;
                drawRect.size.height = drawRect.size.width / previewAspect;
            }
            [feedView bindDrawable];
    
            if (_eaglContext != [EAGLContext currentContext]) {
                [EAGLContext setCurrentContext:_eaglContext];
            }
    
            // clear eagl view to grey
            glClearColor(0.5, 0.5, 0.5, 1.0);
            glClear(GL_COLOR_BUFFER_BIT);
    
            // set the blend mode to "source over" so that CI will use that
            glEnable(GL_BLEND);
            glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            // This is necessary for non-power-of-two textures
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    
            if (feedView.tag == 1) {
                if (sourceImage) {
                    [_ciContext drawImage:sourceImage inRect:feedView.viewBounds fromRect:drawRect];
                }
            } else if (feedView.tag == 2) {
                sourceImage = [sourceImage imageByApplyingTransform:CGAffineTransformMakeScale(1, -1)];
                sourceImage = [sourceImage imageByApplyingTransform:CGAffineTransformMakeTranslation(0, sourceExtent.size.height)];
                if (sourceImage) {
                    [_ciContext drawImage:sourceImage inRect:feedView.viewBounds fromRect:drawRect];
                }
            } else {
                CIFilter *effectFilter = [CIFilter filterWithName:@"CIColorInvert"];
                [effectFilter setValue:sourceImage forKey:kCIInputImageKey];
                CIImage *invertImage = [effectFilter outputImage];
                if (invertImage) {
                    [_ciContext drawImage:invertImage inRect:feedView.viewBounds fromRect:drawRect];
                }
            }
            [feedView display];
        }
    }
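
    Note that _eaglContext and _ciContext are used above but their setup isn't shown. A minimal sketch of how they might be created, following Apple's CoreImage-with-GLKView sample code (an assumption, since the original setup isn't included):

    // Called once, e.g. from viewDidLoad, before setupFeedViews.
    - (void)setupContexts {
        _eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        // Rendering without color management is faster; CoreImage draws straight
        // into the GL drawable that each feed view binds.
        _ciContext = [CIContext contextWithEAGLContext:_eaglContext
                                               options:@{kCIContextWorkingColorSpace : [NSNull null]}];
    }

    Sharing a single EAGLContext and CIContext across all three feed views keeps GPU resources shared; only the filter applied per view differs.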
    

    That's it. It successfully meets my requirement.