ios, camera, avfoundation, avmutablecomposition

AVMutableComposition missing frames when recording with different cameras on iPad


I use the well-known PBJVision to record videos which must then be combined together. I use AVMutableComposition to combine the videos via insertTimeRange(_:ofAsset:atTime:error:). This works well if all the videos were taken with the same camera. But if, for example, one video is taken with the back camera and the next with the front camera, the latter's video track goes missing from the result; it looks like only its audio is added. Here is my code:

    var error: NSError? = nil

    let composition = AVMutableComposition()

    var currentTime = kCMTimeZero

    for (index, videoURL) in enumerate(videoURLS) {
        let asset = AVURLAsset(URL: videoURL, options: nil)

        let success = composition.insertTimeRange(CMTimeRange(start: kCMTimeZero, duration: asset.duration),
            ofAsset: asset,
            atTime: currentTime,
            error: &error)
        if !success {
            if error != nil {
                println("timerange isnert error - \(error?.localizedDescription)")
            }
        }

        // advance the insertion point; not needed after the last video
        if index < videoURLS.count - 1 {
            currentTime = CMTimeAdd(currentTime, asset.duration)
        }
    }

    let outputURL = fileSystemHelper.temporaryStorageURLForExportSession()
    let fileManager = NSFileManager.defaultManager()
    // Reset the error and only remove a previous file if one actually exists,
    // otherwise removeItemAtURL reports a spurious "no such file" error.
    error = nil
    if fileManager.fileExistsAtPath(outputURL.path!) {
        fileManager.removeItemAtURL(outputURL, error: &error)
        if error != nil {
            println("export session file removal error - \(error?.localizedDescription)")
        }
    }

    let exportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)
    exportSession.outputFileType = AVFileTypeMPEG4
    exportSession.outputURL = outputURL

    let start = CMTimeMake(0, 1)
    let range = CMTimeRangeMake(start, composition.duration)
    //exportSession.timeRange = range

    exportSession.exportAsynchronouslyWithCompletionHandler { () -> Void in
        switch exportSession.status {
        case .Completed:
            self.fileSystemHelper.copyFileAtURL(outputURL, toURL: finalURL)

            self.appendURL = nil
            //  self.isRecording = false

            completion()
        case .Failed:
            println("fail error - \(exportSession.error.localizedDescription)")

            self.fileSystemHelper.removeFileAtURL(outputURL)
            self.appendURL = nil
            //self.isRecording = false

            println("failed to mix")
            //  delegate?.videoCaptureDidFinishRecordingVideoAtURL(URL, appended: appendURL == nil)

        default:
            println("something else happened, check code")
        }
    }

Solution

  • I found the answer myself on a night walk around the neighborhood just after I asked this question :) The different cameras support different maximum resolutions, so they produce frames of different sizes, and that confuses the composition: it uses the size of the first video and ignores the frames of any video with a different size. So test which is the highest AVCaptureSessionPreset supported by both cameras on a particular device, then use that preset in your capture code instead of jumping straight to AVCaptureSessionPresetHigh; see the sketches below.

    I hope this helps other people too :)
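
    To confirm this, you can log the natural size of each clip's video track
    before building the composition. A minimal sketch, assuming the same
    videoURLS array as in the question:

        for videoURL in videoURLS {
            let asset = AVURLAsset(URL: videoURL, options: nil)
            // Each recording normally carries one video track; log its pixel size.
            for track in asset.tracksWithMediaType(AVMediaTypeVideo) as! [AVAssetTrack] {
                println("\(videoURL.lastPathComponent): \(track.naturalSize.width) x \(track.naturalSize.height)")
            }
        }

    If the sizes differ, those clips were captured at different resolutions
    and the composition will drop the mismatched video frames.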
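
    And a sketch of the preset check itself: walk the presets from highest to
    lowest and return the first one that every camera on the device supports.
    The preset list and the fallback here are assumptions; adjust them to the
    resolutions your app actually needs:

        // Hypothetical helper: returns the highest session preset that every
        // camera on this device (front and back) supports.
        func bestCommonPreset() -> String {
            let presetsHighToLow = [AVCaptureSessionPreset1920x1080,
                                    AVCaptureSessionPreset1280x720,
                                    AVCaptureSessionPreset640x480]
            let cameras = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo) as! [AVCaptureDevice]
            for preset in presetsHighToLow {
                var supportedByAll = true
                for camera in cameras {
                    if !camera.supportsAVCaptureSessionPreset(preset) {
                        supportedByAll = false
                    }
                }
                if supportedByAll {
                    return preset
                }
            }
            return AVCaptureSessionPresetMedium // assumed fallback
        }

    Then record every clip with the returned preset (PBJVision exposes a
    captureSessionPreset property for this) so all clips come out at the
    same frame size and the composition keeps every video track.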