Tags: ios · swift · video · avfoundation · avvideocomposition

Exporting AVAsset a second time makes video blank


I am stitching multiple video files into one using AVMutableComposition(), adding tracks like this:

let compositionVideoTrack = mainComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
let soundtrackTrack = mainComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)

private var insertTime = CMTime.zero
     
for videoAsset in arrayVideos {
    try! compositionVideoTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: videoAsset.duration), of: videoAsset.tracks(withMediaType: .video)[0], at: insertTime)
    try! soundtrackTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: videoAsset.duration), of: videoAsset.tracks(withMediaType: .audio)[0], at: insertTime)

    insertTime = CMTimeAdd(insertTime, videoAsset.duration)
}
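(As an aside, not part of the original question: the `try!` calls above will crash if any clip lacks an audio track. A more defensive sketch of the same loop, assuming `arrayVideos`, `compositionVideoTrack`, and `soundtrackTrack` as defined above:)

```swift
var insertTime = CMTime.zero

for videoAsset in arrayVideos {
    let range = CMTimeRangeMake(start: .zero, duration: videoAsset.duration)

    // Insert the video track if the asset actually has one.
    if let videoTrack = videoAsset.tracks(withMediaType: .video).first {
        try? compositionVideoTrack?.insertTimeRange(range, of: videoTrack, at: insertTime)
    }
    // Some clips (e.g. screen recordings) have no audio; skip instead of crashing.
    if let audioTrack = videoAsset.tracks(withMediaType: .audio).first {
        try? soundtrackTrack?.insertTimeRange(range, of: audioTrack, at: insertTime)
    }
    insertTime = CMTimeAdd(insertTime, videoAsset.duration)
}
```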

then exporting it using AVAssetExportSession(asset: mainComposition, presetName: AVAssetExportPresetMediumQuality) into a .mov file.
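(For reference, the first export might look like this sketch; `outputUrl` is a hypothetical destination file URL, not from the original question:)

```swift
// Sketch of the first export; outputUrl is a hypothetical file URL.
let exporter = AVAssetExportSession(asset: mainComposition,
                                    presetName: AVAssetExportPresetMediumQuality)
exporter?.outputURL = outputUrl
exporter?.outputFileType = .mov
exporter?.exportAsynchronously {
    switch exporter?.status {
    case .completed:
        print("Stitched video written to \(outputUrl)")
    case .failed, .cancelled:
        print("Export failed: \(String(describing: exporter?.error))")
    default:
        break
    }
}
```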

That saves the stitched video to a URL, which I can access via AVAsset and display to the user. After that, I try to add an image overlay to the video and export it again.

In this second method, I instantiate the AVAsset from the URL with AVAsset(url: fileUrl) and create a new AVMutableComposition(). I add video and audio tracks to the composition from the asset:

    compositionTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
    try compositionTrack.insertTimeRange(timeRange, of: asset.tracks(withMediaType: .video)[0], at: .zero)
...

Then I add the overlay to the video using layers and AVVideoCompositionCoreAnimationTool(), like this:

let videoLayer = CALayer()
videoLayer.frame = CGRect(origin: .zero, size: videoSize)
let overlayLayer = CALayer()
overlayLayer.frame = CGRect(origin: .zero, size: videoSize)

overlayLayer.contents = watermark.cgImage
overlayLayer.contentsGravity = .resizeAspect

let outputLayer = CALayer()
outputLayer.frame = CGRect(origin: .zero, size: videoSize)
outputLayer.addSublayer(videoLayer)
outputLayer.addSublayer(overlayLayer)

let videoComposition = AVMutableVideoComposition()
videoComposition.renderSize = videoSize
videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: outputLayer)

let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRange(start: .zero, duration: asset.duration)
videoComposition.instructions = [instruction]
let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: assetTrack)
layerInstruction.setTransform(assetTrack.preferredTransform, at: .zero)
instruction.layerInstructions = [layerInstruction]

and then I export the video the same way as in the first export.
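(One detail worth making explicit: unlike the first export, the second one must also have the video composition attached, or the overlay is silently ignored. A sketch, with `overlayOutputUrl` as a hypothetical destination URL:)

```swift
// Sketch of the second export; the video composition carries the overlay.
let exporter = AVAssetExportSession(asset: composition,
                                    presetName: AVAssetExportPresetMediumQuality)
exporter?.videoComposition = videoComposition  // without this, frames render with no overlay
exporter?.outputURL = overlayOutputUrl         // hypothetical destination URL
exporter?.outputFileType = .mov
exporter?.exportAsynchronously {
    // Handle exporter?.status as in the first export.
}
```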

The issue appears when I combine the two. If I export some sample video using only the second method, the overlay is added and everything works as expected. If I stitch videos using only the first method, they are stitched perfectly. But when I chain the two methods, the resulting video is a blank black screen (the audio and the overlay are fine, and the resulting duration is also correct).


Solution

  • The issue probably had something to do with AVVideoCompositionCoreAnimationTool(). I was able to solve it by using a different technique for adding the overlay in the second function. Instead of stacking layers with AVVideoCompositionCoreAnimationTool(), I used a CIFilter like so:

    let watermarkFilter = CIFilter(name: "CISourceOverCompositing")!
    let watermarkImage = CIImage(image: watermark)!
    let videoComposition = AVVideoComposition(asset: asset) { (filteringRequest) in
        let source = filteringRequest.sourceImage
        watermarkFilter.setValue(source, forKey: kCIInputBackgroundImageKey)
        let widthScale = source.extent.width/watermarkImage.extent.width
        let heightScale = source.extent.height/watermarkImage.extent.height
        watermarkFilter.setValue(watermarkImage.transformed(by: .init(scaleX: widthScale, y: heightScale)), forKey: kCIInputImageKey)
        filteringRequest.finish(with: watermarkFilter.outputImage!, context: nil)
    }