Tags: ios, swift, avfoundation, ciimage, avvideocomposition

CIImage gets resized when applied to a CIFilter


I am trying to add a fullscreen watermark to my video.

Unfortunately, the watermark image (a red rectangle) gets resized by the CIFilter, and I can't find a way to make it fullscreen.

Is there a way to set the size of a CIImage?

let watermarkFilter = CIFilter(name: "CISourceOverCompositing")!
let watermarkImage = CIImage(image: image)

let videoComposition = AVVideoComposition(asset: asset) { (filteringRequest) in
    let source = filteringRequest.sourceImage.clampedToExtent()

    watermarkFilter.setValue(source, forKey: "inputBackgroundImage")
    watermarkFilter.setValue(watermarkImage, forKey: "inputImage")
    let output = watermarkFilter.outputImage!

    filteringRequest.finish(with: output, context: nil)
}

Error

I have also tried scaling the CIImage up, but this doesn't work either.

func addImageToVideo(inputURL: URL, image: UIImage,  handler: @escaping (_ exportSession: AVAssetExportSession?)-> Void) {

        let mixComposition = AVMutableComposition()
        let asset = AVAsset(url: inputURL)
        let videoTrack = asset.tracks(withMediaType: AVMediaType.video)[0]
        let timerange = CMTimeRangeMake(kCMTimeZero, asset.duration)

        let compositionVideoTrack:AVMutableCompositionTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))!

        do {
            try compositionVideoTrack.insertTimeRange(timerange, of: videoTrack, at: kCMTimeZero)
            compositionVideoTrack.preferredTransform = videoTrack.preferredTransform
        } catch {
            print(error)
        }

        let watermarkFilter = CIFilter(name: "CISourceOverCompositing")!
        let watermarkImage = CIImage(image: image)


        // Filter method
        let videoComposition = AVVideoComposition(asset: asset) { (filteringRequest) in

            let sourceImage = filteringRequest.sourceImage.clampedToExtent()

            var transform = CGAffineTransform.identity
            let scaleX = image.size.width / image.scale
            let scaleY = image.size.height / image.scale

            transform = transform.scaledBy(x: scaleX, y: scaleY)
            let transformFilter = CIFilter(name: "CIAffineClamp")!
            transformFilter.setValue( watermarkImage, forKey: "inputImage" )
            transformFilter.setValue( transform, forKey: "inputTransform")

            watermarkFilter.setValue(sourceImage, forKey: "inputBackgroundImage")
            watermarkFilter.setValue(transformFilter.outputImage, forKey: "inputImage")
            let output = watermarkFilter.outputImage!

            filteringRequest.finish(with: output, context: nil)

        }

        guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality) else {
            handler(nil)
            return
        }

        exportSession.outputURL = outputURL
        exportSession.outputFileType = AVFileType.mp4
        exportSession.shouldOptimizeForNetworkUse = true
        exportSession.videoComposition = videoComposition
        exportSession.exportAsynchronously { () -> Void in
            handler(exportSession)
        }
    }

I really don't know what to do. I have already spent weeks trying to simply render a watermark into a video, but it just doesn't work.


Solution

  • One way I adjust a CIImage's size is with the CIAffineClamp filter; here is the sample I use:

    var transform = CGAffineTransform.identity
    let scaleX = sourceImage.extent.width / watermarkImage.extent.width
    let scaleY = sourceImage.extent.height / watermarkImage.extent.height
    transform = transform.scaledBy(x: scaleX, y: scaleY)
    
let transformFilter = CIFilter(name: "CIAffineClamp")!
transformFilter.setValue(watermarkImage, forKey: kCIInputImageKey)
transformFilter.setValue(NSValue(cgAffineTransform: transform), forKey: "inputTransform")
    

Then use your CISourceOverCompositing filter as before; just pass transformFilter.outputImage as inputImage instead of watermarkImage. A sketch of the whole step is shown below.
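
For reference, here is a minimal sketch of that whole composition step in one place. It is only my illustration of the idea above, assuming sourceImage is the clamped frame from the filtering request and watermarkImage is the watermark CIImage; the function name is made up.

import UIKit
import CoreImage

// Scale the watermark so it covers the video frame, then composite it over the frame.
func applyWatermark(to sourceImage: CIImage, watermark watermarkImage: CIImage) -> CIImage? {
    let scaleX = sourceImage.extent.width / watermarkImage.extent.width
    let scaleY = sourceImage.extent.height / watermarkImage.extent.height
    let transform = CGAffineTransform(scaleX: scaleX, y: scaleY)

    let transformFilter = CIFilter(name: "CIAffineClamp")!
    transformFilter.setValue(watermarkImage, forKey: kCIInputImageKey)
    transformFilter.setValue(NSValue(cgAffineTransform: transform), forKey: "inputTransform")

    let watermarkFilter = CIFilter(name: "CISourceOverCompositing")!
    watermarkFilter.setValue(transformFilter.outputImage, forKey: kCIInputImageKey)
    watermarkFilter.setValue(sourceImage, forKey: kCIInputBackgroundImageKey)

    // CIAffineClamp extends the scaled image infinitely, so crop back to the frame.
    return watermarkFilter.outputImage?.cropped(to: sourceImage.extent)
}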

Just play with the scale parameters and you will get your result. If you get unexpected results, it means your source image is rotated, so add the following transforms to fix it:

    transform = transform.translatedBy(x: watermarkImage.extent.midY, y: watermarkImage.extent.midX)
    transform = transform.rotated(by: CGFloat(degrees * Double.pi / 180))
    transform = transform.translatedBy(x: -watermarkImage.extent.midX, y: -watermarkImage.extent.midY)
transform = transform.translatedBy(x: 0.0, y: -watermarkImage.extent.origin.x / scaleX)
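
The degrees value above is left for you to supply. One way to derive it, as an assumption on my part rather than part of the original answer, is to read the rotation baked into the video track's preferredTransform:

import AVFoundation

// Rotation (in degrees) that the capture device baked into the track's preferredTransform.
// Portrait iPhone footage typically reports 90 here.
func rotationDegrees(of track: AVAssetTrack) -> Double {
    let t = track.preferredTransform
    return atan2(Double(t.b), Double(t.a)) * 180 / Double.pi
}

With the question's code you would call it as rotationDegrees(of: videoTrack).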
    

If you use an iPad, the camera output might be different as well, so your scale parameters should also differ, like this:

let idiom = UIDevice.current.userInterfaceIdiom
if idiom == .pad {
    scaleX = sourceImage.extent.height / watermarkImage.extent.height
    scaleY = sourceImage.extent.width / watermarkImage.extent.width
} else {
    scaleX = sourceImage.extent.height / watermarkImage.extent.width
    scaleY = sourceImage.extent.width / watermarkImage.extent.height
}
    

The front camera has different settings than the rear camera, and the iPad has different settings than the iPhone, so test everything thoroughly and apply the transformations accordingly.
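
While testing, it can help to log the values that drive the scale and rotation decisions. A small helper like the following (purely illustrative; the parameter names mirror the snippets above) makes it obvious which case you are hitting:

import AVFoundation
import CoreImage

// Print the values that determine which scale/rotation branch applies.
func logWatermarkDebugInfo(sourceImage: CIImage, watermarkImage: CIImage, videoTrack: AVAssetTrack) {
    print("source extent:", sourceImage.extent)
    print("watermark extent:", watermarkImage.extent)
    print("preferredTransform:", videoTrack.preferredTransform)
}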