ios, core-video

What is the relation between CVBuffer and CVImageBuffer?


If you check this doc from Apple:

https://developer.apple.com/library/tvos/documentation/QuartzCore/Reference/CVBufferRef/index.html#//apple_ref/c/tdef/CVBufferRef

In the very second line, it says: "A CVBuffer object can hold video, audio, or possibly some other type of data. You can use the CVBuffer programming interface on any Core Video buffer." This means that it can hold images as well. If so, then why do we have CVImageBuffer?

I have just done some work on generating images from a video using AVAssetImageGenerator. Now I want to merge frames to make a video, so I have just started reading about this.

My current status:
1. Well, right now I know that I need to use AVAssetWriter.
2. Then I need to provide it an input using AVAssetWriterInput.
3. I need to use some CV and CG classes.

So please help me understand the reason for using CVImageBuffer if we have CVBuffer. I know that CVBuffer is abstract, but then CVImageBuffer doesn't inherit from CVBuffer. This bamboozles me even more.


Solution

  • CVImageBuffer does inherit from CVBuffer, but only in that "simulated object orientation in C" way. That is, if you know that a CVBuffer's type is that of a certain subclass, then you can safely cast to that type, e.g.:

    // Check the concrete type before treating the generic buffer as a Metal texture.
    if (CFGetTypeID(myCVBuffer) == CVMetalTextureGetTypeID()) {
        CVMetalTextureRef metalBuffer = myCVBuffer;
        // do something with metalBuffer
    }
    

    In fact you don't even need to cast (not even in Swift!), as the CVBuffer types are all the same (typealiases in Swift):

    typedef CVBufferRef CVImageBufferRef;
    typedef CVImageBufferRef CVPixelBufferRef;
    typedef CVImageBufferRef CVMetalTextureRef;
    // ...
    
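    For example, because CVPixelBuffer is a typealias of CVBuffer in Swift, a type-ID check is all you need before calling the subclass API on a generic buffer. A minimal sketch (the function name is mine):

    import CoreVideo

    func processIfPixelBuffer(_ buffer: CVBuffer) {
        // CVPixelBuffer is the same type as CVBuffer in Swift, so once the
        // type ID matches we can use the CVPixelBuffer API with no cast.
        if CFGetTypeID(buffer) == CVPixelBufferGetTypeID() {
            let width = CVPixelBufferGetWidth(buffer)
            let height = CVPixelBufferGetHeight(buffer)
            print("pixel buffer is \(width)x\(height)")
        }
    }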

    You noticed that CVBuffer is an abstract base class, but you may have missed that CVImageBuffer is abstract too: it adds a few functions involving image dimensions and colour spaces and defines image attachment keys for access to image-specific metadata.
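    To make that concrete, here is a small Swift sketch that uses the CVImageBuffer-level dimension accessors and reads one of its attachment keys (the function name is mine):

    import CoreVideo

    func describeImageBuffer(_ imageBuffer: CVImageBuffer) {
        // CVImageBuffer-level accessors: dimensions and clean aperture.
        let encoded = CVImageBufferGetEncodedSize(imageBuffer)
        let display = CVImageBufferGetDisplaySize(imageBuffer)
        let cleanRect = CVImageBufferGetCleanRect(imageBuffer)
        print("encoded \(encoded), display \(display), clean rect \(cleanRect)")

        // Image-specific metadata travels as attachments under the
        // kCVImageBuffer* keys that CVImageBuffer defines.
        if let primaries = CVBufferGetAttachment(imageBuffer,
                                                 kCVImageBufferColorPrimariesKey,
                                                 nil)?.takeUnretainedValue() {
            print("colour primaries: \(primaries)")
        }
    }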

    I assume your images are CGImages since you're using an AVAssetImageGenerator. At this point you have two choices. You can convert the CGImage -> CVPixelBuffer and append that directly to an AVAssetWriterInputPixelBufferAdaptor that you add to your AVAssetWriterInput. Or you can create a CMSampleBuffer from that CVPixelBuffer using CMSampleBufferCreateReadyWithImageBuffer and append it directly to your AVAssetWriterInput.
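    A rough sketch of the first option in Swift, assuming the writer session has already been started and the adaptor was configured with kCVPixelFormatType_32ARGB source pixel buffer attributes (the helper name appendFrame is mine):

    import AVFoundation
    import CoreGraphics

    func appendFrame(_ image: CGImage,
                     at time: CMTime,
                     to adaptor: AVAssetWriterInputPixelBufferAdaptor) -> Bool {
        // The adaptor's pool only becomes available once the writer session has started.
        guard let pool = adaptor.pixelBufferPool else { return false }

        var pixelBufferOut: CVPixelBuffer?
        guard CVPixelBufferPoolCreatePixelBuffer(nil, pool, &pixelBufferOut) == kCVReturnSuccess,
              let pixelBuffer = pixelBufferOut else { return false }

        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

        // Note the copy: the CGImage's pixels are redrawn into the pixel buffer.
        // Assumes the pool's buffers match the image's dimensions.
        if let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                   width: image.width,
                                   height: image.height,
                                   bitsPerComponent: 8,
                                   bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                   space: CGColorSpaceCreateDeviceRGB(),
                                   bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue) {
            context.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
        }

        return adaptor.append(pixelBuffer, withPresentationTime: time)
    }

    The second option would wrap the same pixel buffer in a CMSampleBuffer via CMSampleBufferCreateReadyWithImageBuffer and append that to the writer input instead of going through the adaptor.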

    Some people prefer the pixel buffer adaptor approach, but honestly, both of the above approaches are daunting and inefficient (e.g. I don't think you can create the CVPixelBuffer without copying the CGImage pixels), so why not dump the AVAssetImageGenerator and its unwelcome CGImages and use an AVAssetReader + AVAssetReaderOutput directly? It will vend CMSampleBuffers that you can append without conversion* to your writer input, and you will have a better chance of not hating your life.
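    Here is roughly what that looks like; a compressed sketch assuming a single video track, hard-coded output settings, and no error handling:

    import AVFoundation

    func transcode(asset: AVAsset, to outputURL: URL) throws {
        guard let track = asset.tracks(withMediaType: .video).first else { return }

        // Reader side: decode the source into CMSampleBuffers.
        let reader = try AVAssetReader(asset: asset)
        let readerOutput = AVAssetReaderTrackOutput(
            track: track,
            outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                                 kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange])
        reader.add(readerOutput)

        // Writer side: re-encode whatever the reader vends.
        let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
        let writerInput = AVAssetWriterInput(
            mediaType: .video,
            outputSettings: [AVVideoCodecKey: AVVideoCodecType.h264,
                             AVVideoWidthKey: 1920,
                             AVVideoHeightKey: 1080])
        writer.add(writerInput)

        reader.startReading()
        writer.startWriting()
        writer.startSession(atSourceTime: .zero)

        let queue = DispatchQueue(label: "transcode")
        writerInput.requestMediaDataWhenReady(on: queue) {
            while writerInput.isReadyForMoreMediaData {
                guard let sample = readerOutput.copyNextSampleBuffer() else {
                    // Reader is drained: close the input and finish the file.
                    writerInput.markAsFinished()
                    writer.finishWriting { }
                    return
                }
                // The sample buffer goes straight in, no CGImage detour.
                if !writerInput.append(sample) { return }
            }
        }
    }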

    * Actually, you may need to change the sample buffer's presentation time stamp, which is still pretty easy: CMSampleBufferCreateCopyWithNewTiming.
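    In Swift, that retiming might look like this (a sketch; the helper name is mine):

    import CoreMedia

    func retimed(_ sample: CMSampleBuffer, to presentationTime: CMTime) -> CMSampleBuffer? {
        // Keep the original duration, replace the presentation time stamp.
        var timing = CMSampleTimingInfo(duration: CMSampleBufferGetDuration(sample),
                                        presentationTimeStamp: presentationTime,
                                        decodeTimeStamp: .invalid)
        var copy: CMSampleBuffer?
        let status = CMSampleBufferCreateCopyWithNewTiming(allocator: kCFAllocatorDefault,
                                                           sampleBuffer: sample,
                                                           sampleTimingEntryCount: 1,
                                                           sampleTimingArray: &timing,
                                                           sampleBufferOut: &copy)
        return status == noErr ? copy : nil
    }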