If you check this doc from Apple, the very second line says: "A CVBuffer object can hold video, audio, or possibly some other type of data. You can use the CVBuffer programming interface on any Core Video buffer." This means it can hold images as well. If so, then why do we have CVImageBuffer?
I have just done some work on generating images from a video using AVAssetImageGenerator. Now I want to merge those frames back into a video, so I have just started reading about this.
My current status:
1. Right now I know that I need to use AVAssetWriter.
2. Then I need to provide it an input using AVAssetWriterInput.
3. I need to use some CV and CG classes.
So please help me understand the reason for having CVImageBuffer if we already have CVBuffer. I know that CVBuffer is abstract, but CVImageBuffer doesn't seem to inherit from CVBuffer, which bamboozles me even more.
CVImageBuffer does inherit from CVBuffer, but only in that "simulated object orientation in C" way. That is, if you know that a CVBuffer's type is that of a certain subclass, then you can safely cast to that type, e.g.:
if (CFGetTypeID(myCVBuffer) == CVMetalTextureGetTypeID()) {
    CVMetalTextureRef metalBuffer = myCVBuffer;
    // do something with metalBuffer
}
In fact you don't even need to cast (not even in Swift!), as the CVBuffer types are all the same (typealiases in Swift):
typedef CVBufferRef CVImageBufferRef;
typedef CVImageBufferRef CVPixelBufferRef;
typedef CVImageBufferRef CVMetalTextureRef;
// ...
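Here is a minimal Swift sketch of the same idea (the helper name is hypothetical): because CVPixelBuffer is just a typealias of CVImageBuffer, which in turn aliases CVBuffer, a type-ID check is all you need before calling pixel buffer functions:

import CoreVideo

// Hypothetical helper: returns the width if this CVBuffer is actually a
// CVPixelBuffer, demonstrating that no cast is needed after the check.
func pixelBufferWidth(of buffer: CVBuffer) -> Int? {
    guard CFGetTypeID(buffer) == CVPixelBufferGetTypeID() else { return nil }
    return CVPixelBufferGetWidth(buffer) // buffer already *is* a CVPixelBuffer
}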
You noticed that CVBuffer is an abstract base class, but you may have missed that CVImageBuffer is abstract too: it adds a few functions involving image dimensions and colour spaces and defines image attachment keys for access to image-specific metadata.
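For example, here is a rough sketch of the kind of image-specific accessors CVImageBuffer adds on top of CVBuffer (the helper name is made up; the CoreVideo functions are real):

import CoreVideo

// Hypothetical helper showing some of CVImageBuffer's geometry API.
func logGeometry(of image: CVImageBuffer) {
    let encoded = CVImageBufferGetEncodedSize(image) // full encoded pixel size
    let display = CVImageBufferGetDisplaySize(image) // nominal display size
    let clean = CVImageBufferGetCleanRect(image)     // region intended for display
    print(encoded, display, clean)
}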
I assume your images are CGImages since you're using an AVAssetImageGenerator. At this point you have two choices. You can convert the CGImage to a CVPixelBuffer and append that directly to an AVAssetWriterInputPixelBufferAdaptor that you add to your AVAssetWriterInput. Or you can create a CMSampleBuffer from the CVPixelBuffer created above using CMSampleBufferCreateReadyWithImageBuffer and append that directly to your AVAssetWriterInput.
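A rough sketch of the first option, assuming 32ARGB pixel buffers and a writer input that is already attached to a started writer session; the helper name and the hard-coded format are assumptions, not the only way to do it:

import AVFoundation
import CoreGraphics
import CoreVideo

// Hypothetical helper: draws a CGImage into a fresh CVPixelBuffer.
// Note the copy of the image's pixels mentioned below.
func makePixelBuffer(from image: CGImage) -> CVPixelBuffer? {
    let attrs: [CFString: Any] = [
        kCVPixelBufferCGImageCompatibilityKey: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true
    ]
    var pixelBuffer: CVPixelBuffer?
    guard CVPixelBufferCreate(kCFAllocatorDefault, image.width, image.height,
                              kCVPixelFormatType_32ARGB, attrs as CFDictionary,
                              &pixelBuffer) == kCVReturnSuccess,
          let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                  width: image.width, height: image.height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    else { return nil }
    context.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
    return buffer
}

// Appending via the adaptor:
// let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput,
//                                                    sourcePixelBufferAttributes: nil)
// if let buffer = makePixelBuffer(from: cgImage) {
//     adaptor.append(buffer, withPresentationTime: frameTime)
// }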
Some people prefer the pixel buffer adaptor approach, but honestly, both of the above approaches are daunting and inefficient (e.g. I don't think you can create the CVPixelBuffer without copying the CGImage pixels), so why not dump the AVAssetImageGenerator and its unwelcome CGImages and use an AVAssetReader + AVAssetReaderOutput directly? It will vend CMSampleBuffers that you can append without conversion* to your writer input, and you will have a better chance of not hating your life.
* Actually you may need to change the sample buffer's presentation time stamp, which is still pretty easy: CMSampleBufferCreateCopyWithNewTiming.
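A rough sketch of that reader → writer pipeline, assuming a single video track, passthrough settings (nil outputSettings on both sides), and hypothetical function name and URLs; error handling is abbreviated:

import AVFoundation

// Hypothetical: copies the video samples of one asset into a new file.
func rewrite(from inputURL: URL, to outputURL: URL) throws {
    let asset = AVAsset(url: inputURL)
    guard let videoTrack = asset.tracks(withMediaType: .video).first else { return }

    // nil outputSettings means passthrough: samples are vended/accepted as-is.
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: nil)
    reader.add(output)

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: nil)
    writer.add(input)

    reader.startReading()
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    let queue = DispatchQueue(label: "sample-transfer")
    input.requestMediaDataWhenReady(on: queue) {
        while input.isReadyForMoreMediaData {
            guard let sample = output.copyNextSampleBuffer() else {
                input.markAsFinished()
                writer.finishWriting {}
                return
            }
            // Retime here with CMSampleBufferCreateCopyWithNewTiming if needed.
            if !input.append(sample) { return }
        }
    }
}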