
AVAssetWriter getting raw bytes makes corrupt videos on device (works on sim)


So my goal is to add CVPixelBuffers to my AVAssetWriter / AVAssetWriterInputPixelBufferAdaptor at very high speed. My previous solution used CGContextDrawImage, but it is very slow (around 0.1 s per frame). The reason seems to be color matching and conversion, but that's another question, I think.
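For reference, a rough sketch of what that draw-based approach looks like (the buffer-pool and bitmap-context setup here are typical assumptions, not my exact code):

CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, writerAdaptor.pixelBufferPool, &pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                             CVPixelBufferGetWidth(pixelBuffer),
                                             CVPixelBufferGetHeight(pixelBuffer),
                                             8,
                                             CVPixelBufferGetBytesPerRow(pixelBuffer),
                                             colorSpace,
                                             kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

// This draw call is the slow part: Core Graphics color-matches and
// converts the source image into the destination pixel format.
CGContextDrawImage(context,
                   CGRectMake(0, 0, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer)),
                   [image CGImage]);

CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);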

My current solution tries to read the bytes of the image directly, skipping the draw call entirely. I do this:

CGImageRef cgImageRef = [image CGImage];
CGImageRetain(cgImageRef);
CVPixelBufferRef pixelBuffer = NULL;

// Grab the image's raw backing bytes instead of drawing it.
CGDataProviderRef dataProvider = CGImageGetDataProvider(cgImageRef);
CGDataProviderRetain(dataProvider);
CFDataRef da = CGDataProviderCopyData(dataProvider);

// Wrap the bytes in a pixel buffer. Note that CVPixelBufferCreateWithBytes
// does not copy the data, so `da` must stay alive while the buffer is in use.
CVPixelBufferCreateWithBytes(NULL,
                             CGImageGetWidth(cgImageRef),
                             CGImageGetHeight(cgImageRef),
                             kCVPixelFormatType_32BGRA,
                             (void *)CFDataGetBytePtr(da),
                             CGImageGetBytesPerRow(cgImageRef),
                             NULL,   // releaseCallback
                             0,      // releaseRefCon
                             NULL,   // pixelBufferAttributes
                             &pixelBuffer);

[writerAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentTime];
-- releases here --
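
One caveat about the releases: since CVPixelBufferCreateWithBytes wraps the bytes rather than copying them, the CFDataRef must outlive the pixel buffer. If you release da right after appending, it is safer to pass a release callback instead of NULL; a sketch (the callback name is illustrative):

static void releasePixelBufferBytes(void *releaseRefCon, const void *baseAddress) {
    // Called once Core Video is done with the bytes backing the buffer.
    CFRelease((CFDataRef)releaseRefCon);
}

// ...then pass it in place of the NULL/0 pair:
CVPixelBufferCreateWithBytes(NULL,
                             CGImageGetWidth(cgImageRef),
                             CGImageGetHeight(cgImageRef),
                             kCVPixelFormatType_32BGRA,
                             (void *)CFDataGetBytePtr(da),
                             CGImageGetBytesPerRow(cgImageRef),
                             releasePixelBufferBytes, // invoked when the buffer is freed
                             (void *)da,              // handed back as releaseRefCon
                             NULL,
                             &pixelBuffer);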

This works fine on my simulator and inside an app. But when I run the code inside the SpringBoard process, the frames come out like the images below. Running it outside the sandbox is a requirement; this is meant for jailbroken devices.

I have tried playing around with, e.g., different pixel formats, but it mostly comes out with differently corrupted images.

The proper image/video file looks fine: [screenshot of a correct frame]

But this is what I get in the broken state: [two screenshots of corrupted frames]


Solution

  • Answering my own question, as I think I got the answer(s). The resolution difference was a simple code error: I was not using the device bounds in the latter cases.

    As for the color issues: in short, the CGImages I got when running outside the sandbox used 8 bytes per pixel, while the images I got when running inside the sandbox used 4. So basically I was simply writing the wrong data into the buffer.

    So, instead of simply slapping all of the bytes from the larger image into the smaller buffer, I loop through the pixel buffer row by row, byte by byte, and pick the RGBA values for each pixel. I essentially had to skip every other byte of the source image to get the right data into the right place within the buffer. A sketch of that conversion is below.
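
    A minimal sketch of that per-pixel conversion, assuming the out-of-sandbox CGImage is 64-bit RGBA (16 bits per channel, little-endian, so the high-order byte of each channel pair comes second; if your source is big-endian, use the even offsets instead):

    CFDataRef da = CGDataProviderCopyData(CGImageGetDataProvider(cgImageRef));
    const uint8_t *src = CFDataGetBytePtr(da);
    size_t width  = CGImageGetWidth(cgImageRef);
    size_t height = CGImageGetHeight(cgImageRef);
    size_t srcBytesPerRow = CGImageGetBytesPerRow(cgImageRef);

    // Destination: a plain 32-bit BGRA buffer that owns its own memory.
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);

    for (size_t y = 0; y < height; y++) {
        const uint8_t *srcRow = src + y * srcBytesPerRow;
        uint8_t *dstRow = dst + y * dstBytesPerRow;
        for (size_t x = 0; x < width; x++) {
            // 8 source bytes per pixel (R16 G16 B16 A16): keep one byte per
            // channel (skip every other byte) and swizzle RGBA -> BGRA.
            const uint8_t *s = srcRow + x * 8;
            uint8_t *d = dstRow + x * 4;
            d[0] = s[5]; // B
            d[1] = s[3]; // G
            d[2] = s[1]; // R
            d[3] = s[7]; // A
        }
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CFRelease(da);
    // pixelBuffer can now be appended via appendPixelBuffer:withPresentationTime:.

    With 4-byte source images (inside the sandbox) the bytes can be written through unchanged, so it is worth checking CGImageGetBitsPerPixel(cgImageRef) first and only taking this slower path when it returns 64.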