I am using the code below to make a video from a static 16:9 image using AVAssetWriter. The problem is that, for some reason, the video that is produced is in a 4:3 format.
Can anyone suggest how I can either amend the code to produce a 16:9 video or, alternatively, convert the 4:3 video to 16:9?
Thank you
- (void)createVideoFromStillImage
{
    //Set the size according to the device type (iPhone or iPad).
    CGSize size = CGSizeMake(screenWidth, screenHeight);

    NSString *betaCompressionDirectory = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/IntroVideo.mov"];
    NSError *error = nil;
    unlink([betaCompressionDirectory UTF8String]);

    //----initialize compression engine
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:betaCompressionDirectory]
                                                           fileType:AVFileTypeQuickTimeMovie
                                                              error:&error];
    NSParameterAssert(videoWriter);
    if (error)
        NSLog(@"error = %@", [error localizedDescription]);

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:size.height], AVVideoWidthKey,
                                   [NSNumber numberWithInt:size.width], AVVideoHeightKey, nil];
    AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

    NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                                           [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                                                     sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);

    if ([videoWriter canAddInput:writerInput])
        NSLog(@"I can add this input");
    else
        NSLog(@"I can't add this input");

    [videoWriter addInput:writerInput];
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    //CGImageRef theImage = [finishedMergedImage CGImage];
    CGImageRef theImage = [introImage CGImage];
    //dispatch_queue_t dispatchQueue = dispatch_queue_create("mediaInputQueue", NULL);
    int __block frame = 0;

    //Calculate how much progress % one frame completion represents. Maximum of 75%.
    float currentProgress = 0.0;
    float progress = (80.0 / kDurationOfIntroOutro);
    //NSLog(@"Progress is %f", progress);

    for (int i = 0; i <= kDurationOfIntroOutro; i++) {
        //Update our progress view for every frame that is generated.
        [self updateProgressView:currentProgress];
        currentProgress += progress;
        //NSLog(@"CurrentProgress is %f", currentProgress);

        frame++;
        [NSThread sleepForTimeInterval:0.05]; //Delay to allow buffer to be ready.

        CVPixelBufferRef buffer = (CVPixelBufferRef)[self pixelBufferFromCGImage:theImage size:size];
        if (buffer) {
            if (adaptor.assetWriterInput.readyForMoreMediaData) {
                if (![adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake(frame, 20)])
                    NSLog(@"FAIL");
                else
                    NSLog(@"Success:%d", frame);
                CFRelease(buffer);
            }
        }
    }

    [writerInput markAsFinished];
    [videoWriter finishWriting];
    [videoWriter release];
    //NSLog(@"outside for loop");

    //Grab the URL for the video so we can use it later.
    NSURL *url = [self applicationDocumentsDirectory:kIntroVideoFileName];
    [assetURLArray setObject:url forKey:kIntroVideo];
}
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image size:(CGSize)size
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef)options, &pxbuffer);
    //CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, 4 * size.width, rgbColorSpace, kCGImageAlphaPremultipliedFirst);
    NSParameterAssert(context);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
So that this can be closed out, I'll restate what I did above. The videoSettings dictionary you use should contain the target dimensions of your video, but you're passing in the dimensions of your view. Unless that's what you want to record, you'll need to change the values passed in for the AVVideoWidthKey and AVVideoHeightKey to be the correct output sizes.
Given that iOS device screens have aspect ratios close to 4:3, this is probably what was leading to that aspect ratio in the recorded video.
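For illustration, a minimal sketch of what that change might look like, assuming a 1280x720 (16:9) target; the `outputSize` name and the 1280/720 values are just placeholders, so substitute whatever output dimensions you actually want:

//Hypothetical example: use explicit 16:9 output dimensions rather than
//the view's size, and pass the width to AVVideoWidthKey and the height
//to AVVideoHeightKey (the question's code has these two swapped).
CGSize outputSize = CGSizeMake(1280.0, 720.0);

NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               AVVideoCodecH264, AVVideoCodecKey,
                               [NSNumber numberWithInt:outputSize.width], AVVideoWidthKey,
                               [NSNumber numberWithInt:outputSize.height], AVVideoHeightKey, nil];

If you go this route, you'd also want to pass the same `outputSize` to pixelBufferFromCGImage:size: so the pixel buffers you append match the dimensions the writer expects.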