I am writing an iOS video application using OpenGL ES 2.0 to do image processing.
My input and output video format is YUV 4:2:0, which is the native pixel format for most devices after the iPhone 3GS. For the A5 processor and higher I simply create a luma texture and a chroma texture and attach them to the offscreen framebuffer. I create my texture as follows:
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                             _videoTextureCache,
                                             pixelBuffer,
                                             NULL,
                                             GL_TEXTURE_2D,
                                             GL_RED_EXT,
                                             (int)CVPixelBufferGetWidthOfPlane(pixelBuffer, 0),
                                             (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, 0),
                                             GL_RED_EXT,
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &lumaTexture);
and then I attach it to the program like:
glActiveTexture([self getTextureUnit:textureUnit]);
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
if (uniform != -1)
{
    glUniform1i(uniforms[uniform], textureUnit);
}
In my shader I can then simply do:
gl_FragColor.r = texture2D(SamplerY, textureRead).r;
to assign a luma value to the buffer and save the resulting video frame to disk.
Unfortunately I am running into problems on the iPhone 4, which doesn't use the A5 processor, so GL_RED_EXT isn't supported there.
I have tried to figure out a way to write to a single-channel luma buffer in OpenGL ES, but keep running into problems. I tried simply changing GL_RED_EXT to GL_LUMINANCE, but found out that it isn't possible to render to a GL_LUMINANCE texture.
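For anyone hitting the same branch point: whether GL_RED_EXT is usable can be decided at runtime by searching the string returned by glGetString(GL_EXTENSIONS) for the GL_EXT_texture_rg token. A naive strstr is not quite enough, since one extension name can be a prefix of another, so a whole-token match is needed. Here is a small sketch of that check (the helper name has_gl_extension is mine, not an OpenGL API):

```c
#include <stdbool.h>
#include <string.h>

/* Returns true if `name` appears as a whole, space-delimited token in
 * `extensions` (e.g. the string returned by glGetString(GL_EXTENSIONS)).
 * A plain strstr would wrongly match prefixes of longer extension names. */
static bool has_gl_extension(const char *extensions, const char *name)
{
    size_t len = strlen(name);
    const char *p = extensions;
    while ((p = strstr(p, name)) != NULL) {
        bool starts = (p == extensions) || (p[-1] == ' ');
        bool ends   = (p[len] == ' ') || (p[len] == '\0');
        if (starts && ends)
            return true;
        p += len;  /* keep scanning past this partial match */
    }
    return false;
}
```

With a current GL context, the call would look like `has_gl_extension((const char *)glGetString(GL_EXTENSIONS), "GL_EXT_texture_rg")`.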
I then tried registering a color attachment and a depth attachment as:
GLuint colorRenderbuffer;
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8_OES,
                      (int)CVPixelBufferGetWidthOfPlane(renderData.destinationPixelBuffer, 0),
                      (int)CVPixelBufferGetHeightOfPlane(renderData.destinationPixelBuffer, 0));
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorRenderbuffer);

GLuint depthRenderbuffer;
glGenRenderbuffers(1, &depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16,
                      (int)CVPixelBufferGetWidthOfPlane(renderData.destinationPixelBuffer, 0),
                      (int)CVPixelBufferGetHeightOfPlane(renderData.destinationPixelBuffer, 0));
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRenderbuffer);
Writing to the depth buffer in my fragment shader:
gl_FragDepth.z = texture2D(SamplerY, textureRead).r;
And then writing the result to the pixel buffer as:
glReadPixels(0, 0,
             (int)CVPixelBufferGetWidthOfPlane(renderData.destinationPixelBuffer, 0),
             (int)CVPixelBufferGetHeightOfPlane(renderData.destinationPixelBuffer, 0),
             GL_LUMINANCE, GL_UNSIGNED_BYTE,
             CVPixelBufferGetBaseAddressOfPlane(renderData.destinationPixelBuffer, 0));
But again I read in the specs that OpenGL ES 2.0 does not support writing directly to the depth buffer.
So I am left with no obvious way to create a single channel color attachment and I am not sure how I could write to a RGB color attachment and only copy one channel to my pixel buffer.
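To be concrete about the second half of that question: the CPU-side copy I have in mind would read the RGBA attachment back with glReadPixels(..., GL_RGBA, GL_UNSIGNED_BYTE, ...) and then strip out every fourth byte into the luma plane. A sketch of that strip step (copy_red_channel is just an illustrative name, and it assumes a tightly packed readback buffer):

```c
#include <stddef.h>

/* Copies the red channel of a tightly packed RGBA buffer (as returned by
 * glReadPixels with GL_RGBA/GL_UNSIGNED_BYTE) into a single-channel plane.
 * dstBytesPerRow accounts for the row padding a CVPixelBuffer plane may have. */
static void copy_red_channel(const unsigned char *rgba,
                             unsigned char *dst,
                             size_t width, size_t height,
                             size_t dstBytesPerRow)
{
    for (size_t y = 0; y < height; y++) {
        const unsigned char *src = rgba + y * width * 4;
        unsigned char *row = dst + y * dstBytesPerRow;
        for (size_t x = 0; x < width; x++)
            row[x] = src[x * 4];  /* red component only */
    }
}
```

That works, but it is an extra full-frame pass on the CPU per frame, which is what I was hoping to avoid.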
Sorry for the long post, just wanted to give as much information as possible.
Any ideas?
So how I ended up solving this was to use GL_LUMINANCE and GL_LUMINANCE_ALPHA instead of GL_RED_EXT and GL_RG_EXT when those extensions aren't available. I then added a step after my render passes that uses glReadPixels to manually copy the pixels into the expected format of my preallocated pixel buffers.
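One detail worth spelling out in that copy step: the glReadPixels result is tightly packed, while a CVPixelBuffer plane usually has bytesPerRow larger than the pixel width, so the copy has to go row by row. A sketch of that copy (copy_plane is an illustrative name; bytesPerPixel would be 1 for a GL_LUMINANCE readback into the Y plane and 2 for GL_LUMINANCE_ALPHA into an interleaved CbCr plane):

```c
#include <stddef.h>
#include <string.h>

/* Copies a tightly packed glReadPixels result into a (possibly row-padded)
 * destination plane, one row at a time. */
static void copy_plane(const unsigned char *src, unsigned char *dst,
                       size_t width, size_t height,
                       size_t bytesPerPixel, size_t dstBytesPerRow)
{
    size_t srcBytesPerRow = width * bytesPerPixel;
    for (size_t y = 0; y < height; y++)
        memcpy(dst + y * dstBytesPerRow,
               src + y * srcBytesPerRow,
               srcBytesPerRow);
}
```

The destination pointer and row stride would come from CVPixelBufferGetBaseAddressOfPlane and CVPixelBufferGetBytesPerRowOfPlane for the plane in question, with the buffer locked around the copy.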