I have an application where I need to do the following:
I've got that much working.
Next, I'd like to be able to move step #2 to a separate shared GL context.
On initialization, I create a shared context:
rootContext = CGLGetCurrentContext();
CGLPixelFormatObj pf = CGLGetPixelFormat(rootContext);
CGLCreateContext(pf, rootContext, &childContext);
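(In case it helps anyone reproducing this, a sketch of the same creation step with the return values checked; the CGLError codes are about the only diagnostics CGL gives you here, and CGLErrorString() turns them into readable messages:)

```c
#include <stdio.h>
#include <OpenGL/OpenGL.h>

// Assumes a GL context is already current on this thread.
CGLContextObj rootContext = CGLGetCurrentContext();
// The child context must use the root's pixel format to be shareable.
CGLPixelFormatObj pf = CGLGetPixelFormat(rootContext);
CGLContextObj childContext = NULL;
CGLError err = CGLCreateContext(pf, rootContext, &childContext);
if (err != kCGLNoError || childContext == NULL) {
    fprintf(stderr, "CGLCreateContext failed: %s\n", CGLErrorString(err));
}
```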
...then make it current and set up the framebuffer on it...
CGLSetCurrentContext(childContext);
glGenTextures(1, &childTexture);
glBindTexture(GL_TEXTURE_2D, childTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image.width(), image.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffers(1, &childFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, childFramebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, childTexture, 0);
Then, when it's time to render each frame, I make childContext current and render to it:
CGLSetCurrentContext(childContext);
glBindFramebuffer(GL_FRAMEBUFFER, childFramebuffer);
glUseProgram(childProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, inputTexture);
glUniform1i(childTextureUniform, 0);
glBindBuffer(GL_ARRAY_BUFFER, childQuadPositionBuffer);
glVertexAttribPointer(childPositionAttribute, 2, GL_FLOAT, GL_FALSE, sizeof(GLfloat)*2, (void*)0);
glEnableVertexAttribArray(childPositionAttribute);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, childQuadElementBuffer);
glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_SHORT, (void*)0);
glDisableVertexAttribArray(childPositionAttribute);
glUseProgram(0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
...then I make rootContext current and render the FBO's texture to screen:
CGLSetCurrentContext(rootContext);
glUseProgram(rootProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, childTexture); // This texture was created and populated on childContext.
glUniform1i(rootTextureUniform, 0);
glBindBuffer(GL_ARRAY_BUFFER, rootQuadPositionBuffer);
glVertexAttribPointer(rootPositionAttribute, 2, GL_FLOAT, GL_FALSE, sizeof(GLfloat)*2, (void*)0);
glEnableVertexAttribArray(rootPositionAttribute);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, rootQuadElementBuffer);
glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_SHORT, (void*)0);
glDisableVertexAttribArray(rootPositionAttribute);
glUseProgram(0);
This works perfectly if I comment out the CGLSetCurrentContext(childContext); calls.
But if I switch to the shared context to render to the FBO, I see glitchy garbage rendered onscreen, as though nothing ever gets rendered onto childTexture, which is really cool, but I'm going for more of a realist aesthetic here.
Any ideas about how I can get this working when using shared contexts?
Things I've already checked:

- CGLErrors when creating the shared context: there are no errors; it's created successfully.
- glGetError(): there are no errors.
- glCheckFramebufferStatus(): it's GL_FRAMEBUFFER_COMPLETE.
- Adding glBindTexture(GL_TEXTURE_2D, 0); after each glDisableVertexAttribArray() call, as suggested by @datenwolf: no change.
- I created 2 simple test apps, which both exhibit the same problem.
I got it working, with 2 changes to the sample code above:

- Calling glViewport() on childContext once after creating it
- Calling glFlushRenderAPPLE() after rendering to the FBO each frame
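For reference, a rough sketch of where those two calls slot into the code above (identifiers match the earlier snippets; glFlushRenderAPPLE() is from the APPLE_flush_render extension):

```c
// One-time setup, right after creating childContext:
CGLSetCurrentContext(childContext);
// A fresh context's viewport doesn't match the FBO's dimensions,
// so set it explicitly before rendering to the attachment.
glViewport(0, 0, image.width(), image.height());
// ...texture and framebuffer setup as above...

// Per frame, after rendering to the FBO on childContext:
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Flush pending commands so the rendered texture contents are
// visible to the shared rootContext before it samples childTexture.
glFlushRenderAPPLE();
CGLSetCurrentContext(rootContext);
// ...draw childTexture to screen as above...
```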