
Using FBO with multiple color attachments


They say there are no dumb questions; however, here is my shot at one :-)

Here’s what I’m trying to accomplish: consider a 3D designer with a large scene. For performance reasons, I’d like to split the render operations into two blocks: 1) the scene that is within the current viewing frustum (or cube, with an orthogonal projection) and 2) the edit operations (cursor, rubber-banding, selection highlighting, helper lines, etc.). The point is that while editing, most of the operations fall into 2), and rendering only those speeds things up, because only a few objects make up the “editing scene” whereas the entire scene (1) may consist of millions of primitives.

So far so good; I think an FBO is the choice here – load it up with render buffers for two color attachments and whatever is required for depth and stencil. So, I’d render 1) only when the scene actually changes (when an edit operation is committed, for example) and otherwise keep the texture/render buffer within the FBO unchanged. Before I start probing around, I was thinking some of you might help me with these questions:

  1. I’ve read that render buffers are less of a hassle than textures – so, can I use render buffers for the color attachments, or do they have to be textures?
  2. Suppose I have the results in the framebuffer – how are the color attachments merged together? Is this a feature of the framebuffer, or do I have to do it myself in some post-processing step?
  3. As far as I understand, an FBO can only have one depth/stencil buffer – since 2) is always drawn over everything, I’d just switch off the depth test when rendering 2) – does this make sense?
  4. Finally, how do I get this onto the canvas? When an FBO is made active, does the first color attachment always go out to the monitor, or is there a “default FBO” that is exclusively connected to the graphics device? Meaning, do I need to unbind “my FBO”, unbind the textures, and render them into the “default FBO”?

I’d very much appreciate your thoughts on this!

The environment I’m targeting is WebGL and WPF. The latter already makes use of an FBO with a DirectX-created texture that is finally copied into the D3DImage control in WPF.


Solution

    1. You cannot efficiently read from render buffers (they cannot be sampled as textures), hence you have to attach textures
    2. Framebuffer color attachments are not merged; their purpose is to output more data per pixel per draw call
    3. It would make sense if you composed right over, and into, your scene framebuffer – but you don't want that, because it would couple scene rendering to overlay rendering, and you want to keep them independent
    4. There's a default "screen" framebuffer; you configure it with the options passed at context creation (namely alpha, depth and stencil)
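    To make points 1 and 4 concrete, each offscreen render target used in the loop below (sceneBuffer and overlayBuffer) could be created with a helper along these lines. This is a sketch; the function name and parameters are my own, not from the question. It attaches a texture as the color attachment (so it can be sampled when composing) and a renderbuffer for depth:

    ```javascript
    // Sketch: an offscreen framebuffer with a samplable color texture
    // and a depth renderbuffer (WebGL 1 API).
    function createRenderTarget(gl, width, height) {
      // Color attachment must be a texture so compose() can sample it.
      const texture = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, texture);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                    gl.RGBA, gl.UNSIGNED_BYTE, null);
      // NPOT-safe settings: no mipmaps, clamp at the edges.
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

      // Depth is never sampled, so a renderbuffer is fine here.
      const depth = gl.createRenderbuffer();
      gl.bindRenderbuffer(gl.RENDERBUFFER, depth);
      gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16,
                             width, height);

      const framebuffer = gl.createFramebuffer();
      gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
      gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                              gl.TEXTURE_2D, texture, 0);
      gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                                 gl.RENDERBUFFER, depth);

      gl.bindFramebuffer(gl.FRAMEBUFFER, null);
      return { framebuffer, texture };
    }
    ```

    The returned texture is what you later bind for the compose pass.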

    So you see, in your scenario multiple color attachments don't make sense, but multiple framebuffers do. The rendering loop should look something like this:

    if (sceneUpdate) {
      gl.bindFramebuffer(gl.FRAMEBUFFER, sceneBuffer);
      renderScene(); // render complex 3d scene
    }
    if (editUpdate) {
      gl.bindFramebuffer(gl.FRAMEBUFFER, overlayBuffer);
      drawOverlays(); // render simple overlays
    }
    /* present */
    // switch back to the screen framebuffer
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    // bind color attachment of scene framebuffer
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, sceneBufferColorAttachmentTexture);
    // bind color attachment of overlay framebuffer
    gl.activeTexture(gl.TEXTURE1);
    gl.bindTexture(gl.TEXTURE_2D, overlayBufferColorAttachmentTexture);
    // draw a screen space rectangle blending the two textures together
    compose();
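
    compose() draws a screen-space rectangle with a shader that blends the overlay over the scene by its alpha. A sketch of the shader pair it might use – uniform and attribute names here are assumptions, not part of the code above:

    ```javascript
    // Vertex shader: pass a full-screen quad through in clip space.
    const composeVertexSrc = `
    attribute vec2 aPosition;        // quad corners in [-1, 1]
    varying vec2 vUv;
    void main() {
      vUv = aPosition * 0.5 + 0.5;   // map clip space to [0, 1] UVs
      gl_Position = vec4(aPosition, 0.0, 1.0);
    }`;

    // Fragment shader: standard "over" blend, so the scene shows
    // through wherever no overlay pixel was drawn (alpha == 0).
    const composeFragmentSrc = `
    precision mediump float;
    uniform sampler2D uScene;        // texture unit 0
    uniform sampler2D uOverlay;      // texture unit 1
    varying vec2 vUv;
    void main() {
      vec4 scene = texture2D(uScene, vUv);
      vec4 overlay = texture2D(uOverlay, vUv);
      gl_FragColor = mix(scene, overlay, overlay.a);
    }`;
    ```

    For this to work, the overlay framebuffer must be cleared to transparent black (alpha 0) before drawing the overlays, and its color attachment needs an alpha channel (RGBA).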