
deferred shading and background color


I implemented simplified deferred shading (I don't calculate bounding volumes for point lights): after filling the g-buffer I just render a full-screen quad to compute the lighting. I use additive blending (glBlendFunc(GL_ONE, GL_ONE)) during the second stage to handle multiple lights. For correct results, the RGB values in glClearColor must then be set to 0. When filling the g-buffer, glClearColor can be any color (some colors might change only the background color in the final image). Now I wonder how I should set the background color of the final image. One way is to use glClearColor(0, 0, 0) while filling the g-buffer and then use the following if-statement in the fragment shader:

if((normal.x == 0.0) && (normal.y == 0.0) && (normal.z == 0.0))
{
    fragColor = vec4(1, 0, 0, 1); // here we can set a background color
}
else
{
    fragColor = computeLighting(worldPos, normal, diffM, specM, specMS);
} 

It works fine, but the if-statement might cause a performance penalty. Is this the only way to set a background color?


Solution

  • Not sure if I understand what the problem is, but here are some thoughts with a lot of assumptions.

    Are you thinking about doing something like this?

    • Clear the diffuse attachment in your g-buffer with the background color you want (however, you don't want to clear your other textures, such as normals, with this value!)
    • After filling your gbuffer, anything not covered by your geometry should still have the background color (in your diffuse attachment)
    • If the normal is not defined for the fragment, you manually write a hard coded color in the last stage.

    I assume your renderer does the following:

    • Fill the g-buffer (diffuse, normals, depth etc. in an FBO with several attachments)
    • For each light, render a fullscreen quad with additive blending into a separate FBO (the light accumulation buffer).
    • Finally, combine the diffuse attachment of your g-buffer with the light accumulation buffer to render the end result to the screen.

    There's really no reason your shader should be responsible for writing the background color like this. I would actually render whatever is in the background and always clear the g-buffer with 0 values. Things can go wrong when you combine diffuse and light in the last stage, so it might be simpler to go for the stencil approach explained further down. Personally, I store a material index in the alpha channel of the diffuse color and upload all the material properties in a texture.
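    As a minimal sketch of that idea, the g-buffer pass could pack the index like this (uMaterialIndex is an assumed per-draw uniform, and the division by 255.0 assumes an 8-bit RGBA8 diffuse attachment):

    ```glsl
    uniform int uMaterialIndex;               // assumed per-draw material id

    layout(location = 0) out vec4 diffuseOut; // diffuse attachment (RGBA8 assumed)

    void main()
    {
        vec3 materialDiffuse = vec3(1.0);     // placeholder diffuse color
        // Pack the material index into alpha so the combine pass can
        // look up the material's properties later.
        diffuseOut = vec4(materialDiffuse, float(uMaterialIndex) / 255.0);
    }
    ```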

    In my materials I have (among other things) two scalars:

    • AmbientWeight
    • LightWeight

    When combining the diffuse buffer and light buffer (greatly simplified) :

    FinalColor = Diffuse * AmbientWeight + Diffuse * Light * LightWeight
    

    If your background uses material 0 with AmbientWeight = 1 and LightWeight = 0, the FinalColor will always be the original value in the diffuse buffer.
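    A sketch of such a combine pass in GLSL (texture and uniform names are assumptions, and the material table is assumed to be a small one-row float texture with AmbientWeight in .x and LightWeight in .y):

    ```glsl
    #version 400 core

    uniform sampler2D uDiffuseTex;   // g-buffer diffuse, material index in .a
    uniform sampler2D uLightTex;     // light accumulation buffer
    uniform sampler2D uMaterialTex;  // one-row material table (assumed layout)

    in  vec2 vTexCoord;
    out vec4 fragColor;

    void main()
    {
        vec4 diffuse = texture(uDiffuseTex, vTexCoord);
        vec3 light   = texture(uLightTex,   vTexCoord).rgb;

        // Fetch the per-material weights using the index stored in alpha.
        vec2 w = texture(uMaterialTex, vec2(diffuse.a, 0.5)).xy;
        float ambientWeight = w.x;   // material 0: 1.0 for the background
        float lightWeight   = w.y;   // material 0: 0.0 for the background

        fragColor = vec4(diffuse.rgb * ambientWeight
                       + diffuse.rgb * light * lightWeight, 1.0);
    }
    ```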

    Many simple deferred renderers just calculate the end result this way :

    FinalColor = Diffuse * Light (Fragment from diffuse buffer * fragment from light buffer)

    In your case this will of course cause your background color to disappear, since those fragments are never lit (Diffuse * 0 is always the outcome). You could use the alpha channel of the diffuse buffer as the AmbientWeight for some quick results.

    FinalColor = Diffuse * Diffuse.a + Diffuse * Light
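    In the combine shader that quick variant is just a few lines (texture names assumed):

    ```glsl
    vec4 diffuse = texture(uDiffuseTex, vTexCoord);
    vec3 light   = texture(uLightTex,   vTexCoord).rgb;
    // Alpha doubles as the ambient weight here.
    fragColor = vec4(diffuse.rgb * diffuse.a + diffuse.rgb * light, 1.0);
    ```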
    

    When it comes to performance :

    This is really hard to predict. Skipping the final light calculations in the shader might gain you something, but you have already done all the g-buffer reads and unpacking before you reach this point. No matter what the shader returns, the blend operation touches the entire light buffer, and you read the entire g-buffer per light. Checking whether all components in the normal buffer are 0 will only trigger for areas without geometry. When using a fullscreen quad per light, you will have quite a few bottlenecks.

    Starting by reading the position buffer (or reconstructing the position from your depth buffer), then determining whether the point light can reach the fragment and discarding it before you do anything else might help a bit. For smaller lights you then avoid reading everything from the g-buffer per fragment. It really depends on how fat your g-buffer is, what you are rendering, how large your lights are and how many lights you render.
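    The early-out could look like this in the point-light pass (uPositionTex, uLightPos and uLightRadius are assumed names; the point is to reject the fragment before the remaining, more expensive g-buffer reads):

    ```glsl
    vec3 worldPos = texture(uPositionTex, vTexCoord).xyz;
    if (distance(worldPos, uLightPos) > uLightRadius)
        discard;                     // light cannot reach this fragment

    // Only now unpack the rest of the g-buffer and compute this
    // light's contribution.
    vec3 normal = texture(uNormalTex, vTexCoord).xyz;
    ```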

    Dynamic branching can also kill performance, but can sometimes be a "lesser evil". I avoid it as much as possible.

    Extra :

    When it comes to the "background color", I personally use the stencil buffer to fill the background with a skybox or similar: build a stencil mask while writing your diffuse buffer, then render the background with the inverse mask so only background fragments are affected (without depth testing or depth writes). If the entire scene were covered with geometry, no fragments would be written. This assumes you write the end result to a third FBO that shares the depth attachment with your g-buffer (a Depth24 + Stencil8 buffer).

    Instead of drawing each light with a fullscreen quad (with blending), you could also send in arrays of light information using UBOs, then draw all point lights with one fullscreen quad. You end up doing the same number of light calculations, but the number of reads and writes stays constant. (UBOs still have a size limit, though.)
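    A sketch of that single-pass layout (MAX_LIGHTS, the std140 block layout and the computePointLight helper are all assumptions; the UBO size limit caps how many lights fit in one block):

    ```glsl
    #define MAX_LIGHTS 256               // assumed; bounded by the UBO size limit

    struct PointLight {
        vec4 positionRadius;             // xyz = world position, w = radius
        vec4 color;                      // rgb = color, a unused (std140 padding)
    };

    layout(std140) uniform Lights {
        int        uLightCount;
        PointLight uLights[MAX_LIGHTS];
    };

    // Called from the fullscreen pass after unpacking the g-buffer once.
    vec3 accumulateLights(vec3 worldPos, vec3 normal)
    {
        vec3 light = vec3(0.0);
        for (int i = 0; i < uLightCount; ++i)
            light += computePointLight(uLights[i], worldPos, normal); // assumed helper
        return light;
    }
    ```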

    You might also find Tiled Deferred Shading of interest as a potential next step: http://www.cse.chalmers.se/~olaolss/main_frame.php?contents=publication&id=tiled_shading (you only read from the g-buffer once and write only one fragment in the light pass)

    Paper : http://www.cse.chalmers.se/~uffe/tiled_shading_preprint.pdf