I'm reconstructing the fragment position from a depth texture to perform lighting calculations efficiently.
The problem is that in areas with no geometry (for example, the floor, ceiling, and window in the image below), the fragment output colour is black after the lighting calculations. I need these areas to be white so that a skybox can show through.
How can I detect from within the shader if a pixel fragment has no original geometry at that location?
void main(void)
{
    // Calculate the screen-space texture coordinate
    vec2 gScreenSize = vec2(screenWidth, screenHeight);
    vec2 TexCoord = gl_FragCoord.xy / gScreenSize;

    // Fetch the fragment depth (from the depth buffer) and remap it to NDC [-1, 1]
    float z = texture(MyTexture2, TexCoord).x * 2.0 - 1.0;
    vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, z, 1.0);

    // Unproject to view space, then apply the perspective division
    vec4 viewSpacePosition = inverseProjectionMatrix * clipSpacePosition;
    viewSpacePosition /= viewSpacePosition.w;

    // Transform to world space to get the fragment's XYZ position
    vec4 worldSpacePosition = inverseViewMatrix * viewSpacePosition;
    vec3 fragPosition = worldSpacePosition.xyz;

    // Lighting calculations
    ......
}
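For reference, one common way to detect "no geometry" directly is to test the sampled depth against the clear value. This is a sketch, assuming the depth buffer is cleared to 1.0 each frame and that the output variable is named `FragColor` (that name is an assumption, not from the code above):

```glsl
// Sketch: early-out for fragments where nothing was rendered.
// Assumes the depth attachment (MyTexture2) is cleared to 1.0.
float depth = texture(MyTexture2, TexCoord).x;
if (depth == 1.0)           // still the clear value: no geometry here
{
    FragColor = vec4(1.0);  // output white so the skybox shows through
    return;
}
```

If precision is a concern, comparing with `depth >= 0.9999` instead of exact equality is a common safeguard, at the cost of also catching geometry that sits on the far plane.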
No doubt there are many solutions to this problem; here is the one I came up with:
- Skybox pass: render the skybox into the Albedo texture and black into the Normal texture.
- Geometry pass: render the remaining scene data over the skybox (Albedo and Normal textures).
- Lighting pass: calculate lighting; if the fragment normal is black, output white.
- Composite pass: multiply the Albedo texture by the Lighting texture.
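The normal-black test in the lighting pass might be sketched like this (`MyNormalTexture` and `FragColor` are assumed names, not from the original code):

```glsl
// Sketch of the lighting-pass check: a black normal marks a skybox pixel.
vec3 normal = texture(MyNormalTexture, TexCoord).xyz;
if (dot(normal, normal) == 0.0)  // normal is (0,0,0): skybox fragment
{
    FragColor = vec4(1.0);       // white, so the composite keeps the skybox albedo
    return;
}
// ... otherwise proceed with the usual lighting calculations
```

This works because multiplying the skybox's albedo by white in the composite pass leaves it unchanged, while lit geometry is modulated by the computed lighting as usual.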