java · opengl · lwjgl

OpenGL Depth Buffer has short range


With "OpenGL Depth Buffer has short range" I don't mean that the far plane is too close, its a problem with the depth buffer as a texture. If I look at the buffer it only shows Objects that are very close. I don't know how to explain it better just look at the pictures.

In the top-right corner you can see the depth buffer. The images can be found here: https://i.sstatic.net/0jE2B.jpg

As you can see, I have to get very close to see some darkness in the depth buffer.

Here is the code that creates the depth attachment for an FBO:

For a depth texture:

// create a 24-bit depth texture and attach it to the currently bound FBO
depthTexture = GL11.glGenTextures();
GL11.glBindTexture(GL11.GL_TEXTURE_2D, depthTexture);
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL14.GL_DEPTH_COMPONENT24, width, height, 0, GL11.GL_DEPTH_COMPONENT, GL11.GL_FLOAT, (ByteBuffer) null);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_DEPTH_ATTACHMENT, GL11.GL_TEXTURE_2D, depthTexture, 0);

For a depth buffer:

// create a multisampled 24-bit depth renderbuffer and attach it to the bound FBO
depthBuffer = GL30.glGenRenderbuffers();
GL30.glBindRenderbuffer(GL30.GL_RENDERBUFFER, depthBuffer);
GL30.glRenderbufferStorageMultisample(GL30.GL_RENDERBUFFER, multisampling, GL14.GL_DEPTH_COMPONENT24, width, height);
GL30.glFramebufferRenderbuffer(GL30.GL_FRAMEBUFFER, GL30.GL_DEPTH_ATTACHMENT, GL30.GL_RENDERBUFFER, depthBuffer);

You can ignore the GLxx. prefixes at the beginning of the OpenGL methods; that's just how LWJGL organizes its bindings.

If you need more code, just tell me.


Solution

  • As you can see, I have to get very close to see some darkness in the depth buffer.

    That's just how the hyperbolic depth buffer works.

    Let's have a look at the projection matrix (I'm using the classical OpenGL conventions, where the camera looks along -z in eye space and the projection matrix flips from right-handed to left-handed space):

    .    0       .             0
    0    .       .             0
    0    0  -(f+n)/(f-n)  -2*f*n/(f-n)
    0    0      -1             0
    

    (the . symbolizes just some numbers we don't need to care about here).
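    For reference, a matrix with exactly this layout is what JOML's perspective() method produces. Many LWJGL projects use JOML for their matrix math, but that's an assumption on my part; the question doesn't show the matrix setup:

    import org.joml.Matrix4f;

    float n = 1.0f, f = 1001.0f;          // near/far values used further below
    Matrix4f proj = new Matrix4f().perspective(
            (float) Math.toRadians(70.0), // vertical field of view (made-up value)
            16.0f / 9.0f,                 // aspect ratio (made-up value)
            n, f);
    // JOML names entries mCR (column, row); compare with the third row above:
    System.out.println(proj.m22());       // -(f+n)/(f-n) = -1.002
    System.out.println(proj.m32());       // -2*f*n/(f-n) = -2.002
    System.out.println(proj.m23());       // -1
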

    When you multiply that matrix with some eye space vector (x_eye, y_eye, z_eye, 1), you'll end up with

    x_clip = ...
    y_clip = ...
    z_clip = [-(f+n)/(f-n)] * z_eye + [-2*f*n/(f-n)] * 1
    w_clip = -z_eye
    

    After the perspective divide by w_clip, we end up with our z value in normalized device coordinates (ndc):

    z_ndc = z_clip / w_clip = (f+n)/(f-n) + 2*f*n/[(f-n)*z_eye]
    

    Finally, the glDepthRange mapping is applied to reach window space z. The default maps [-1,1] to [0,1], so let's do that here:

    z_win = 0.5 * z_ndc + 0.5 = 0.5*(f+n)/(f-n)  + f*n/[(f-n)*z_eye] + 0.5
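
    In code, this whole chain from eye-space z to window-space depth is just the following little helper (a direct transcription of the formulas above, assuming the default depth range):

    /** Maps eye-space z (negative in front of the camera) to window-space
        depth in [0,1], for near plane distance n and far plane distance f. */
    static double zWin(double zEye, double n, double f) {
        double zNdc = (f + n) / (f - n) + 2.0 * f * n / ((f - n) * zEye);
        return 0.5 * zNdc + 0.5; // default glDepthRange(0, 1)
    }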
    

    This is obviously a function of z_eye, and also of f and n. Let's assume you are using a near plane at distance 1 and a far plane at distance 1001, so this will evaluate to:

    z_win(z_eye) = 1002/2000 + 1001/(1000 * z_eye) + 0.5 = 1001/1000 + 1001/(1000 * z_eye)
    

    So, let's check what we got so far:

    z_win(-1)    = 0:  a point on the near plane ends up as 0 in the depth buffer
    z_win(-1001) = 1:  a point on the far plane ends up as 1 in the depth buffer
    

    This shouldn't surprise us, since the mapping was constructed to do exactly that. But what happens to the points in between?

    z_win(-50)  = 1001/1000 - 1001/50000  = 0.98098
    z_win(-100) = 1001/1000 - 1001/100000 = 0.99099
    z_win(-250) = 1001/1000 - 1001/250000 = 0.996996
    z_win(-500) = 1001/1000 - 1001/500000 = 0.998998
    z_win(-750) = 1001/1000 - 1001/750000 = 0.999665333
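
    (You can reproduce this table with the zWin helper sketched earlier:)

    for (double zE : new double[] { -1, -50, -100, -250, -500, -750, -1001 }) {
        System.out.printf("z_win(%5.0f) = %.9f%n", zE, zWin(zE, 1.0, 1001.0));
    }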
    

    So, as you can see, any object farther away than 100 units in eye space will end up with a depth buffer value above 0.99.

    To put it the other way around, we can just calculate the eye space z for a point which will get 0.5 in the depth buffer:

    z_eye(z_win) = 1001/(1000*z_win -1001)
    z_eye(0.5) = -1.99800399
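
    The general inverse for arbitrary n and f is obtained the same way, by solving the z_win formula above for z_eye. As a small Java helper (again just a transcription of the math, not code from the question):

    /** Inverts the hyperbolic mapping: window-space depth back to eye-space z. */
    static double zEye(double zWin, double n, double f) {
        return f * n / ((f - n) * (zWin - 0.5) - 0.5 * (f + n));
    }
    // zEye(0.5, 1, 1001)       ≈ -1.998, matching the number above
    // zEye(254.0/255, 1, 1001) ≈ -203.4, the "~200 units" mentioned below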
    

    Yes, that's right: with a frustum reaching from 1 to 1001 units away, only the range from one to two units in front of the camera is mapped to the first half of the depth buffer range, and the roughly 999 units after that are stuffed into the second half.

    So, if you try to visualize the depth buffer just as colors, you won't see anything but the closest parts. With 8-bit color, everything above 254/255 ≈ 0.996 will be fully saturated (which in my example means everything beyond roughly 200 units), and even below that, the differences will be so minute as to be hardly visible at all.

    If you want to just visualize the depth buffer, you should invert the hyperbolic distortion and visualize linear (= eye space) depth, as in the sketch below.
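
    A minimal sketch of that, reusing the inverse mapping from above (in a real renderer you would port the same expression into the fragment shader that samples the depth texture, but the math is identical):

    /** Turns a depth buffer sample in [0,1] into a linear [0,1] value for
        visualization: 0 at the near plane, 1 at the far plane. */
    static double linearizeDepth(double depth, double n, double f) {
        double zEye = f * n / ((f - n) * (depth - 0.5) - 0.5 * (f + n));
        return (-zEye - n) / (f - n); // -zEye is the distance in front of the camera
    }

    With this mapping, a point halfway between the near and far plane shows up as 0.5 instead of ~0.999, so the gradient actually becomes visible.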