I collect a depth buffer from a scene. However, I notice that when the camera and the scene's objects stay at the same position and the camera only rotates, the depth buffer records different results: e.g. an object at the side of the screen has a different depth than when it appears in the middle of the screen. This might be intended behaviour of OpenGL, but even then, how can I correct for it? What do I need to take into account?
I linearize my depth with the following function:
float linearize(float depth) {
    // for a standard perspective projection this expects an NDC depth in [-1, 1]
    // and returns eye-space depth scaled to the range [zNear/zFar, 1]
    float zNear = 0.1;
    float zFar  = 100.0;
    return (2.0 * zNear) / (zFar + zNear - depth * (zFar - zNear));
}
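For reference, this function is meant to be called on a depth-texture sample remapped from [0, 1] to NDC, roughly like this (the sampler and texture-coordinate names are placeholders):

uniform sampler2D depthTexture; // placeholder name
varying vec2 texCoord;          // placeholder name

void main() {
    float d = texture2D(depthTexture, texCoord).r; // raw depth-buffer value in [0, 1]
    float ndcDepth = d * 2.0 - 1.0;                // remap to NDC [-1, 1]
    gl_FragColor = vec4(vec3(linearize(ndcDepth)), 1.0);
}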
I checked the near and far values; they should be correct.
Yes, this is to be expected. The near plane effectively acts as the projection surface, and the value stored in the depth buffer is the distance of the point measured perpendicular to that plane, i.e. along the view axis, not the distance to the viewpoint. Now think about what happens if you rotate your "camera" (of course there's no camera in OpenGL): for a point that stays put, this perpendicular distance varies with the cosine of the angle between the view direction and the direction to the point, so the stored depth changes even though the point itself has not moved.
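To make that concrete, here is a tiny illustration of the two quantities involved for a point given in eye-space coordinates (function names are illustrative):

float axialDepth(vec3 posEye) { return -posEye.z; }      // what the depth buffer is derived from; changes when the camera rotates
float radialDist(vec3 posEye) { return length(posEye); } // distance to the viewpoint; unchanged by a pure camera rotation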
If you want to actually measure the distance of the pixels to the viewpoint, the best course of action is not to use the depth buffer, but to write a vertex shader that passes the vertex's distance to the eye-space origin (using the builtin function length) as a scalar to the fragment shader; the fragment shader then writes this value into a single-channel framebuffer object attachment.
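A minimal sketch of such a shader pair, in old-style GLSL to match the snippets above; the matrix uniform and attribute names are assumptions, and the render target is assumed to be a single-channel colour attachment such as GL_R32F:

Vertex shader:

uniform mat4 modelViewMatrix;   // assumed uniform names
uniform mat4 projectionMatrix;
attribute vec3 position;
varying float distToEye;        // radial distance to the viewpoint

void main() {
    vec4 posEye = modelViewMatrix * vec4(position, 1.0);
    distToEye = length(posEye.xyz);          // builtin length(): invariant under camera rotation
    gl_Position = projectionMatrix * posEye;
}

Fragment shader:

varying float distToEye;

void main() {
    gl_FragColor = vec4(distToEye); // only the red channel is kept by a single-channel attachment
}

Note that interpolating the per-vertex distance across a triangle is only an approximation for interior fragments; if you need the exact per-fragment distance, pass the eye-space position as a varying instead and call length() in the fragment shader.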