Tags: java, opengl, 3d, libgdx, lwjgl

Occlusion query using GL_ANY_SAMPLES_PASSED returning true when fragments are occluded


I am in the process of implementing a lens glow effect for my engine.

However, the occlusion query returns true only when the fragments in question are completely occluded.

Perhaps the problem lies in the fact that I manually write the z-value of each vertex, since I am using a logarithmic depth buffer. However, I am not sure why this would affect occlusion testing.

Here are the relevant code snippets:

import java.nio.IntBuffer;

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.utils.BufferUtils;
import com.badlogic.gdx.utils.Disposable;

public class Query implements Disposable{
    private final int id;
    private final int type;

    private boolean inUse = false;

    public Query(int type){
        this.type = type;
        int[] arr = new int[1];
        Gdx.gl30.glGenQueries(1,arr,0);
        id = arr[0];
    }

    public void start(){
        Gdx.gl30.glBeginQuery(type, id);
        inUse = true;
    }

    public void end(){
        Gdx.gl30.glEndQuery(type);
    }

    public boolean isResultReady(){
        IntBuffer result = BufferUtils.newIntBuffer(1);
        Gdx.gl30.glGetQueryObjectuiv(id,Gdx.gl30.GL_QUERY_RESULT_AVAILABLE, result);
        return result.get(0) == Gdx.gl.GL_TRUE;
    }

    public int getResult(){
        inUse = false;
        IntBuffer result = BufferUtils.newIntBuffer(1);
        Gdx.gl30.glGetQueryObjectuiv(id, Gdx.gl30.GL_QUERY_RESULT, result);
        return result.get(0);
    }

    public boolean isInUse(){
        return inUse;
    }

    @Override
    public void dispose() {
        Gdx.gl30.glDeleteQueries(1, new int[]{id},0);
    }
}
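For reference, the query is constructed with the boolean query target from the title; a minimal usage sketch (illustrative only, my actual test method follows below):

Query query = new Query(GL30.GL_ANY_SAMPLES_PASSED);

//Later, once isResultReady() returns true:
//GL_ANY_SAMPLES_PASSED yields a boolean result, nonzero if any sample passed
boolean anyVisible = query.getResult() != 0;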

Here is the method where I do the actual test:

private void doOcclusionTest(Camera cam){
    //Collect the result of the previous query, if it is ready
    if(query.isResultReady()){
        int visibleSamples = query.getResult();
        System.out.println(visibleSamples);
    }

    //Offset the test point from the glow source toward the camera,
    //by ten times the glow's size
    temp4.set(cam.getPosition());
    temp4.sub(position);
    temp4.normalize();
    temp4.mul(getSize()*10);
    temp4.add(position);
    occlusionTestPoint.setPosition(temp4.x, temp4.y, temp4.z);

    //Only issue a new query once the previous result has been consumed
    if(!query.isInUse()) {
        query.start();
        Gdx.gl.glEnable(Gdx.gl.GL_DEPTH_TEST);
        occlusionTestPoint.render(renderer.getPointShader(), cam);
        query.end();
    }
}
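One thing to note: the test draw above writes to the color and depth buffers. If the test point should leave no visible trace, the render call could be wrapped with mask toggles, along these lines (a sketch, not part of my current code):

Gdx.gl.glColorMask(false, false, false, false); //Suppress color writes
Gdx.gl.glDepthMask(false); //Suppress depth writes for the test point
occlusionTestPoint.render(renderer.getPointShader(), cam);
Gdx.gl.glDepthMask(true);
Gdx.gl.glColorMask(true, true, true, true);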

My vertex shader for a point, with logarithmic depth buffer calculations included:

#version 330 core
layout (location = 0) in vec3 aPos;

uniform mat4 modelView;
uniform mat4 projection;
uniform float og_farPlaneDistance;
uniform float u_logarithmicDepthConstant;

vec4 modelToClipCoordinates(vec4 position, mat4 modelViewPerspectiveMatrix, float depthConstant, float farPlaneDistance){
    vec4 clip = modelViewPerspectiveMatrix * position;

    clip.z = ((2.0 * log(depthConstant * clip.z + 1.0) / log(depthConstant * farPlaneDistance + 1.0)) - 1.0) * clip.w;
    return clip;
}

void main()
{
    gl_Position = modelToClipCoordinates(vec4(aPos, 1.0), projection * modelView, u_logarithmicDepthConstant, og_farPlaneDistance);
}
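For reference, the remapping above computes

clip.z = (2 * log(C * z + 1) / log(C * f + 1) - 1) * clip.w

where C is u_logarithmicDepthConstant, f is og_farPlaneDistance, and z is the clip-space depth before remapping; after perspective division by clip.w the result lands in the usual [-1, 1] range.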

Fragment shader for a point:

#version 330 core

uniform vec4 color;

//gl_FragColor is not available in the core profile; declare an explicit output
out vec4 fragColor;

void main() {
    fragColor = color;
}

Since I am only testing occlusion for a single point, I know that the alternative would be to simply check the depth value of that pixel after everything has been rendered. However, I am unsure how I would calculate the logarithmic z-value of a point on the CPU.


Solution

  • I have found a solution to my problem. It is a workaround that is only feasible for single points, not for entire models, but here it goes:

    Firstly, you must calculate the z-value of your point and the pixel coordinate where it lies. Calculating the z-value should be straightforward; however, in my case I was using a logarithmic depth buffer, so I had to make a few extra calculations for the z-value.

    Here is my method to get the coordinates in normalized device coordinates, including the z-value (temp4f can be any Vector4f). Note that it applies the same logarithmic remapping as the vertex shader, so the result is directly comparable with what is stored in the depth buffer:

    public Vector4f worldSpaceToDeviceCoords(Vector4f pos){
        temp4f.set(pos);
        Matrix4f projection = transformation.getProjectionMatrix(FOV, screenWidth,screenHeight,1f,MAXVIEWDISTANCE);
        Matrix4f view = transformation.getViewMatrix(camera);
        view.transform(temp4f); //Multiply the point vector by the view matrix
        projection.transform(temp4f); //Multiply the point vector by the projection matrix
    
    
        temp4f.x = ((temp4f.x / temp4f.w) + 1) / 2f; //Convert x coordinate to range between 0 to 1
        temp4f.y = ((temp4f.y / temp4f.w) + 1) / 2f; //Convert y coordinate to range between 0 to 1
    
        //Logarithmic depth buffer z-value calculation (Get rid of this if not using a logarithmic depth buffer)
        temp4f.z = ((2.0f * (float)Math.log(LOGDEPTHCONSTANT * temp4f.z + 1.0f) /
                (float)Math.log(LOGDEPTHCONSTANT * MAXVIEWDISTANCE + 1.0f)) - 1.0f) * temp4f.w;
    
        temp4f.z /= temp4f.w; //Perform perspective division on the z-value
        temp4f.z = (temp4f.z + 1)/2f; //Transform z coordinate into range 0 to 1
    
        return temp4f;
    }
    

    And this other method is used to get the coordinates of the pixel on the screen (temp2f can be any Vector2f):

    public Vector2f projectPoint(Vector3f position){
        temp4f.set(worldSpaceToDeviceCoords(temp4f.set(position.x, position.y, position.z, 1)));
        temp4f.x *= screenWidth;
        temp4f.y *= screenHeight;

        //If the point is behind the camera, return null
        if (temp4f.w < 0){
            return null;
        }

        return temp2f.set(temp4f.x, temp4f.y);
    }
    

    Finally, a method to get the stored depth value at a given pixel (outBuff can be any direct FloatBuffer):

    public float getFramebufferDepthComponent(int x, int y){
        //Note: reading GL_DEPTH_COMPONENT with glReadPixels works on desktop OpenGL,
        //but core OpenGL ES does not allow reading the depth buffer this way
        Gdx.gl.glReadPixels(x, y, 1, 1, Gdx.gl.GL_DEPTH_COMPONENT, Gdx.gl.GL_FLOAT, outBuff);
        return outBuff.get(0);
    }
    

    So, with these methods, here is what you need to do to find out whether a certain point is occluded (a combined sketch follows after the list):

    1. Check at which pixel the point lies (second method)
    2. Retrieve the currently stored z-value at that pixel (third method)
    3. Get the calculated z-value of the point (first method)
    4. If the calculated z-value is lower than the stored z-value, then the point is visible

    Please note that you should draw everything in the scene before sampling the depth buffer; otherwise the extracted depth value will not reflect everything that has been rendered.
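
    Tying it all together, here is a minimal sketch of the full check. It assumes the fields used by the methods above (temp4f, temp2f, the screen size) are in scope; isPointVisible and the small epsilon are illustrative additions, not part of the code above:

    public boolean isPointVisible(Vector3f point){
        Vector2f pixel = projectPoint(point); //Step 1: pixel coordinates of the point
        if (pixel == null){
            return false; //The point is behind the camera
        }
        int x = (int)pixel.x;
        int y = (int)pixel.y;

        //Step 3: the point's own calculated z-value
        float pointDepth = worldSpaceToDeviceCoords(temp4f.set(point.x, point.y, point.z, 1f)).z;
        //Step 2: the z-value currently stored at that pixel
        float storedDepth = getFramebufferDepthComponent(x, y);

        //Step 4: the point is visible if it is at least as near as the stored depth
        return pointDepth <= storedDepth + 0.0001f;
    }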