Tags: shader, directx, hlsl, depth-buffer

How to do my own depth buffering in an HLSL shader?


I'm making a game in Unity using HLSL and DirectX. In my game there are cases where a large triangle has vertices that are close to the camera in the forward direction but far away in the sideways and vertical directions. The effect is that the fragments of that triangle are very close to you and cover much of your view. I want those triangles to appear behind other triangles. My idea for achieving this: instead of depth buffering using the z coordinates of the vertices -- which I believe is what DirectX does by default -- I calculate the distance of each vertex from the camera, interpolate that distance as usual, and use that for depth buffering instead. This way, if a triangle is right in my face but its vertices are far away, the fragments in the middle of the triangle will also be treated as far away.

Only, when I try this, it doesn't work: the triangles seem to overlap in a random order. Here's a sketch of my code:

            struct appdata
            {
                ...
            };

            struct v2f
            {
                ... // other members
                float4 vertex : SV_Position;
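                // radial distance from the camera, computed per vertex and
                // interpolated across the triangle by the rasterizer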
                float distance : COLOR1;
            };

            struct target
            {
                fixed4 col : SV_Target;
                float depth : SV_Depth;
            };

            v2f vert (appdata v)
            {
                v2f o;

                // math math math
                // here we calculate the position in camera space `tpos`,
                // and `dis` is the length of `tpos`
                // UNITY_MATRIX_P is the built in projection matrix

                o.vertex = mul(UNITY_MATRIX_P, float4(tpos, 1.0f)); // projected space
                o.distance = dis;

                return o;
            }

            target frag (v2f i)
            {
                target o;
                o.depth = i.distance;
                o.col = _Color * i.bright;
                return o;
            }

I've also tried setting o.distance = -dis instead, but that doesn't work either. Neither does o.distance = tpos.z or -tpos.z, which confuses me because I thought that's essentially what the rasterizer does by default. To be clear, when I say these don't work, I mean the shader runs but the choice of which triangle appears in front is seemingly random. So does anyone know what's going wrong, or what the correct way to do this is?

Also, sorry for the lack of a minimal working example; my shader depends too much on my other code. Hopefully my question is conceptual enough that this is okay.


Solution

  • Your interpretation of what the hardware does by default is wrong. The 'depth' value the rasterizer uses by default is the per-fragment interpolated value of o.vertex.z / o.vertex.w, which is the interpolated, normalized fragment depth value in screen space (also called projection space). This corresponds to the fragment's distance from the camera along the view direction -- not its radial distance -- normalized according to the near and far planes specified in the projection matrix, with the nonlinear scaling of perspective projection taken into account.

    That aside, the reason your approach doesn't work is that depth buffers expect values in the range 0 to 1 (or -1 to 1 in the case of OpenGL), and depth values outside that range are clamped. Since both dis and tpos.z are in view space, which is just a rotated and translated world space, they will usually be much larger than 1. As far as the depth buffer is concerned, all of your geometry therefore has a depth of exactly 1 (the clamped value), which is much like not using a depth buffer at all.
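
    If you still want the distance-based behaviour, one rough sketch (assuming Unity's view-space convention, where "straight ahead of the camera" is along negative Z -- flip the sign if your tpos uses the opposite convention; aheadClip is just an illustrative name) is to run the interpolated distance through the same projection matrix the hardware uses before writing it, so it picks up the same near/far normalization -- and, on Unity's DirectX path, the same reversed-Z convention -- as the default depth of everything else in the scene:

        target frag (v2f i)
        {
            target o;

            // Project a point that lies straight ahead of the camera at the
            // interpolated radial distance. Its z/w then goes through the same
            // nonlinear near/far mapping (and reversed-Z, if active) that the
            // hardware applies to regular geometry, landing in the 0..1 range
            // SV_Depth expects on DirectX.
            float4 aheadClip = mul(UNITY_MATRIX_P, float4(0.0f, 0.0f, -i.distance, 1.0f));
            o.depth = aheadClip.z / aheadClip.w;

            o.col = _Color * i.bright;
            return o;
        }

    Running the distance through UNITY_MATRIX_P, rather than simply dividing by the far plane, keeps the written value consistent with the nonlinear (and, on DirectX, reversed) depth that all other geometry writes, so mixed content still sorts correctly. Be aware that writing SV_Depth at all disables early depth testing, which has a performance cost.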