
(MonoGame/HLSL) Problems with shadow mapping - shadow dependent on camera position


I've been banging my head against this problem for quite a while now and have finally realized that I need serious help...

Basically, I wanted to implement proper shadows in the project I'm writing in MonoGame. For this I wrote a deferred shader in HLSL, following multiple tutorials, mainly ones written for old XNA. The problem is that although my lighting and shadows work for a spotlight, the light on the floor of my scene is very dependent on my camera position, as you can see in the images: https://i.sstatic.net/TkDdK.jpg

I tried many different things to solve this problem:

  1. A bigger DepthBias widens the "shadow-free" radius at the cost of massive peter panning, and the described issue is not fixed at all.
  2. One paper suggested using an exponential shadow map, but I didn't like the results at all, as the light bleeding was unbearable and smaller shadows (like the one behind the torch on the wall) would not get rendered.
  3. I switched my GBuffer DepthMap to 1 - z/w to get more precision (see the sketch after this list), but that did not fix the problem either.
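
The 1 - z/w variant from attempt 3 looked roughly like this (a simplified sketch; the function name and semantics are placeholders):

    // G-buffer depth write using 1 - z/w ("reversed" depth). Float precision
    // clusters near 0, so mapping the far plane towards 0 can recover some
    // precision at distance.
    float GBufferDepthPS(float4 ScreenPosition : TEXCOORD0) : SV_TARGET0
    {
        return 1.0f - ScreenPosition.z / ScreenPosition.w;
    }

    // On the read side the value has to be flipped back before the
    // InverseViewProjection transform:
    //     Position.z = 1.0f - DepthMap.Sample(SampleTypeClamp, UV).x;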

I am using a

    new RenderTarget2D(device, Width, Height, false,
        SurfaceFormat.Vector2, DepthFormat.Depth24Stencil8)

to store the depth from the light's perspective.
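
Roughly, the pass that fills this target looks like the following simplified sketch (the struct and function names are placeholders; as described below, the map stores length(lightPos - Position) scaled by the far plane):

    // Simplified sketch of the shadow-map write pass (names are placeholders).
    // Each pixel stores its distance from the light, normalized to [0, 1] by
    // the light's far plane -- this matches the linear depth model used below.
    struct DepthVSO
    {
        float4 Position : SV_POSITION;
        float3 WorldPos : TEXCOORD0;
    };

    float4 DepthPS(DepthVSO input) : SV_TARGET0
    {
        float depth = length(LightPosition.xyz - input.WorldPos) / LightFarPlane;
        return float4(depth, 0.0f, 0.0f, 0.0f); // only .r is read by the lighting pass
    }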

I calculate the shadow using this pixel shader function. Note that I want to adapt this shader to point lights in the future; that's why I'm simply using length(LightPos - PixelPos). SpotLight.fx - PixelShader:

float4 PS(VSO input) : SV_TARGET0
{
    // Fancy lighting equations

    input.ScreenPosition.xy /= input.ScreenPosition.w;
    float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1) - float2(1.0f / GBufferTextureSize.xy);

    // Sample depth from the DepthMap
    float Depth = DepthMap.Sample(SampleTypeClamp, UV).x;

    // Rebuild the pixel position in clip space
    float4 Position = 1.0f;
    Position.xy = input.ScreenPosition.xy;
    Position.z = Depth;

    // Transform Position to world space
    Position = mul(Position, InverseViewProjection);
    Position /= Position.w;

    float4 LightScreenPos = mul(Position, LightViewProjection);
    LightScreenPos /= LightScreenPos.w;

    // Calculate projected UV from the light's POV -> ScreenPos is in [-1;1] space
    float2 LightUV = 0.5f * (float2(LightScreenPos.x, -LightScreenPos.y) + 1.0f);
    float lightDepth = ShadowMap.Sample(SampleDot, LightUV).r;

    // Linear depth model
    float closestDepth = lightDepth * LightFarPlane; // depth is stored in [0, 1]; bring it to [0, farplane]
    float currentDepth = length(LightPosition.xyz - Position.xyz) - DepthBias;
    float ShadowFactor = step(currentDepth, closestDepth); // 1 (lit) when currentDepth <= closestDepth, 0 (shadowed) otherwise

    float4 phong = Phong(...);
    return ShadowFactor * phong;
}

LightViewProjection is simply light.View * light.Projection, and InverseViewProjection is Matrix.Invert(camera.View * camera.Projection). Phong() is a function I call to finalize the lighting. The light's depth map simply stores length(lightPos - Position), normalized to [0, 1] by the light's far plane.
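
For reference, the shader-side declarations the function above relies on look roughly like this (a sketch; the texture and sampler names match the code above, but the actual cbuffer layout is an assumption):

    Texture2D DepthMap;
    Texture2D ShadowMap;
    SamplerState SampleTypeClamp;
    SamplerState SampleDot;

    cbuffer LightParameters
    {
        float4x4 InverseViewProjection; // Matrix.Invert(camera.View * camera.Projection)
        float4x4 LightViewProjection;   // light.View * light.Projection
        float4   LightPosition;
        float2   GBufferTextureSize;
        float    LightFarPlane;
        float    DepthBias;
    };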

I'd like to get rid of the artifact shown in the pictures so that I can adapt the code to point lights as well.
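
For that point-light version, the distance-based depth should carry over to a cube map. A hypothetical lookup (ShadowCube is a placeholder for a TextureCube holding the six light-space depth renders) would be:

    // Hypothetical point-light lookup: the direction from light to pixel
    // selects the cube face, and the distance comparison stays identical
    // to the spotlight case above.
    float3 lightToPixel = Position.xyz - LightPosition.xyz;
    float closestDepth = ShadowCube.Sample(SampleDot, lightToPixel).r * LightFarPlane;
    float currentDepth = length(lightToPixel) - DepthBias;
    float ShadowFactor = step(currentDepth, closestDepth);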

Could this be a problem with the way I retrieve the world position from screen space, or does my depth have too low a resolution?

Help is much appreciated!

--- Update ---

I changed my lighting shader to display the difference between the distance stored in the shadow map and the distance calculated on the spot in the pixel shader:

float4 PixelShaderFct(...) : SV_TARGET0
{
    // Get Depth from Texture

    float4 Position = 1.0f;
    Position.xy = input.ScreenPosition.xy;
    Position.z = Depth;
    Position = mul(Position, InverseViewProjection);
    Position /= Position.w; 

    float4 LightScreenPos = mul(Position, LightViewProjection);
    LightScreenPos /= LightScreenPos.w; 

    // Calculate Projected UV from Light POV -> ScreenPos is in [-1;1] Space
    float2 LUV = 0.5f * (float2(LightScreenPos.x, -LightScreenPos.y) + 1.0f);
    float lightZ = ShadowMap.Sample(SampleDot, LUV).r;

    float Attenuation = AttenuationMap.Sample(SampleType, LUV).r;

    float ShadowFactor = 1;
    // Linear depth model; lightZ stores length(LightPos - Pos) / LightFarPlane
    float closestDepth = lightZ * LightFarPlane;
    float currentDepth = length(LightPosition.xyz - Position.xyz) - DepthBias;
    return (closestDepth - currentDepth);
}

As I am basically outputting Length - (Length - Bias), one would expect an image with "DepthBias" as its uniform color. But that is not the result I'm getting here:

https://i.sstatic.net/TQNIF.jpg

Based on this result, I'm assuming that I either have precision issues (which I find weird, given that I'm working with near and far planes of [0.1, 50]), or that something is wrong with the way I'm recovering the world position of a given pixel from my DepthMap.


Solution

  • I finally found the solution, and I'm posting it here in case someone stumbles across the same issue:

    The tutorial I used was written for XNA / DX9, but as I'm targeting DX10+, a tiny change needs to be made:

    In XNA / DX9, the UV coordinates are not aligned with the actual pixels and need to be offset. That is what the - float2(1.0f / GBufferTextureSize.xy) term in float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1) - float2(1.0f / GBufferTextureSize.xy); was for. This is NOT needed in DX10 and above, and keeping it results in the issue I had.

    Solution:

    UV Coordinates for a Fullscreen Quad:
    For XNA / DX9:

    input.ScreenPosition.xy /= input.ScreenPosition.w;
    float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1) - float2(1.0f / GBufferTextureSize.xy);
    

    For MonoGame / DX10+:

    input.ScreenPosition.xy /= input.ScreenPosition.w;
    float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);