Tags: floating-point, directx-11, hlsl, pixel-shader

How do pixel values behave in DirectX 11 HLSL shaders?


When sampling texel values in a pixel shader, the sampler always returns a float4. However, the texture itself may use any of a wide range of formats defined by DXGI_FORMAT. It seems fairly straightforward that any of the _UNORM formats guarantee that all of the values in that float4 will be between 0 and 1. Back in the DirectX 9 days, it was generally assumed that, regardless of the pixel format, all sampled values would be between 0 and 1.
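To make the _UNORM convention concrete, here is a minimal sketch (plain Python, not the DirectX API): an n-bit unsigned integer texel is mapped to a float in [0, 1] by dividing by 2^n − 1, which is why sampling a UNORM texture can never return values outside that range.

```python
def unorm_to_float(stored: int, bits: int) -> float:
    """Decode an n-bit UNORM texel to the float the sampler returns."""
    return stored / float((1 << bits) - 1)

def float_to_unorm(value: float, bits: int) -> int:
    """Encode a float into n-bit UNORM storage: clamp to [0, 1], scale, round."""
    clamped = min(max(value, 0.0), 1.0)
    return round(clamped * ((1 << bits) - 1))

print(unorm_to_float(255, 8))   # 1.0 -- the maximum 8-bit texel samples as 1.0
print(float_to_unorm(65.0, 8))  # 255 -- out-of-range values clamp on write
```

The same mapping applies at any bit depth (8, 10, 16), only the divisor changes.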

This does not seem to be the case with DirectX 11. A texture using the DXGI_FORMAT_R32_FLOAT format, for example, can store any valid 32-bit float, which makes sense in general because you may not be using that texture (or buffer) for rendering at all.
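To illustrate why R32_FLOAT has no inherent 0..1 range: the texel is simply the raw 32-bit IEEE-754 bit pattern, stored and read back bit-for-bit. A small sketch of that storage model (plain Python, not the DirectX API):

```python
import struct

def store_r32_float(value: float) -> bytes:
    # An R32_FLOAT texel is just the raw little-endian IEEE-754 bit pattern.
    return struct.pack('<f', value)

def load_r32_float(texel: bytes) -> float:
    # Reading the texel back returns the same value, unclamped.
    return struct.unpack('<f', texel)[0]

# Any value exactly representable as a 32-bit float round-trips unchanged.
for v in (65.0, -1234.5, 0.25):
    assert load_r32_float(store_r32_float(v)) == v
```

Nothing in the format itself implies a "maximum intensity"; any interpretation of the range happens later in the pipeline or in tooling.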

So how does the rendering pipeline decide what pixel value to output when a format like R32_FLOAT has such an arbitrary range, if it is not using 0 to 1? It doesn't seem to be -FLT_MAX to +FLT_MAX: I can render a texture of this type containing values between 0.0 and 65.0, and I do see red in the final result. But when debugging the pixel shader and inspecting that source texture, only values that get very close to 65.0 show as red, while the final rendered result on the back buffer has lots of red in it.

Here is a sample source texture, as shown in the VS graphics debugger:

Source R32_Float Texture

If I render it to the screen just using a basic sampler output for the pixel shader, I get this:

Resulting Image

The back-buffer format was DXGI_FORMAT_R10G10B10A2_UNORM.

So how does it decide what "maximum intensity" is for a floating point texture? Similarly, if you used one of the _SINT formats, how does it deal with that?


Solution

    • If you sample a texture, the returned value is not clamped to any specific range. If the texture format can contain values that are outside the 0..1 range, the Sample/Load methods will return those values unchanged.
    • When your pixel shader returns a value, that value is clamped to whatever range your render target supports. If you render to an R32_FLOAT target, your rendered values are not clamped, and the 'valid range' is indeed -FLT_MAX to +FLT_MAX. The VS Graphics Debugger then simply displays the texture in a range based on the lowest and highest values actually stored in it, so you can view the texture in a meaningful way. Your back-buffer format, on the other hand, is a UNORM format, so it can only contain values in the 0..1 range, meaning every value greater than 1 is clamped to 1, which is why so much of your screen is red.
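The clamping described above can be mimicked in a few lines. This is a sketch of what the output merger conceptually does for a UNORM render target (the helper name is hypothetical, not a D3D API; the 10-bit depth matches the question's back-buffer color channels):

```python
def write_to_unorm_target(shader_output: float, bits: int = 10) -> float:
    """Mimic writing a pixel-shader output to an n-bit UNORM render target:
    clamp to [0, 1], then quantize to the channel's bit depth."""
    clamped = min(max(shader_output, 0.0), 1.0)
    max_int = (1 << bits) - 1
    return round(clamped * max_int) / max_int

print(write_to_unorm_target(65.0))   # 1.0 -- everything >= 1 renders as full red
print(write_to_unorm_target(-3.0))   # 0.0 -- negative values clamp to zero
```

This is why the back buffer shows red everywhere the source texture holds a value of 1.0 or more, while the debugger (which normalizes to the actual min/max) highlights only values near 65.0.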