Tags: directx, gpgpu, hlsl, gpu, compute-shader

Compute Shaders: Why return float4?


Edited question:

I have an ID3D11Texture2D and an ID3D11UnorderedAccessView with format DXGI_FORMAT_R8G8B8A8_UNORM, and the shader:

RWTexture2D<float4> tex : register(u0);

[numthreads(32, 32, 1)]
void main(uint3 dtid : SV_DispatchThreadID)
{
    float r;
    // ... r is computed here ...
    tex[dtid.xy] = float4(r, 0.0f, 0.0f, 0.0f);
}
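
For reference, the host-side setup looks roughly like this. The helper name, size parameters, and bind flags here are placeholders rather than my exact code:

#include <d3d11.h>

// Sketch: create a 2D texture and a UAV over it with the
// DXGI_FORMAT_R8G8B8A8_UNORM format from the question.
HRESULT CreateUnormTextureAndUav(ID3D11Device* device, UINT width, UINT height,
                                 ID3D11Texture2D** texture,
                                 ID3D11UnorderedAccessView** uav)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;

    HRESULT hr = device->CreateTexture2D(&desc, nullptr, texture);
    if (FAILED(hr))
        return hr;

    // The UAV format matches the texture format, so the shader must
    // declare RWTexture2D<float4>: UNORM components read/write as float.
    D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
    uavDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    uavDesc.ViewDimension = D3D11_UAV_DIMENSION_TEXTURE2D;
    uavDesc.Texture2D.MipSlice = 0;

    return device->CreateUnorderedAccessView(*texture, &uavDesc, uav);
}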

I assume we must ultimately be writing 8-bit UNORM values.

Does this mean there is a type conversion from 32-bit floats to 8-bit UNORMs?
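
My understanding of the Direct3D conversion rules is that each FLOAT component is clamped to [0, 1], scaled by 255, and rounded when stored. A minimal C++ sketch of that rule (the helper name is mine, and the exact rounding tie-break is left to the implementation within a small tolerance):

#include <cstdint>

// Sketch of the FLOAT -> 8-bit UNORM conversion: clamp to [0, 1],
// scale by 2^8 - 1 = 255, and round to the nearest integer.
uint8_t FloatToUnorm8(float f)
{
    if (!(f > 0.0f)) f = 0.0f;  // NaN and negative values clamp to 0
    if (f > 1.0f)    f = 1.0f;  // values above 1 clamp to 1
    return static_cast<uint8_t>(f * 255.0f + 0.5f);
}

// e.g. FloatToUnorm8(1.0f) == 255, FloatToUnorm8(0.5f) == 128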

Original question:

I originally tried the shader

RWTexture2D<uint> tex : register(u0);

[numthreads(32, 32, 1)]
void main(uint3 DTid : SV_DispatchThreadID)
{
    // I first tried 0xFF << 24 + 0xFF and got white, before realizing
    // the two expressions are not the same (see the note below).
    tex[DTid.xy] = 0xFF0000FF;
}
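
To spell out the aside in that comment: + binds tighter than << in HLSL, just as in C and C++, so the two expressions parse differently. A quick C++ illustration:

#include <cstdio>

int main()
{
    // 0xFF << 24 + 0xFF parses as 0xFF << (24 + 0xFF), which is not
    // the packed color I wanted. Parenthesizing the shift gives the
    // intended value:
    unsigned int color = (0xFFu << 24) + 0xFFu;
    std::printf("0x%08X\n", color);  // prints 0xFF0000FF
    return 0;
}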

But then I realized I was getting this error:

The resource return type for component 0 declared in the shader code (UINT) is not compatible with the resource type bound to Unordered Access View slot 0 of the Compute Shader unit (UNORM).

I thought UNORM was incompatible with UINT, and changed my question accordingly. But Chuck's answer made me realize I was wrong to do so.

How do I write a shader that sets the bits in memory without any type conversions?


Solution

  • A shader writing to a DXGI_FORMAT_R8G8B8A8_UNORM resource is supposed to write a float4.

    I really wasn't expecting this. However, if you consider that GPUs are designed for 32-bit computation and the display has only 32 bits backing each pixel, it makes sense to have a built-in converter from float4 to DXGI_FORMAT_R8G8B8A8_UNORM.

    This has also tripped others up:

    Why pixel shader returns float4 when the back buffer format is DXGI_FORMAT_B8G8R8A8_UNORM?
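
    One way to see the converter in action is to read the texture back after the dispatch and inspect the raw bytes. A rough sketch, assuming a device, context, and the texture from the question (the function name is mine, and error handling is omitted):

    #include <d3d11.h>
    #include <cstdio>

    // Copies the UNORM texture to a staging texture and prints the first
    // pixel's raw bytes. If the shader wrote float4(1, 0, 0, 1), this
    // should print FF 00 00 FF (R, G, B, A), which read back as a
    // little-endian uint is exactly the 0xFF0000FF from the original
    // question.
    void DumpFirstPixel(ID3D11Device* device, ID3D11DeviceContext* context,
                        ID3D11Texture2D* texture)
    {
        D3D11_TEXTURE2D_DESC desc;
        texture->GetDesc(&desc);
        desc.Usage = D3D11_USAGE_STAGING;
        desc.BindFlags = 0;
        desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
        desc.MiscFlags = 0;

        ID3D11Texture2D* staging = nullptr;
        device->CreateTexture2D(&desc, nullptr, &staging);
        context->CopyResource(staging, texture);

        D3D11_MAPPED_SUBRESOURCE mapped;
        context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
        const unsigned char* bytes =
            static_cast<const unsigned char*>(mapped.pData);
        std::printf("%02X %02X %02X %02X\n",
                    bytes[0], bytes[1], bytes[2], bytes[3]);
        context->Unmap(staging, 0);
        staging->Release();
    }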