Tags: unity-game-engine, shader, data-conversion, hlsl

HLSL - asuint of a float seems to return the wrong value


I've been attempting to encode four 8-bit uints into a single float so that I can easily store them in a texture along with a depth value. My code wasn't working, and ultimately I found that the issue boiled down to this:

asuint(asfloat(uint(x))) returns 0 in most cases, when it should return x.

In theory, this expression should return x (where x is a whole number), because the bits of x are reinterpreted as a float and then reinterpreted back as a uint, so the same bit pattern should come out unchanged. In practice, the only case where it seems to return x is when the bits of x happen to form the bit pattern of a very large float. I considered the possibility that this could be a graphics driver issue, so I tried it on two different computers and got the same result on both.
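For context, here is roughly what I am doing, reduced to a sketch (the helper names and the byte layout are made up for this post, not my exact code):

    // Pack four 8-bit values into one uint, then reinterpret the bits as a float
    // so the result can be written into a float texture channel.
    float PackBytes(uint4 b)                  // each component assumed to be in [0, 255]
    {
        uint bits = (b.w << 24) | (b.z << 16) | (b.y << 8) | b.x;
        return asfloat(bits);                 // bit reinterpretation, not a numeric conversion
    }

    // The round trip on its own is already where things go wrong:
    uint RoundTrip(uint x)
    {
        return asuint(asfloat(x));            // should return x, but comes back as 0 for most x
    }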

I tested several other variations of this code, and all of the following seem to work correctly:

asfloat(asuint(float(x))) = x

asuint(asint(uint(x))) = x

asuint(uint(x)) = x

The only case that does not work as intended is the first case mentioned in this post. Is this a bug, or am I doing something wrong? Also, this code is being run in a fragment shader inside of Unity.
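For completeness, here are those working variations as I tested them, with my guess at the relevant difference noted in the comments (treat the explanations as my interpretation, not an authoritative answer):

    void Variations(int x)                   // x is a whole number, as above
    {
        float a = asfloat(asuint(float(x))); // numeric conversion first: float(x) is 0 or a
                                             // normalized float, so the round trip is safe
        uint  b = asuint(asint(uint(x)));    // int <-> uint bit casts only, no float value involved
        uint  c = asuint(uint(x));           // asuint of a uint leaves the bits untouched
        // Only asuint(asfloat(uint(x))), which treats an arbitrary uint bit pattern
        // as a float value, misbehaves.
    }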


Solution

  • After a long time of searching, I found some sort of answer, so I figured I would post it here in case anyone else stumbles across this problem. The reason this code does not work has something to do with float denormalization (I don't completely understand it). The upshot is that denormalized floats end up being treated as 0, so asuint of a denormalized float always returns 0.

    A somewhat acceptable workaround is (asuint(asfloat(x | 1073741824)) & 3221225471). ORing in 1073741824 (0x40000000) sets bit 30, which guarantees the float is normalized, and masking with 3221225471 (0xBFFFFFFF) clears that bit again afterwards. The downside is that any data stored in bit 30 is erased. If anyone has a solution that preserves that bit, let me know!
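    Split into pack/unpack helpers, the workaround looks roughly like this (the function names are made up for this post; the two magic numbers are just bit 30 set and bit 30 cleared, written in hex):

        // Force bit 30 on before reinterpreting, so the exponent field can never be
        // all zeros (the float can never be denormalized), then mask the bit back
        // off after asuint. Whatever was stored in bit 30 is lost.
        float PackBitsSafe(uint bits)
        {
            return asfloat(bits | 0x40000000u);   // 0x40000000 == 1073741824 (bit 30 set)
        }

        uint UnpackBitsSafe(float f)
        {
            return asuint(f) & 0xBFFFFFFFu;       // 0xBFFFFFFF == 3221225471 (all bits except 30)
        }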