Tags: ios, precision, metal

Why do I have big precision errors on GPU?


I am doing a series of calculations on the GPU that require reasonably good precision, but I seem to be getting much lower precision than when using float on the CPU.

For starters, when I load a value of 0.01 into a float buffer, it shows up as 0.009995 in the shader. Why is that? I would think 0.01 is well within range for float vectors (I am using the simd library available for Metal).
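As a quick sanity check (plain C++ on the CPU, not Metal), the nearest float to 0.01 is off by only about 2e-10, so a value like 0.009995 is far too wrong to be explained by single-precision rounding alone:

#include <cstdio>

int main() {
    // 0.01 is not exactly representable in binary floating point, but the
    // nearest float is about 0.0099999998 -- an error of roughly 2e-10,
    // nowhere near the 0.000005 discrepancy seen in the shader.
    float f = 0.01f;
    printf("%.10f\n", (double)f);   // prints 0.0099999998
    return 0;
}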

Then, when doing a simple operation like this, the precision gets visibly worse:

simd::float4 p = simd::float4 { -0.04, -0.07, 0, 1 };
simd::float4 v = myMatrix * p;
v *= 1.0 / v.w;

p in the example is what I expect and use in the CPU test; on the GPU it is computed as { -0.039978, -0.069946, 0.0, 1.0 }, the result of one integer subtraction and one float multiplication by the already-wrong 0.009995.

What I would expect v to be is { -0.010627, 0.006991, -0.034100 } (calculated with the simd library on the CPU; already worse than the double-precision result { -0.010613, 0.006982, -0.034056 }, but bearable).

What I get instead is { -0.010483, 0.006405, -0.044067 }. This gets much worse with subsequent operations and the result quickly becomes unusable.
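For reference, this is roughly how the CPU float-vs-double comparison above is produced. Since the actual values of myMatrix are not shown in this question, the sketch uses an identity matrix as a stand-in, so the printed numbers will not match the ones quoted; only the structure of the comparison matters:

#include <simd/simd.h>
#include <cstdio>

int main() {
    // Stand-in matrix: the real myMatrix values are not included in the
    // question, so this only illustrates the float-vs-double comparison.
    simd::float4x4 myMatrix = matrix_identity_float4x4;

    // Single-precision reference, same math as the shader code above.
    simd::float4 p = simd::float4{ -0.04f, -0.07f, 0.0f, 1.0f };
    simd::float4 v = simd_mul(myMatrix, p);   // equivalent to myMatrix * p
    v *= 1.0f / v.w;

    // Double-precision reference, converting the matrix element by element.
    simd::double4x4 m64;
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r)
            m64.columns[c][r] = (double)myMatrix.columns[c][r];

    simd::double4 p64 = simd::double4{ -0.04, -0.07, 0.0, 1.0 };
    simd::double4 v64 = simd_mul(m64, p64);
    v64 *= 1.0 / v64.w;

    printf("float : %f %f %f\n", v.x, v.y, v.z);
    printf("double: %f %f %f\n", v64.x, v64.y, v64.z);
    return 0;
}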

Why is the result so different even though I am using the same precision, and why is the float data not loaded 1:1? I tried disabling the fast-math option for Metal, but it didn't change anything.


Solution

  • Alas, it was not a precision issue: the way I set up the test wasn't correct, so the GPU wasn't actually using the numbers I thought it was. A quick way to check what the GPU really receives is sketched below.
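One way to catch this kind of setup mistake is to echo the input buffer straight back from the GPU and compare it on the CPU. This is only a sketch; the kernel name and buffer indices are made up and not part of the original pipeline:

#include <metal_stdlib>
using namespace metal;

// Debug kernel: copies the input float4s back out untouched, so the CPU can
// read the output buffer afterwards and see exactly which values the GPU
// received (dispatch one thread per element).
kernel void debug_echo(const device float4 *in   [[buffer(0)]],
                       device float4       *out  [[buffer(1)]],
                       uint                 gid  [[thread_position_in_grid]])
{
    out[gid] = in[gid];
}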