Tags: android, opengl-es, glsl, gles

GLSL compiler optimizations lead to incorrect behavior with floating point operations


I am part of a team writing an Android application using OpenGL. We have a good bit of shader code emulating double-precision math using floats. (Specifically, we implemented the algorithms in Andrew Thall's Extended-Precision Floating-Point Numbers for GPU Computation.) It works well in the DirectX version of the application, but I've found that on Android, the GLSL compiler is optimizing some of the code in a way that should preserve behavior algebraically, but in practice changes it, because the optimizations discard floating-point rounding error. For example, in the following:

vec2 add(float a, float b) {
    float sum = a + b;
    float err = b - (sum - a); // rounding error lost by a + b; algebraically zero
    return vec2(sum, err);
}

the error value err gets simplified to 0 by the compiler, since that's true algebraically, but of course that is not always the case once floating-point rounding is taken into account.

I tried "#pragma optimize (off)", but it's not standard and had no effect. The only hack I've found that works is to create a "zero" uniform float that remains set to 0 and add that to the offending values in strategic places, so a working version of the above function would be:

vec2 add(float a, float b) {
    float sum = a + b;
    sum += zero;
    float err = b - (sum - a);
    return vec2(sum, err);
}

This is obviously not ideal: 1) it's a PITA to track down where it's necessary, and 2) it's compiler-dependent. Another compiler may not need it, and yet another could conceivably still optimize the err value down to zero. Is there a "correct" way to solve this problem and make sure the GLSL compiler doesn't optimize away actual behavior?

Edit:

While the technical answer looks to remain "no", I've found a better work-around and wanted to document it here. The "zero" uniform method did indeed start to fail with more complicated expressions/chained operations. The workaround I found was to create two functions for addition and subtraction:

float plus_frc(float a, float b) {
    return mix(a, a + b, float(b != 0.0));
}

float minus_frc(float a, float b) {
    return mix(0.0, a - b, float(a != b));
}

(The "frc" stands for both "force" and "farce", because you're forcing the operation, but the necessity is idiotic.) These replicate the functionality of (a + b) and (a - b), respectively, but in a way the compiler shouldn't be able to optimize away; they avoid branching and use a fast built-in to do the work. So the above error-preserving "add" function becomes:

vec2 add(float a, float b) {
    float sum = plus_frc(a, b);
    float err = b - (sum - a);
    return vec2(sum, err);
}

Note that we do not always need to use the "frc" functions (e.g. in the expression that computes err), only in the places where the compiler could otherwise make breaking optimizations.


Solution

  • No. There is no binding way to control optimizations in GLSL. If the compiler feels that it's reasonable to assume that your error term is zero, then it will be zero.