Ignoring coding patterns and code clarity/quality:
This is a question where I just don't know whether the practice is inherently bad or inconsequential. I can't find enough about the inner workings of assigning to, say, gl_FragColor in WebGL 1.0 or to an out variable in WebGL 2 (layout(location = 0) out vec4 color) to tell either way.
Is there some inherent additional performance cost for doing something like:
void main() {
    gl_FragColor = vec4(0., 0., 0., 1.);
    vec4 val = gl_FragColor * something;
    ...
    gl_FragColor = val;
}
Or is it better to work entirely with intermediate local variables and assign to the output only once at the end?
void main() {
    vec4 thing = vec4(0., 0., 0., 1.);
    vec4 val = thing * something;
    ...
    gl_FragColor = val;
}
I'm only guessing, but the answer is "no": there is no consequence. The driver can do whatever it wants or needs to do to produce the correct result. If gl_FragColor is special, the driver can substitute its own temporary. GPU vendors compete on performance, and shaders are translated into the GPU's own instruction set, so it's unlikely gl_FragColor is special except as a way to tell the compiler which value to actually output once all the computation is done.
I do it all the time, and so does three.js, to give one example.