I found two ways to calculate the fog coordinate in the vertex shader:
#version 300 es

uniform mat4 u_mvMatrix;
in vec4 a_position;
smooth out float v_fog_factor;

const float startFog = 10.0;
const float endFog = 140.0;

// linear fog: 0.0 = no fog (closer than startFog), 1.0 = full fog (beyond endFog)
float getFogFactor(float fogCoord) {
    float factor = (endFog - fogCoord) / (endFog - startFog);
    factor = 1.0 - clamp(factor, 0.0, 1.0);
    return factor;
}
void main() {
    vec4 v_eye_space_pos = u_mvMatrix * a_position;

    // VARIANT I: use the eye-space z coordinate
    float fogCoord = abs(v_eye_space_pos.z / v_eye_space_pos.w);

    // VARIANT II: use the distance from the eye (only one variant active at a time)
    // float fogCoord = length(v_eye_space_pos.xyz);

    v_fog_factor = getFogFactor(fogCoord);
    ...
}
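For completeness, here is a minimal fragment-shader sketch of how the interpolated v_fog_factor might be applied (u_fogColor and baseColor are placeholders, not part of the original shaders):

#version 300 es
precision mediump float;

uniform vec4 u_fogColor;        // hypothetical fog color uniform

smooth in float v_fog_factor;   // 0.0 = no fog, 1.0 = full fog

out vec4 fragColor;

void main() {
    vec4 baseColor = vec4(0.3, 0.6, 0.3, 1.0); // placeholder surface color
    // blend the surface color toward the fog color by the fog amount
    fragColor = mix(baseColor, u_fogColor, v_fog_factor);
}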
I did not notice any visual difference between them.
Question: Is there a difference between these two variants? If not, which one is better to use in terms of performance? Thanks in advance!
Found in the literature (Randi J. Rost, OpenGL): the first variant approximates the depth value with the absolute value of the z coordinate in eye space. With a very wide field of view, this approximation can produce a noticeable artifact (a slight fog toward the edges of the image). In that case the fog coordinate can be computed as the distance from the viewpoint to the fragment (the second variant). This method involves computing a square root, which slightly degrades performance.
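A rough numeric illustration of that edge effect (the eye-space positions below are made up, assuming w = 1.0 after the model-view transform):

straight ahead:       p = (  0.0, 0.0, -50.0 )
    VARIANT I:  abs(p.z)      = 50.0
    VARIANT II: length(p.xyz) = 50.0                      (identical fog)

edge of a wide FOV:   p = ( 60.0, 0.0, -50.0 )
    VARIANT I:  abs(p.z)      = 50.0
    VARIANT II: length(p.xyz) = sqrt(60^2 + 50^2) ≈ 78.1  (noticeably more fog)

So the two variants only diverge toward the edges of the view, which matches the description above.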