
Why does a fragment shader that colors an image based on the fragment coordinate produce saturated colors?


The following code colors an image based on its fragment positions:

Vertex shader:

varying vec2 vXY;   
void main(void) {
    vXY = position.xy;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

Fragment shader:

precision mediump float;
varying vec2 vXY;       
void main(void) {   
    vec4 color = vec4(0.0, 0.0, 0.0, 1.0);
    color.x = vXY.x;
    color.y = vXY.y;            
    gl_FragColor = color;
}

It separates the image into 4 squares of color, as in the picture below:

[Image: the rendered quad split into four solid-color quadrants]

From this I can clearly see the Cartesian coordinate system, which is fine. What I don't understand is: why is the brightness of the colors constant, as if it were always 1.0?

From my understanding, since the (x, y) coordinates range from -1 to 1 over the real numbers, each square of color should get gradually brighter as the position changes, but this does not happen. Why?
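One piece of the puzzle, which can be mimicked on the CPU, is that WebGL clamps the components of `gl_FragColor` to [0, 1]. A hedged Python sketch (the helper names here are hypothetical, not part of the original shaders) of the fragment shader's color assignment shows why coordinates outside [0, 1] produce flat, saturated channels:

```python
# CPU-side mimic of the fragment shader's color assignment.
# WebGL clamps gl_FragColor components to [0, 1], so any coordinate
# at or beyond that range yields a fully saturated (or black) channel.

def clamp01(v):
    """Clamp a single component to the [0, 1] range, as WebGL does."""
    return max(0.0, min(1.0, v))

def frag_color(x, y):
    """Mirror of: color = vec4(x, y, 0.0, 1.0), clamped like gl_FragColor."""
    return (clamp01(x), clamp01(y), 0.0, 1.0)

print(frag_color(-0.5, 0.8))  # (0.0, 0.8, 0.0, 1.0): negative x clamps to 0
print(frag_color(2.0, 1.5))   # (1.0, 1.0, 0.0, 1.0): values above 1 saturate
```

So if the interpolated coordinates are large in magnitude, almost every fragment lands at a clamped extreme, giving four solid quadrants instead of gradients.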


Solution

  • The "position" attribute contains the original coordinates of the vertex specification; its components are only in the range [-1, 1] if you specified them in that range. The normalized device coordinates, by contrast, are always in the range [-1, 1]. You get the normalized device coordinate by dividing the xyz components of the clip-space coordinate (gl_Position) by its w component (the perspective divide):

    varying vec2 vXY;
    void main(void) {
        // clip-space position
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
        // perspective divide: normalized device coordinates, in [-1, 1]
        vXY = gl_Position.xy / gl_Position.w;
    }
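The perspective divide above is plain arithmetic, so it can be sketched outside the shader. A minimal Python illustration (the function name is hypothetical) of mapping a clip-space coordinate to normalized device coordinates:

```python
# Sketch of the perspective divide: clip space -> normalized device coordinates.
# A clip-space coordinate (x, y, z, w), such as gl_Position, maps to NDC by
# dividing the x, y, z components by w.

def perspective_divide(clip):
    """clip is (x, y, z, w); returns the NDC triple (x/w, y/w, z/w)."""
    x, y, z, w = clip
    return (x / w, y / w, z / w)

# Example: a visible clip-space point with w = 2.0
print(perspective_divide((1.0, -0.5, 0.5, 2.0)))  # (0.5, -0.25, 0.25)
```

For any point inside the view frustum, each resulting component lies in [-1, 1], which is exactly the range the question expected the coordinates to have.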