Tags: javascript, glsl, shader, webgl, vertex-shader

Accessing texture data in a vertex shader for instancing


No debugging needed, just trying to understand a couple of conceptual things.

I'm following this tutorial to create instanced particles based on a text texture's alpha values. I'm still new to shaders, so, as you may imagine, I have only ever used sampler2D and texture2D in the fragment shader with the desired UVs.

This tutorial is the first place I have seen a texture used in a vertex shader. I do understand that images cannot be 'displayed' on screen by sampling them only in the vertex stage; the author is just feeding in a bunch of random points between 0 and 1 through the texCoord attribute, filled from JavaScript as follows:

const num = 20000;
let texCoord = new Float32Array(num * 2);

for (let i = 0; i < num; i++) {

    // -0.5 to 0.5
    let x = (Math.random() * 2 - 1) * 0.5;
    let y = (Math.random() * 2 - 1) * 0.5;

    // This sets the pixel spots for texture picking in the vertex shader
    texCoord.set([
        x + 0.5, 
        y + 0.5
    ], i * 2);
}
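
For reference, a texCoord array like this would typically be attached as a per-instance attribute. Below is a minimal sketch, assuming Three.js (which the projectionMatrix/modelViewMatrix built-ins in the shader further down suggest); the geometry and variable names are illustrative, not from the tutorial:

const baseGeometry = new THREE.PlaneGeometry(1, 1);
const geometry = new THREE.InstancedBufferGeometry();
geometry.index = baseGeometry.index;
geometry.attributes.position = baseGeometry.attributes.position;
geometry.attributes.uv = baseGeometry.attributes.uv;

// 2 floats per instance: the attribute advances once per instance, not once
// per vertex, so every vertex of a given tiny plane sees the same texCoord.
geometry.setAttribute('texCoord', new THREE.InstancedBufferAttribute(texCoord, 2));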

I'd like to understand:

  1. How is a new coordinate system created for the image (a PNG with transparency) through the texCoord variable, just by storing a random xy value for each of the num particles? Those texCoord points have no background plane from which to take the 0-to-1 coordinates. And since the vertex shader runs per vertex, shouldn't texCoord map the 0-1 value onto each tiny instanced plane rather than onto the background?

  2. How does the "TEXT" image end up occupying the full screen on an imaginary background plane in the vertex shader (from which its alpha is extracted)? Is there some ability of sampler2D that I am missing?

I was expecting a single non-instanced plane to be created that occupies the whole scene, with the texture applied to it; only then could we extract its alpha values. I've spent hours researching and reading and have run out of articles and tutorials that could explain this.

Please explain in layman's terms. Thank you.

Vertex Shader:

// Per-instance attributes supplied from JavaScript
attribute vec3 offset;        // declared but unused in this shader
attribute vec2 texCoord;      // random 0-1 point, one per instance
attribute vec2 a_lineHeight;

// Uniform declarations (added for completeness; in Three.js, position, uv,
// projectionMatrix and modelViewMatrix are injected automatically)
uniform sampler2D u_tex1;     // the "TEXT" texture
uniform float u_lineHeight;
uniform float u_thickness;

// Varyings passed to the fragment shader
varying vec2 v_uv;
varying float v_alpha;

void main() {
    vec3 pos = position;
    v_uv = uv;

    // HERE the image is sampled using the per-instance texCoord point.
    float tex1 = step( .8, texture2D(u_tex1, texCoord).r );

    v_alpha = tex1;

    float scale = mix(a_lineHeight.x, a_lineHeight.y * 5., u_lineHeight);
    pos.y *= scale;
    pos.x *= u_thickness;

    // texCoord is reused here to place the instance on the plane.
    vec2 origin = texCoord * vec2(1., .5);

    float alpha = 1. - v_alpha;
    pos.xy += origin - vec2(0.5, 0.25) + alpha;

    gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}
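
One aside on the sampling itself: texture lookups in the vertex stage (vertex texture fetch) only work if the device exposes at least one vertex texture unit. A quick sketch of the check, assuming access to the raw WebGL context (renderer here is a placeholder for a Three.js WebGLRenderer):

const gl = renderer.getContext();
if (gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS) === 0) {
    console.warn('No vertex texture units: texture2D in the vertex shader will not work.');
}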

Fragment Shader:

precision highp float;

varying float v_alpha;
uniform float u_time;    // declared in the original, unused here

void main() {
    vec3 color = vec3(1., 1., 1.);
    gl_FragColor = vec4(color, v_alpha);
}
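
For v_alpha to actually hide the non-text particles, blending must be enabled on the material. A minimal sketch, again assuming Three.js; textTexture and the scalar uniform values are placeholders:

const material = new THREE.ShaderMaterial({
    vertexShader,
    fragmentShader,
    uniforms: {
        u_tex1:       { value: textTexture }, // the "TEXT" png
        u_time:       { value: 0 },
        u_lineHeight: { value: 0.5 },
        u_thickness:  { value: 0.01 },
    },
    transparent: true, // without this, v_alpha = 0 particles would still draw opaque
});
const mesh = new THREE.Mesh(geometry, material);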


[Image: close up of the instanced planes]


Solution

  • A texture lookup by itself has no effect on the position of the geometry; it only returns a value. The key is that texCoord is used not only to sample the texture, but also to calculate the position in clip space:

    vec2 origin = texCoord * vec2(1., .5); 
    ...
    pos.xy += origin - vec2(0.5, 0.25) + alpha;
    ...
    gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
    

    So in this case texCoord is used twice: once in texture2D to sample the texture, and once to compute the position in clip space. The sampled value then feeds back into the position as well: alpha = 1. - v_alpha shifts non-text particles away from the text plane, while v_alpha makes them fully transparent in the fragment shader.
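
    To make the dual role concrete, here is the same arithmetic traced on the CPU for one particle; the numbers are purely illustrative:

    // A particle whose random texCoord happens to land on a text pixel
    const texCoord = [0.25, 0.75];
    const v_alpha = 1.0;                     // step(.8, sample.r) on a text pixel

    // Position role: texCoord also picks the particle's spot on the plane
    const origin = [texCoord[0] * 1.0, texCoord[1] * 0.5];  // [0.25, 0.375]
    const shift  = 1.0 - v_alpha;                           // 0 -> stays in place
    const posXY  = [origin[0] - 0.5 + shift,                // -0.25
                    origin[1] - 0.25 + shift];              //  0.125

    // A non-text particle (v_alpha = 0) gets shift = 1, which pushes it far
    // off the text plane, and it is fully transparent anyway.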