Tags: javascript, glsl, webgl, fragment-shader, shadertoy

WebGL/GLSL - How does a ShaderToy work?


I've been knocking around Shadertoy - https://www.shadertoy.com/ - recently, in an effort to learn more about OpenGL and GLSL in particular.

From what I understand so far, the OpenGL user first has to prepare all the geometry to be used and configure the OpenGL server (number of lights allowed, texture storage, etc.). Once that's done, the user then has to provide at least one vertex shader and one fragment shader before the OpenGL program can be compiled and linked.

However, when I look at the code samples on Shadertoy, I only ever see one shader program, and most of the geometry used appears to be written directly into the GLSL code.

How does that work?

My guess is that a vertex shader is already prepared upfront, and that the editable/sample shader is only a fragment shader. But then that doesn't explain the geometry in some of the more complex examples...

Can anyone explain how Shadertoy works?


Solution

  • ShaderToy is a tool for writing pixel shaders.

    What are pixel shaders?

    If you render a fullscreen quad, meaning a quad whose four vertices are placed in the four corners of the viewport, then the fragment shader for that quad is called a pixel shader, because each fragment now corresponds to exactly one pixel of the screen. So a pixel shader is a fragment shader for a fullscreen quad.

    So the attributes are always the same, and so is the vertex shader:

    positions = [ [-1, 1], [ 1, 1], [-1,-1], [ 1,-1] ]
    uv        = [ [ 0, 1], [ 1, 1], [ 0, 0], [ 1, 0] ]
    

    And that quad is rendered as a TRIANGLE_STRIP. Also, instead of setting UVs explicitly, some prefer to use the fragment shader's built-in variable gl_FragCoord, which is then divided by, for example, a uniform vec2 uScreenResolution.
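    On the JavaScript side, the one-time quad setup can be sketched like this (a minimal sketch; the helper name `drawFullscreenQuad` and the assumption that `gl` is a WebGLRenderingContext and `program` an already-linked shader program are illustrative, not Shadertoy's actual internals):

```javascript
// Fullscreen quad: the four corners of clip space, in TRIANGLE_STRIP order.
const positions = new Float32Array([
  -1,  1,   // top-left
   1,  1,   // top-right
  -1, -1,   // bottom-left
   1, -1,   // bottom-right
]);

// Uploads the quad and issues the draw call. `gl` is a WebGLRenderingContext,
// `program` a compiled and linked program with an `aPos` attribute.
function drawFullscreenQuad(gl, program) {
  const buf = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buf);
  gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

  const loc = gl.getAttribLocation(program, "aPos");
  gl.enableVertexAttribArray(loc);
  gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

  gl.useProgram(program);
  gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); // 4 vertices -> 2 triangles
}
```

    With a strip, vertices 0-1-2 and 1-2-3 form the two triangles, so four vertices cover the whole viewport.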

    Vertex shader:

    attribute vec2 aPos;
    attribute vec2 aUV;
    varying vec2 vUV;
    
    void main() {
        gl_Position = vec4(aPos, 0.0, 1.0);
        vUV = aUV;
    }
    

    And fragment shader would then look something like this:

    precision mediump float;

    uniform vec2 uScreenResolution;
    varying vec2 vUV;

    void main() {
        // vUV is equal to gl_FragCoord.xy / uScreenResolution
        // do some pixel shader related work; someColor is a placeholder vec3
        gl_FragColor = vec4(someColor, 1.0);
    }
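    The equivalence noted in the comment is plain arithmetic, and can be sanity-checked outside GLSL. Here is the same normalization in JavaScript (the function name is illustrative):

```javascript
// Mirrors gl_FragCoord.xy / uScreenResolution from the fragment shader:
// maps a pixel coordinate (origin at the bottom-left in GL) to [0,1] UVs.
function uvFromFragCoord(fragCoord, resolution) {
  return [fragCoord[0] / resolution[0], fragCoord[1] / resolution[1]];
}

// The center pixel of an 800x600 canvas maps to (0.5, 0.5).
const center = uvFromFragCoord([400, 300], [800, 600]);
```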
    

    ShaderToy supplies a few uniforms by default, such as iResolution (the equivalent of uScreenResolution above), iGlobalTime, iMouse, and others, which you can use in your pixel shader.
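    For instance, a minimal Shadertoy-style shader using those uniforms could look like the following sketch. Shadertoy wraps your code and calls mainImage once per pixel; iGlobalTime is the elapsed time in seconds (it has since been renamed iTime on the site):

```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord / iResolution.xy;        // normalize pixel coords to [0,1]
    float pulse = 0.5 + 0.5 * sin(iGlobalTime);  // oscillates between 0 and 1
    fragColor = vec4(uv, pulse, 1.0);            // a color gradient that pulses over time
}
```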

    For coding geometry directly into the fragment shader (aka pixel shader), developers use techniques such as ray tracing and ray marching over signed distance functions. That is a fairly complex area of programming, but in short: you describe your geometry through mathematical formulas, and later, in the pixel shader, when you want to check whether some pixel is part of your geometry, you evaluate those formulas to retrieve that information. Googling a bit should give you plenty of resources on what ray tracers are and how they are built, and this might help: How to do ray tracing in modern OpenGL?
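    As a toy illustration of the idea (not Shadertoy's actual code), here is a sphere described purely by a distance formula, and a simple ray march against it. It is written in JavaScript for readability; in a real pixel shader the same logic would run in GLSL once per pixel, with the ray direction derived from the pixel's UV:

```javascript
// Signed distance from point p to a sphere of radius r centered at c:
// negative inside, zero on the surface, positive outside.
function sdSphere(p, c, r) {
  const dx = p[0] - c[0], dy = p[1] - c[1], dz = p[2] - c[2];
  return Math.sqrt(dx * dx + dy * dy + dz * dz) - r;
}

// March a ray from `origin` along the unit direction `dir`, each time
// stepping by the distance to the nearest surface. Returns the hit
// distance along the ray, or null if the ray misses the scene.
function rayMarch(origin, dir, scene) {
  let t = 0;
  for (let i = 0; i < 100; i++) {
    const p = [origin[0] + t * dir[0],
               origin[1] + t * dir[1],
               origin[2] + t * dir[2]];
    const d = scene(p);
    if (d < 1e-4) return t; // close enough to the surface: hit
    t += d;
    if (t > 100) break;     // marched too far: miss
  }
  return null;
}

// A ray shot from the origin straight down +z hits a unit sphere
// centered at z = 5 at distance t = 4.
const hit = rayMarch([0, 0, 0], [0, 0, 1], p => sdSphere(p, [0, 0, 5], 1));
```

    A pixel whose ray returns a hit gets shaded with the object's color; a miss gets the background. That is how "geometry" appears without any vertex data.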

    Hope this helps.