Tags: javascript, glsl, raytracing, webgl2

How can I send large arrays of objects to a fragment shader using WebGL2?


For a university assignment, I've written a raytracer in JavaScript using a Canvas with a 2D context. To make it faster, I'm converting it from a 2D context to a WebGL2 context.

In the original code, to handle meshes, I store Triangle data (vertex positions, vertex normals) in a large array, and Mesh data (first tri index, number of tris, material data) in another. I then loop through the arrays at render time to calculate ray intersections.
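
For illustration, the layout described above might look roughly like this (the names and values here are illustrative, not the actual assignment code):

```javascript
// Hypothetical sketch of the CPU-side scene layout described above.
// Flat triangle list: per-triangle vertex positions and normals.
const triangles = [
    {
        posA: { x: 0, y: 0, z: 0 }, posB: { x: 1, y: 0, z: 0 }, posC: { x: 0, y: 1, z: 0 },
        normalA: { x: 0, y: 0, z: 1 }, normalB: { x: 0, y: 0, z: 1 }, normalC: { x: 0, y: 0, z: 1 },
    },
];

// Per-mesh records index into the shared triangle list.
const meshes = [
    { firstTriIndex: 0, numTris: 1, material: { colour: [1, 1, 1] } },
];
```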

I would like to be able to do the same in my new implementation, but I haven't been able to wrap my head around it, as there is a limit to how many elements a Uniform array can hold.

Is there a way to send large arrays of objects like this to a fragment shader?

Currently, I am simply using a uniform array with a small size to store triangle data. I have not yet added meshes to my WebGL implementation.

I define my arrays in the frag shader like this:

uniform int NumSpheres;
uniform Sphere Spheres[MAX_SPHERES];

uniform int NumTris;
uniform Triangle Triangles[MAX_TRIS];

And calculate ray intersections with this function:

HitInfo TraceRay(Ray ray) 
{
    HitInfo closestHit;
    closestHit.dst = 1000000000000.0;

    for (int i = 0; i < MAX_SPHERES; i++)
    {
        if (i == NumSpheres) break;

        Sphere sphere = Spheres[i];
        HitInfo hitInfo = RaySphere(ray, sphere.position, sphere.radius);

        if (hitInfo.didHit && hitInfo.dst < closestHit.dst) 
        {
            closestHit = hitInfo;
            closestHit.material = sphere.material;
        }
    }

    for (int i = 0; i < MAX_TRIS; i++) 
    {
        if (i == NumTris) break;
        
        Triangle triangle = Triangles[i];
        HitInfo hitInfo = RayTriangle(ray, triangle);
        if (hitInfo.didHit && hitInfo.dst < closestHit.dst) 
        {
            closestHit = hitInfo;
            closestHit.material.colour = vec3(1, 1, 1);
        }
    }

    return closestHit;
}

I'm populating the uniform array of triangles with this code at the moment:

let trianglesLocations = [];
for (let i = 0; i < triangles.length; i++) {
    trianglesLocations.push({
        posA: gl.getUniformLocation(raytraceProgram, `Triangles[${i}].posA`),
        posB: gl.getUniformLocation(raytraceProgram, `Triangles[${i}].posB`),
        posC: gl.getUniformLocation(raytraceProgram, `Triangles[${i}].posC`),
        normalA: gl.getUniformLocation(raytraceProgram, `Triangles[${i}].normalA`),
        normalB: gl.getUniformLocation(raytraceProgram, `Triangles[${i}].normalB`),
        normalC: gl.getUniformLocation(raytraceProgram, `Triangles[${i}].normalC`),
    });
}

function setTriangles() {
    let numTrisLocation = gl.getUniformLocation(raytraceProgram, "NumTris");
    gl.uniform1i(numTrisLocation, triangles.length);
    const vec3 = p => [p.x, p.y, p.z];
    for (let i = 0; i < triangles.length; i++) {
        gl.uniform3fv(trianglesLocations[i].posA, vec3(triangles[i].posA));
        gl.uniform3fv(trianglesLocations[i].posB, vec3(triangles[i].posB));
        gl.uniform3fv(trianglesLocations[i].posC, vec3(triangles[i].posC));
        gl.uniform3fv(trianglesLocations[i].normalA, vec3(triangles[i].normalA));
        gl.uniform3fv(trianglesLocations[i].normalB, vec3(triangles[i].normalB));
        gl.uniform3fv(trianglesLocations[i].normalC, vec3(triangles[i].normalC));
    }
}

But the cap on uniform array size means this method cannot scale to several meshes that may each have more than 200 triangles.

This current code renders perfectly:

Render Result

I just need a way to handle potentially many triangles.


Solution

  • Uniform arrays are small (WebGL2 only guarantees a few hundred uniform vector slots in a fragment shader; query gl.MAX_FRAGMENT_UNIFORM_VECTORS for the actual limit), so you will hit that ceiling very fast.

    Using a texture to encode the data is the most common workaround for passing many vectors/normals or other values. An ordinary 8-bit texture stores 3 (RGB) or 4 (RGBA) components per texel, each an integer from 0 to 255. That range places strict limits on the data you can pass: 0-255 can be enough to encode normals (for bump mapping, for example), but it is not enough precision for vertex coordinates.
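
As a rough sketch of what 8-bit encoding costs in precision (plain JavaScript, hypothetical helper names): a normal component in [-1, 1] maps onto [0, 255] and back, so the round-trip error is bounded by one quantisation step, about 2/255:

```javascript
// Encode a normal component in [-1, 1] as one unsigned byte, the way an
// 8-bit RGB texture would store it, then decode it back.
function encodeByte(v) {
    return Math.round((v * 0.5 + 0.5) * 255); // [-1, 1] -> [0, 255]
}
function decodeByte(b) {
    return (b / 255) * 2 - 1; // [0, 255] -> [-1, 1]
}

const component = 0.7071; // e.g. one component of a 45-degree normal
const roundTripped = decodeByte(encodeByte(component));
// |roundTripped - component| stays below one step (2/255, roughly 0.008),
// acceptable for normals but far too coarse for positions.
```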

    To overcome this limitation, you can use 2 bytes per value. That gives you a range of 0-65535 but takes twice as much space in the texture. With this approach, x, y, and z together take 6 bytes, i.e. 1.5 RGBA texels. A 4096x4096 RGBA texture holds 4096 × 4096 × 4 = 67,108,864 bytes, enough for 11,184,810 such vertices, which is a lot! You can then use ordinary uniforms for supplementary details, such as how many vertices or triangles the texture contains. You can even pack a full 32-bit (4-byte) value into a single RGBA texel; at 3 texels per vertex that still leaves room for 5,592,405 vertices.
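
A minimal sketch of the 2-byte scheme in plain JavaScript (the helper names and the [-10, 10] coordinate range are illustrative assumptions, not part of the original code): each coordinate becomes 16-bit fixed point over a known scene range, so x, y, z take 6 bytes, i.e. 1.5 RGBA texels.

```javascript
// Quantise one coordinate to 16 bits over an assumed scene range, split
// into two bytes (as they would be written into an RGBA8 texture), and
// reverse the process. MIN/MAX are illustrative scene bounds.
const MIN = -10, MAX = 10;

function pack16(v) {
    const q = Math.round(((v - MIN) / (MAX - MIN)) * 65535); // 0..65535
    return [q >> 8, q & 0xff]; // high byte, low byte
}
function unpack16(hi, lo) {
    const q = hi * 256 + lo;
    return (q / 65535) * (MAX - MIN) + MIN;
}

// Capacity of a 4096x4096 RGBA8 texture at 6 bytes per vertex:
const capacity = Math.floor((4096 * 4096 * 4) / 6); // 11,184,810 vertices
```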

    The only downside is that you have to reassemble these values from the RGBA components into a single 32-bit number by multiplying/shifting, which adds per-fetch cost when reading a large number of vertices.
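
The reassembly step can be sketched like this (plain JavaScript mirroring what the shader would do with multiplies/shifts; helper names are illustrative):

```javascript
// Split a 32-bit unsigned value into four 0-255 RGBA components, and
// rebuild it by shifting, the same arithmetic a shader would perform.
function packRGBA(value) {
    return [
        (value >>> 24) & 0xff, // R
        (value >>> 16) & 0xff, // G
        (value >>> 8) & 0xff,  // B
        value & 0xff,          // A
    ];
}
function unpackRGBA(r, g, b, a) {
    // ">>> 0" keeps the result unsigned in JavaScript, since "|" would
    // otherwise yield a signed 32-bit integer.
    return (((r << 24) | (g << 16) | (b << 8) | a) >>> 0);
}
```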

    Please let me know if this helps.