Tags: javascript, three.js, webgl, webgl2

Overhang detection shader - how to return the coordinates of the vertices?


I'm trying to write a support-generation app that runs in the browser using three.js. I have tried many approaches and all of them were slow, so I've now decided to have a shader compute the overhang positions and let my program build supports at those points.

The overhang detection shader outputs the following: [image: overhang detection]

Now the problem is that I cannot figure out how to return those red areas to the CPU / main JavaScript app so it can generate simple supports at those points. I read somewhere here about a GPU-to-CPU approach involving an FBO, but I can't understand it. Is there any way to get the coordinates of the red areas back to the CPU?

I could also compute this in the vertex shader by moving every non-overhang vertex to (0, 0, 0), but the problem is that vertex positions on the JavaScript side don't update that way. If there is some way to read the updated vertex positions after the vertex shader runs, that could be a solution.

Maybe transform feedback? How can I use transform feedback from three.js?


Solution

  • If you just want the rendered image (like the one you've linked in the question), you can use THREE's wrapper around readPixels, readRenderTargetPixels. That will give you the pixel values of the image as an array, which you can iterate over to find the red areas. Also, since your fragment shader seems to make an essentially binary decision (black or red), you can use the other channels to store additional information, e.g. in the vertex shader:

    // vertex shader
    // note: three.js reserves `position` for the vertex attribute in
    // ShaderMaterial, so the varying needs a different name
    varying vec3 vPosition;

    void main(void) {
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
        // pass the clip-space position (after the perspective divide) along
        vPosition = gl_Position.xyz / gl_Position.w;
    }
    

    And in the fragment shader:

    // fragment shader
    varying highp vec3 vPosition;

    void main(void) {
        // ... overhang test elided; it sets the bool `isOverhang` ...
        // vPosition is in the (-1, 1) range, whereas gl_FragColor is
        // clamped to (0, 1), so remap it before writing it out
        gl_FragColor.xyz = 0.5 * (vPosition + 1.0);
        gl_FragColor.w = isOverhang ? 1.0 : 0.0;
    }
    

    Then in JS code:

    // w and h are the render target's width and height
    const pixelBuffer = new Uint8Array(4 * w * h);
    renderer.readRenderTargetPixels(renderTarget, 0, 0, w, h, pixelBuffer);
    for (let y = 0, offset = 0; y < h; ++y) {
        for (let x = 0; x < w; ++x, offset += 4) {
            // does the pixel correspond to an overhang area?
            if (pixelBuffer[offset + 3] > 0) {
                // map the (0, 255) bytes back to the (-1, 1) range
                const posX = 2 * pixelBuffer[offset] / 255 - 1;
                const posY = 2 * pixelBuffer[offset + 1] / 255 - 1;
                const posZ = 2 * pixelBuffer[offset + 2] / 255 - 1;
                // ...
            }
        }
    }
    
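    The snippet above assumes the scene has already been rendered into renderTarget. A minimal sketch of that setup (the size here is a placeholder, and on older three.js versions you'd pass the target to render() instead):

    const w = 512, h = 512; // detection-pass resolution (placeholder values)
    const renderTarget = new THREE.WebGLRenderTarget(w, h);
    renderer.setRenderTarget(renderTarget);
    renderer.render(scene, camera); // scene drawn with the overhang ShaderMaterial
    renderer.setRenderTarget(null); // back to the default framebuffer
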

    However, 8-bit precision may not be enough for your purposes. In that case, you can use FLOAT or HALF_FLOAT render targets (if the browser supports them).
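
    For example, a minimal sketch of a float render target, assuming the required extension (such as EXT_color_buffer_float on WebGL 2) is available:

    const renderTarget = new THREE.WebGLRenderTarget(w, h, {
        format: THREE.RGBAFormat,
        type: THREE.FloatType, // or THREE.HalfFloatType
    });
    // read back into a Float32Array instead of a Uint8Array,
    // so no (0, 255) encoding/decoding is needed
    const floatBuffer = new Float32Array(4 * w * h);
    renderer.readRenderTargetPixels(renderTarget, 0, 0, w, h, floatBuffer);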

    You may also try a GPGPU approach. Basically, most of the time it means using fragment shaders to compute some value(s), which are then stored in a texture (usually FLOAT or HALF_FLOAT too) and either read back to the CPU or sampled in subsequent draws to use the computed values. There's a lot of information about GPGPU in WebGL, e.g. this.
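
    If you'd rather not wire that up by hand, three.js ships a helper for this pattern in its examples, GPUComputationRenderer. A minimal sketch; the compute shader string is a placeholder you would fill with the overhang logic:

    import { GPUComputationRenderer } from 'three/examples/jsm/misc/GPUComputationRenderer.js';

    const gpuCompute = new GPUComputationRenderer(w, h, renderer);
    const initialTexture = gpuCompute.createTexture(); // float texture holding the input data
    const overhangVar = gpuCompute.addVariable('textureOverhang', overhangComputeShader, initialTexture);
    const error = gpuCompute.init();
    if (error !== null) console.error(error);

    gpuCompute.compute();
    // the result is a float render target, readable with readRenderTargetPixels as above
    const result = gpuCompute.getCurrentRenderTarget(overhangVar);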

    Regarding transform feedback: yes, it's specifically used to store the results of a vertex shader in a buffer, which again can be read back to the CPU (rarely) or reused on the GPU, for example as an input for another, or even the same, vertex shader. But transform feedback is available only in WebGL 2.
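
    three.js doesn't expose transform feedback directly, so you would drop down to the raw WebGL 2 API. A rough sketch, assuming the renderer was created with a WebGL 2 context; buildProgram is a hypothetical helper that compiles and attaches both shaders (a trivial fragment shader is still required for linking), and the vertex shader writes its result to an `outPosition` varying:

    const gl = renderer.getContext(); // must be a WebGL2RenderingContext

    const program = buildProgram(gl, vertexSrc, fragmentSrc); // hypothetical helper
    // declare which varying(s) to capture, then (re)link the program
    gl.transformFeedbackVaryings(program, ['outPosition'], gl.INTERLEAVED_ATTRIBS);
    gl.linkProgram(program);

    // buffer that will receive the vertex shader's outputs (3 floats per vertex)
    const outBuffer = gl.createBuffer();
    gl.bindBuffer(gl.TRANSFORM_FEEDBACK_BUFFER, outBuffer);
    gl.bufferData(gl.TRANSFORM_FEEDBACK_BUFFER, vertexCount * 3 * 4, gl.STATIC_READ);

    const tf = gl.createTransformFeedback();
    gl.useProgram(program);
    gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, tf);
    gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, outBuffer);

    gl.enable(gl.RASTERIZER_DISCARD); // compute only, skip rasterization
    gl.beginTransformFeedback(gl.POINTS);
    gl.drawArrays(gl.POINTS, 0, vertexCount);
    gl.endTransformFeedback();
    gl.disable(gl.RASTERIZER_DISCARD);

    // read the captured outputs back to the CPU
    gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, null);
    const result = new Float32Array(vertexCount * 3);
    gl.bindBuffer(gl.TRANSFORM_FEEDBACK_BUFFER, outBuffer);
    gl.getBufferSubData(gl.TRANSFORM_FEEDBACK_BUFFER, 0, result);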