
Leveraging WebGL to find bounding boxes


Given some vertices with xyz coordinates, it is easy to obtain an axis-aligned bounding box: just take the min/max x, y, and z values across the vertices. Ideally these values only have to be found once, before any rendering takes place.
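For reference, finding those initial values is a single linear pass over the vertex data. A minimal sketch (assuming positions live in a flat [x, y, z, x, y, z, ...] Float32Array; computeAabb is just an illustrative name, not any WebGL API):

function computeAabb(positions: Float32Array): { min: number[]; max: number[] } {
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];
  // One pass: widen the box to include every vertex.
  for (let i = 0; i < positions.length; i += 3) {
    for (let axis = 0; axis < 3; axis++) {
      const v = positions[i + axis];
      if (v < min[axis]) min[axis] = v;
      if (v > max[axis]) max[axis] = v;
    }
  }
  return { min, max };
}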

My question is: if the object is rotated, scaled, or translated, what's the best way to calculate the new xyz values that bound it? Do I have to go through all the vertices after every transform and find new min/max values?

Given this GLSL vertex shader:

#version 300 es

in vec4 a_position;
in vec4 a_color;

// transformation matrix
uniform mat4 u_matrix;

out vec4 v_color;

void main() {
  gl_Position = u_matrix * a_position;

  v_color = a_color;
}

My idea: would adding new out variables for bounding box coordinates work? Or is there a better way?

#version 300 es

in vec4 a_position;
in vec4 a_color;

// transformation matrix
uniform mat4 u_matrix;

out vec4 v_color;
out vec3 max_bounds;
out vec3 min_bounds;

void main() {
  vec4 position = u_matrix * a_position;
  if(position.x > max_bounds.x){
    max_bounds.x = position.x;
  }
  if(position.y > max_bounds.y){
    max_bounds.y = position.y;
  }
  if(position.z > max_bounds.z){
    max_bounds.z = position.z;
  }
  // ...

  gl_Position = position;
  v_color = a_color;
}

Solution

  • You can't, since your vertex shader code (and all other shader code) executes in parallel and its outputs only go to the next pipeline stage (the fragment shader in your case). The exception is transform feedback, where the outputs of a vertex shader can be written to buffers, but that only lets you map data, not gather/reduce it. A significant chunk of the performance advantage of GPUs comes from executing code in parallel; the ability to share data among those parallel threads is very limited, and not accessible via WebGL to begin with. On top of all that, your task (finding the min/max extents of a vertex array) is inherently sequential, as it requires shared state (the running min and max values) that is visible and current to all threads.

    Since AABBs are inherently rather loose fitting anyway, one common approach (if not the common approach) is to transform the 8 corner vertices of the untransformed mesh's AABB and gather a new AABB from those, as sketched below.
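    A minimal sketch of that corner approach (assuming a column-major mat4 in a Float32Array, as WebGL conventionally uses, and an affine transform so no perspective divide is needed; both function names are made up for illustration):

    function transformPoint(m: Float32Array, p: number[]): number[] {
      // Affine transform of a point (w = 1) by a column-major 4x4 matrix.
      return [
        m[0] * p[0] + m[4] * p[1] + m[8]  * p[2] + m[12],
        m[1] * p[0] + m[5] * p[1] + m[9]  * p[2] + m[13],
        m[2] * p[0] + m[6] * p[1] + m[10] * p[2] + m[14],
      ];
    }

    function transformedAabb(m: Float32Array, min: number[], max: number[]) {
      const newMin = [Infinity, Infinity, Infinity];
      const newMax = [-Infinity, -Infinity, -Infinity];
      // Enumerate the 8 corners of the untransformed box and bound them.
      for (const x of [min[0], max[0]])
        for (const y of [min[1], max[1]])
          for (const z of [min[2], max[2]]) {
            const p = transformPoint(m, [x, y, z]);
            for (let axis = 0; axis < 3; axis++) {
              if (p[axis] < newMin[axis]) newMin[axis] = p[axis];
              if (p[axis] > newMax[axis]) newMax[axis] = p[axis];
            }
          }
      return { min: newMin, max: newMax };
    }

    This touches 8 points per update instead of every vertex; the price is a slightly looser box, since you're bounding the transformed box rather than the transformed mesh.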

    Theoretically you could store the vertex positions in a floating point texture, transform them with a fragment (rather than vertex) shader, write the results back to a texture, and then run a series of gather passes that reduce chunks of X by X size (e.g. 64x64) to their min/max values, writing into increasingly smaller textures until you've reached a 1x1 pixel texture that you read back with readPixels. That said, this is simply not worth the effort (and probably slower for meshes with low vertex counts) just to get a slightly better fitting AABB; if you really need a tighter fit, you'd rather build a compound volume of better fitting bounding shapes and then gather a combined AABB from those.
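    For completeness, one such gather pass could look roughly like the fragment shader below, shown as a JS/TS string (a sketch only: the uniform names and tile scheme are made up, and rendering to a float texture additionally requires the EXT_color_buffer_float extension in WebGL2). A matching pass would use max(), or pack min and max side by side:

    const gatherMinFs = `#version 300 es
    precision highp float;

    // Float texture holding the transformed vertex positions.
    uniform highp sampler2D u_positions;
    // Each output texel reduces a u_tileSize x u_tileSize tile of the input.
    uniform int u_tileSize;

    out vec4 outColor;

    void main() {
      ivec2 tileOrigin = ivec2(gl_FragCoord.xy) * u_tileSize;
      vec3 result = vec3(1e38); // start well above any plausible coordinate
      for (int y = 0; y < u_tileSize; y++) {
        for (int x = 0; x < u_tileSize; x++) {
          vec3 p = texelFetch(u_positions, tileOrigin + ivec2(x, y), 0).xyz;
          result = min(result, p);
        }
      }
      outColor = vec4(result, 1.0);
    }`;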