
WebGL heightmap using vertex shader, using 32 bits instead of 8 bits


I'm using the following vertex shader (courtesy of http://stemkoski.github.io/Three.js/Shader-Heightmap-Textures.html) to generate terrain from a grayscale height map:

uniform sampler2D bumpTexture;
uniform float bumpScale;

varying float vAmount;
varying vec2 vUV;

void main()
{
  vUV = uv;
  vec4 bumpData = texture2D( bumpTexture, uv );

  vAmount = bumpData.r; // assuming the map is grayscale, it doesn't matter whether you use r, g, or b.

  // move the position along the normal
  vec3 newPosition = position + normal * bumpScale * vAmount;

  gl_Position = projectionMatrix * modelViewMatrix * vec4( newPosition, 1.0);
}

I'd like to have 32 bits of resolution, and have generated a heightmap that encodes heights across the RGBA channels. I have no idea how to go about changing the shader code to accommodate this. Any direction or help?


Solution

  • bumpData.r, .g, .b and .a are all quantities in the range [0.0, 1.0] equivalent to the original byte values divided by 255.0.

    So depending on your endianness, a naive conversion back to the original int might be:

    (bumpData.r * 255.0) + 
    (bumpData.g * 255.0 * 256.0) + 
    (bumpData.b * 255.0 * 256.0 * 256.0) + 
    (bumpData.a * 255.0 * 256.0 * 256.0 * 256.0)
    

    So that's the same as a dot product with the vector (255.0, 65280.0, 16711680.0, 4278190080.0), which is likely to be a much more efficient way to implement it; see the sketch below.
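
    As a minimal, untested sketch, here is how the original vertex shader might be adapted, assuming the red channel holds the least significant byte and that you want vAmount normalized back into [0.0, 1.0] so bumpScale keeps its original meaning:

    uniform sampler2D bumpTexture;
    uniform float bumpScale;

    varying float vAmount;
    varying vec2 vUV;

    // Dot-product weights from above: (255, 255*256, 255*256^2, 255*256^3)
    const vec4 byteWeights = vec4( 255.0, 65280.0, 16711680.0, 4278190080.0 );

    void main()
    {
      vUV = uv;
      vec4 bumpData = texture2D( bumpTexture, uv );

      // Reassemble the 32-bit value with a single dot product, then
      // divide by 2^32 - 1 to normalize it back into [0.0, 1.0].
      vAmount = dot( bumpData, byteWeights ) / 4294967295.0;

      // move the position along the normal
      vec3 newPosition = position + normal * bumpScale * vAmount;

      gl_Position = projectionMatrix * modelViewMatrix * vec4( newPosition, 1.0 );
    }

    The channels are read in little-endian order here (red = least significant byte); if your encoder wrote the most significant byte into red, reverse the weight vector accordingly.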