Tags: opengl, opengl-es, glsl, mipmaps

How does a GLSL sampler determine the minification, and thus the mipmap level, of a texture?


I am working with OpenGL ES (via WebGL), but I think this question is applicable to the full OpenGL profile as well.

Suppose I create an OpenGL texture with full mipmap levels, and I set its TEXTURE_MIN_FILTER to NEAREST_MIPMAP_NEAREST. Also suppose that I have a fragment shader that samples this texture. The mipmap level is chosen based on the degree of minification of the texture, but how is the degree of minification chosen?

In my case, I am synthesizing (inside the shader) the texture coordinates that I use to sample my texture. In fact, my texture coordinates are not based on any incoming varyings. Even though I have mipmapping enabled on this texture, it doesn't seem to have any effect. Is this expected? Do I need to compute the LOD myself and pass it via the bias parameter of texture2D? (There is no texture2DLod in fragment shaders, since I'm using ES.)


Solution

  • Fragment shaders run on blocks of adjacent pixels in parallel. (IIRC the PowerVR chips do a 4x4 block at a time, for example.) When you call texture2D in your fragment shader, the sampler fetches samples for all the pixels in the block at once, so it has the adjacency information it needs: it takes the screen-space derivatives of the texture coordinates across the block, and the magnitude of those derivatives determines the degree of minification and thus the mipmap level. This is part of why it's so important for adjacent pixels to sample from nearby areas of the texture.

    Note that this only applies to fragment shaders. In vertex shaders the base mipmap level is always used (unless you call the Lod variant, texture2DLod).
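To make the derivative-based selection concrete, here is a sketch of how you could reproduce it yourself in a WebGL 1 fragment shader and then steer the level through the bias argument of texture2D. It assumes the GL_OES_standard_derivatives extension is available, and the uniform names (u_tex, u_texSize) and the way the coordinates are synthesized from gl_FragCoord are illustrative, not from the original post:

```glsl
#extension GL_OES_standard_derivatives : enable
precision mediump float;

uniform sampler2D u_tex;
uniform vec2 u_texSize;   // texture dimensions in texels (hypothetical uniform)

// Approximate the LOD the hardware would compute: take the screen-space
// derivatives of the texel-space coordinates and use the larger one.
float computeLod(vec2 uv) {
    vec2 dx = dFdx(uv * u_texSize);
    vec2 dy = dFdy(uv * u_texSize);
    float maxSqr = max(dot(dx, dx), dot(dy, dy));
    return 0.5 * log2(maxSqr);   // log2(sqrt(x)) == 0.5 * log2(x)
}

void main() {
    // Hypothetical synthesized coordinates, derived from the fragment
    // position rather than from a varying.
    vec2 uv = gl_FragCoord.xy / 64.0;

    // texture2D's third argument is a *bias* added to the implicitly
    // computed LOD. If the implicit derivatives are useless for your
    // synthesized coordinates, you can estimate the LOD yourself and
    // supply the difference as the bias.
    float lod = computeLod(uv);
    gl_FragColor = texture2D(u_tex, uv, lod);
}
```

Note that the third argument biases the implicit LOD rather than replacing it, so this only pins the level exactly when the implicit derivatives are near zero; WebGL 1 also offers texture2DLodEXT via the EXT_shader_texture_lod extension for an explicit level.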