
Finding the size of a screen pixel in UV coordinates for use by the fragment shader


I've got a very detailed texture (with false-color information that I render via a false-color lookup in the fragment shader). My problem is that sometimes the user will zoom far away from this texture, and the fine detail is lost: thin lines in the texture can't be seen. I would like to modify my code to make these lines pop out.

My thinking is that I can run a fast filter over neighboring texels and pick out the biggest/smallest/most interesting value to render. What I'm not sure about is how to find out whether (and by how much) to do this. When the user is zoomed in on a triangle, I want the standard lookup; when they are zoomed out, a single pixel on the screen maps to many texture pixels.

How do I get an estimate of this? I am doing this with both orthographic and perspective cameras.

My thinking is that I could somehow use the vertex shader to get an estimate of how big one screen pixel is in UV space and pass that as a varying to the fragment shader, but I still don't have a solid enough grasp of the transforms and spaces to work out the details.

My current vertex shader is quite simple:

    varying vec2 vUv;
    varying vec3 vPosition;
    varying vec3 vNormal;
    varying vec3 vViewDirection;

    void main() {
        vUv = uv;
        vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
        vPosition = ( modelMatrix * vec4( position, 1.0 ) ).xyz;
        gl_Position = projectionMatrix * mvPosition;
        vec3 transformedNormal = normalMatrix * normal;
        vNormal = normalize( transformedNormal );
        vViewDirection = normalize( mvPosition.xyz );
    }

How do I get something like vDeltaUV, which gives the distance between screen pixels in UV units?

Constraints: I'm working in WebGL, inside three.js.

Here is an example of one image, where the user has zoomed a perspective camera in close to my texture:

[Image: zoomed-in view]

Here is the same example, but zoomed out; the feature above is a barely perceptible diagonal line near the center (see the coordinates to get a sense of scale). I want this line to pop out by rendering each screen pixel with the red-est color of the corresponding block of texels.

[Image: zoomed-out view]

Addendum (re LJ's comment): No, I don't think mipmapping will do what I want here, for two reasons.

First, I'm not directly mapping the texture; that is, I'm doing something like this (sample the input texture, then use its g/r channels as coordinates into a lookup texture):

    vec4 inputColor = texture2D(inputtexture, vUv);
    gl_FragColor = texture2D(mappingtexture, vec2(inputColor.g, inputColor.r));

The user dynamically creates the mappingtexture, which allows me to vary the false-color map in realtime. I think it's actually a very elegant solution to my application.

Second, I don't want to draw the AVERAGE value of neighboring pixels (i.e. smoothing); I want the most EXTREME value of neighboring pixels (i.e. something more akin to edge finding). "Extreme" in this case is defined by my encoding of the g/r color values in the input texture.

Solution: Thanks to the answer below, I've now got a working solution.

In my javascript code, I had to add:

  extensions: {derivatives: true}

to my declaration of the ShaderMaterial. Then in my fragment shader:

    float dUdx = dFdx(vUv.x); // Difference in U between this pixel and the one to the right.
    float dUdy = dFdy(vUv.x); // Difference in U between this pixel and the one above.
    float dU = sqrt(dUdx*dUdx + dUdy*dUdy);
    float pixel_ratio = dU * uInputTextureResolution;
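As an aside, the derivatives extension also provides fwidth, which gives a slightly coarser (L1 rather than Euclidean) estimate of the same footprint in one call. A sketch, assuming the same vUv varying and uInputTextureResolution uniform as above:

    // fwidth(p) == abs(dFdx(p)) + abs(dFdy(p)), per component.
    vec2 uvFootprint = fwidth(vUv);
    float pixel_ratio_fw = max(uvFootprint.x, uvFootprint.y) * uInputTextureResolution;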

This allows me to do things like this:

    float x = ...; // the u coordinate in pixels in the input texture
    float y = ...; // the v coordinate in pixels in the input texture
    vec4 inc = get_encoded_adc_value(x, y);

    // Extremum mapping:
    if (pixel_ratio > 2.0) {
        inc = most_extreme_value(inc, get_encoded_adc_value(x + 1.0, y));
    }
    if (pixel_ratio > 3.0) {
        inc = most_extreme_value(inc, get_encoded_adc_value(x - 1.0, y));
    }
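For anyone wanting a less subtle effect, the pair of if-statements above could be generalized to a small fixed neighborhood. This is only a sketch using the same get_encoded_adc_value / most_extreme_value helpers; note that GLSL ES 1.00 requires loop bounds to be compile-time constants:

    // Take the extremum over a 3x3 texel neighborhood when the
    // screen pixel covers many texels.
    if (pixel_ratio > 2.0) {
        for (int dx = -1; dx <= 1; dx++) {
            for (int dy = -1; dy <= 1; dy++) {
                inc = most_extreme_value(inc,
                    get_encoded_adc_value(x + float(dx), y + float(dy)));
            }
        }
    }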

The effect is subtle, but definitely there! The lines pop much more clearly.

Thanks for the help!


Solution

  • You can't do this in the vertex shader: it runs before rasterization, so it knows nothing about the output resolution. In the fragment shader, however, you can use dFdx, dFdy, and fwidth via the GL_OES_standard_derivatives extension (which is available pretty much everywhere) to estimate the sampling footprint.

    If you're not updating the texture in realtime, a simpler and more efficient solution would be to generate custom mip levels for it on the CPU.
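    To illustrate the custom-mip idea: instead of letting the GPU average 2x2 blocks, you build each smaller level yourself with your own reduction. A minimal sketch in plain JavaScript for a square, power-of-two, single-channel image; the moreExtreme comparator is a stand-in for whatever ordering the questioner's texture encoding defines:

```javascript
// Build the next mip level of a square, power-of-two, single-channel image,
// keeping the most "extreme" value in each 2x2 block instead of the average.
// `moreExtreme(candidate, best)` returns true if candidate should win.
function nextMipLevel(data, size, moreExtreme) {
  const half = size / 2;
  const out = new Float32Array(half * half);
  for (let y = 0; y < half; y++) {
    for (let x = 0; x < half; x++) {
      // The four source texels covered by this destination texel.
      const a = data[(2 * y) * size + 2 * x];
      const b = data[(2 * y) * size + 2 * x + 1];
      const c = data[(2 * y + 1) * size + 2 * x];
      const d = data[(2 * y + 1) * size + 2 * x + 1];
      let best = a;
      for (const v of [b, c, d]) {
        if (moreExtreme(v, best)) best = v;
      }
      out[y * half + x] = best;
    }
  }
  return out;
}

// Example: treat "extreme" as largest magnitude, so a thin strong line
// survives the downsample instead of being averaged away.
const level0 = new Float32Array([
  0.0, 0.1, -0.9, 0.2,
  0.3, 0.0,  0.1, 0.1,
  0.0, 0.0,  0.0, 0.8,
  0.0, 0.0,  0.2, 0.1,
]);
const level1 = nextMipLevel(level0, 4, (v, best) => Math.abs(v) > Math.abs(best));
```

    The resulting levels could then be uploaded as the texture's mipmaps (in three.js, by filling the texture's mipmaps array and disabling automatic mipmap generation).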