
Weird GLSL float color value in fragment shader on iOS


I am trying to write a simple GLSL fragment shader on an iPad 2 and I am running into a strange issue with the way OpenGL seems to represent an 8-bit "red" value once a pixel value has been converted into a float as part of the texture upload. What I want to do is pass in a texture that contains a large number of 8-bit table indexes along with a 32bpp table of the actual pixel values.

My texture data looks like this:

  // Lookup table stored in a texture

  const uint32_t pixel_lut_num = 7;
  uint32_t pixel_lut[pixel_lut_num] = {
    // 0 -> 3 = w1 -> w4 (w4 is pure white)
    0xFFA0A0A0,
    0xFFF0F0F0,
    0xFFFAFAFA,
    0xFFFFFFFF,
    // 4 = red
    0xFFFF0000,
    // 5 = green
    0xFF00FF00,
    // 6 = blue
    0xFF0000FF
  };

  uint8_t indexes[4*4] = {
    0, 1, 2, 3,
    4, 4, 4, 4,
    5, 5, 5, 5,
    6, 6, 6, 6
  };

Each texture is then bound and the texture data is uploaded like so:

  GLuint texIndexesName;
  glGenTextures(1, &texIndexesName);
  glActiveTexture(GL_TEXTURE0);
  glBindTexture(GL_TEXTURE_2D, texIndexesName);

  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

  glTexImage2D(GL_TEXTURE_2D, 0, GL_RED_EXT, width, height, 0, GL_RED_EXT, GL_UNSIGNED_BYTE, indexes);

  GLuint texLutName;
  glGenTextures(1, &texLutName);
  glActiveTexture(GL_TEXTURE1);
  glBindTexture(GL_TEXTURE_2D, texLutName);

  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, pixel_lut_num, 1, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixel_lut);

I am confident that the texture setup and the uniform values are working as expected, because the fragment shader mostly works with the following code:

varying highp vec2 coordinate;
uniform sampler2D indexes;
uniform sampler2D lut;

void main()
{
  // normalize to (RED * 42.5) then lookup in lut
  highp float val = texture2D(indexes, coordinate.xy).r;
  highp float normalized = val * 42.5;
  highp vec2 lookupCoord = vec2(normalized, 0.0);
  gl_FragColor = texture2D(lut, lookupCoord);
}

The code above takes an 8-bit index and looks up a 32bpp BGRA pixel value in lut. The part that I do not understand is where this 42.5 value is defined in OpenGL. I found the scale value through trial and error, and I have confirmed that with 42.5 the output colors for each pixel are correct (meaning the index used for each lut lookup is right). But how exactly does OpenGL come up with this value?

In looking at this OpenGL man page, I find mention of two color constants, GL_c_SCALE and GL_c_BIAS, that seem to be used when converting the 8-bit "index" value to the floating point value used internally by OpenGL. Where are these constants defined, and how could I query their values at runtime or compile time? Is the actual floating point value of the "index" texture the real issue here? I am at a loss to understand why the texture2D(indexes, ...) call returns this funky value. Is there some other way to get an int or float value for the index that works on iOS? I tried looking at 1D textures, but they do not seem to be supported.


Solution

  • Your color index values will be accessed as 8-bit UNORMs, so the range [0,255] is mapped to the floating point interval [0,1]. When you access your LUT texture, the texcoord range is also [0,1], but your LUT texture is currently only 7 texels wide. So with your magic value of 42.5, you end up with the following:

    INTEGER INDEX: 0: FP: 0.0 ->  TEXCOORD: 0.0 * 42.5 == 0.0
    INTEGER INDEX: 6: FP: 6.0/255.0 ->  TEXCOORD: (6.0/255.0) * 42.5 == 0.9999999...
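
    Just to make this concrete, here is a small standalone C sketch (not part of the original code) that prints the lookup coordinate each of the 7 indexes produces under the 42.5 scale, and which LUT texel a NEAREST fetch at that coordinate would select:

    #include <stdio.h>

    int main(void) {
      const int n = 7;                           /* LUT width (pixel_lut_num) */
      for (int i = 0; i < n; i++) {
        float val   = (float)i / 255.0f;         /* 8-bit UNORM: index -> [0,1] */
        float coord = val * 42.5f;               /* the "magic" scale from the question */
        int   texel = (int)(coord * (float)n);   /* NEAREST texel selection */
        if (texel > n - 1) texel = n - 1;        /* clamp at coord == 1.0 */
        printf("index %d -> coord %.6f -> texel %d\n", i, coord, texel);
      }
      return 0;
    }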
    

    That mapping is close, but not 100% correct, since you do not map to texel centers. To get the correct mapping (see this answer for details), you would need something like:

    INTEGER INDEX:   0: FP:         0.0 ->  TEXCOORD: 0.0 + 1.0/(2.0 *n)
    INTEGER INDEX: n-1: FP: (n-1)/255.0 ->  TEXCOORD: 1.0 - 1.0/(2.0 *n)
    

    where n is pixel_lut_num from your code above.

    So a single scale value is not enough; you actually need an additional offset. The correct values would be:

    scale=  (255 * (1 - 1/n)) / (n-1)    ==  36.428...
    offset= 1/(2.0*n)                    ==  0.0714...
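
    Putting both together, a corrected version of the fragment shader could look something like the following sketch, with scale and offset hard-coded for the 7-entry LUT (passing them in as uniforms would be more flexible):

    varying highp vec2 coordinate;
    uniform sampler2D indexes;
    uniform sampler2D lut;

    void main()
    {
      // scale  = (255.0 * (1.0 - 1.0/7.0)) / (7.0 - 1.0) = 255.0/7.0
      // offset = 1.0 / (2.0 * 7.0)
      highp float scale  = 255.0 / 7.0;
      highp float offset = 1.0 / 14.0;

      highp float val = texture2D(indexes, coordinate.xy).r;
      // 0.5 hits the vertical texel center of the one-texel-high LUT
      highp vec2 lookupCoord = vec2(val * scale + offset, 0.5);
      gl_FragColor = texture2D(lut, lookupCoord);
    }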
    

    One more thing: you shouldn't use GL_LINEAR as the LUT texture's minification filter; use GL_NEAREST instead, so that neighboring LUT entries are never blended together.
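
    For example, assuming texLutName is still bound on texture unit 1 as in the setup code above, the LUT could simply use nearest filtering in both directions:

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);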