opengl · graphics · textures · shader

Boolean texture


I need an efficient way of fetching booleans from a texture with OpenGL. The boolean array will be a huge 3D array; I can't afford wasted space, which means I can't afford to encode each boolean as a full byte.

Right now my solution is to pack the booleans into a 1D buffer texture of integers. Using bitwise operations in the shader, I can fetch individual booleans.
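
Roughly, my current fetch looks like this (just a sketch; boolBuf, dims and fetchBool are made-up names):

    uniform usamplerBuffer boolBuf; // packed booleans, 32 per uint
    uniform ivec3 dims;             // logical size of the boolean grid

    bool fetchBool(ivec3 p)
    {
        int idx  = p.x + dims.x * (p.y + dims.y * p.z); // 3D -> 1D index
        int word = idx >> 5;                            // which uint holds the bit
        int bit  = idx & 31;                            // bit position inside that uint
        return ((texelFetch(boolBuf, word).r >> bit) & 1u) != 0u;
    }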

There are two performance-eating problems with my solution. The index I calculate for each boolean is naturally a 3D coordinate, which means that if I want to be efficient and take advantage of the hardware, I should use a 3D texture to cut the performance penalty of the 3D -> 1D index conversion (am I correct in thinking this?)

The other problem, not as bad as the one above, is decoding the boolean from the fetched integer.

I notice that if I encode the booleans inside a 3D texture, I would still have to do a different but equally costly 3D -> 1D index conversion, because the bits stored inside an integer are laid out in 1D. Basically, each texel would store a small cube of booleans that is indexed with a 1D coordinate, so I don't appear to gain any performance: I still need to decode things fetched from the 3D texture.

Is my current solution the best option, or is there something better? I'm wondering if any OpenGL-supported compressed image formats would be of use here?


Solution

  • I should use a 3D texture to cut the performance penalty of the 3D -> 1D index conversion (am I correct in thinking this?)

    Actually, if you're clever about it, that conversion can be done very efficiently. If your texture dimensions are powers of two, it boils down to bit shifts and masking.
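
    For instance, the conversion could look something like this (a sketch; LOG_X and LOG_Y are hypothetical log2 sizes of the two fastest-varying dimensions):

    const int LOG_X = 9; // hypothetical: grid width  is 1 << LOG_X
    const int LOG_Y = 9; // hypothetical: grid height is 1 << LOG_Y

    int linearIndex(ivec3 p)
    {
        // x + y*width + z*width*height, with the multiplies turned into shifts;
        // OR is safe here because every component is smaller than its dimension
        return p.x | (p.y << LOG_X) | (p.z << (LOG_X + LOG_Y));
    }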

    I notice that if I encode the booleans inside a 3D texture, I would still have to do a different but equally costly 3D -> 1D index conversion, because the bits stored inside an integer are laid out in 1D. Basically, each texel would store a small cube of booleans that is indexed with a 1D coordinate, so I don't appear to gain any performance: I still need to decode things fetched from the 3D texture.

    That is what I'd have recommended. The conversion is trivial to implement using bit operations. Each texel covers two units in each direction, and two corresponds to one binary digit; with three dimensions, that makes exactly 3 bits. So to find the specific bit in your 8-bit (2×2×2) cube, you just OR together the x, y and z sub-texel positions, shifted up by 0, 1 and 2 bits respectively, i.e.

    int subbit = (x & 1) | ((y & 1) << 1) | ((z & 1) << 2); // bit index within the texel's 2x2x2 cube
    

    If you can make sure that x, y and z are only 0 or 1, you can save yourself the … & 1 masking.

    Given an integer texture coordinate texcoord, you get the x, y and z sub-texel coordinates by masking off the LSB of each component, and the texel to fetch from by shifting each component one bit to the right:

    ivec3 sub_texel = ivec3(texcoord.x & 1,  texcoord.y & 1,  texcoord.z & 1);  // position within the 2x2x2 cube
    ivec3 texel     = ivec3(texcoord.x >> 1, texcoord.y >> 1, texcoord.z >> 1); // texel to fetch
    

    If texcoord is a floating-point vec3 in the [0, 1] range, you first have to bring it into integer coordinates, of course.
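
    Putting it all together, the whole fetch could look roughly like this (a sketch; boolTex, tc and fetchBool are placeholder names, and each texel of the usampler3D is assumed to pack a 2x2x2 cube of booleans in its red channel):

    uniform usampler3D boolTex; // each texel packs a 2x2x2 cube of booleans

    bool fetchBool(vec3 tc) // tc in [0, 1]
    {
        ivec3 grid     = textureSize(boolTex, 0) * 2;   // boolean grid is twice the texel grid
        ivec3 texcoord = ivec3(floor(tc * vec3(grid))); // integer boolean-grid coordinate

        ivec3 sub_texel = texcoord & 1;                 // position within the 2x2x2 cube
        ivec3 texel     = texcoord >> 1;                // which texel to fetch

        int  subbit = sub_texel.x | (sub_texel.y << 1) | (sub_texel.z << 2);
        uint bits   = texelFetch(boolTex, texel, 0).r;  // the 8 packed booleans
        return ((bits >> subbit) & 1u) != 0u;
    }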

    I'm wondering if any OpenGL-supported compressed image formats would be of use here?

    No, because those compressed image formats are based on lossy compression, so your exact bit values would not survive.