
Is it possible to accumulate fragment counts in a uint32 format texture?


I want to count the number of fragments at each pixel (with the depth test disabled). I have tried enabling blending and setting glBlendFunc(GL_ONE, GL_ONE) to accumulate them. This works just fine with a float32 format texture bound to an FBO, but I think a uint32 format texture (e.g. GL_R32UI) is more intuitive for this task. However, I can't get the expected behavior. It seems each fragment just overwrites the texture. I just wonder if there are other methods to do the accumulation on integer format textures.
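For reference, this is roughly the float32 setup that works for me; `countFbo`, `countProgram` and `drawScene()` are just placeholders for my actual objects and draw calls:

```c
/* Additive blending into a GL_R32F color attachment; the fragment
 * shader simply writes 1.0, so each fragment adds one to the pixel. */
glBindFramebuffer(GL_FRAMEBUFFER, countFbo);   /* FBO with a GL_R32F attachment */
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);                   /* dst = dst + src => per-pixel count */
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(countProgram);                    /* fragment shader outputs 1.0 */
drawScene();                                   /* placeholder for the actual draws */
```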


Solution

  • However, I can't get the expected behavior. It seems each fragment just overwrites the texture.

    That's because the blending stage is not available on pure integer framebuffer formats.

    but I think a uint32 format texture (e.g. GL_R32UI) is more intuitive for this task.

    Well, is it? What does "intuitive" even mean here? First of all, a GL_R16F format is probably enough for a reasonable amount of overdraw, and it would reduce bandwidth demands a lot (which seems to be the limiting factor for such a pass).

    I just wonder if there are other methods to do the accumulation on integer format textures.

    I can see two ways. I doubt that either of them is really more "intuitive", but if you absolutely need the result as an integer, you could try these:

    1. Don't use a framebuffer at all, but use image load/store on an unsigned integer texture in the fragment shader. Use atomic operations, in particular imageAtomicAdd, to count the number of fragments at each fragment location. Note that if you go that route, you're outside of the GL's automatic synchronization paths, and you'll have to add an explicit glMemoryBarrier call after that render pass (see the first sketch after this list).

    2. You could also just use a standard normalized integer format like GL_R8 (or GL_R16), use blending as before, but have the fragment shader output 1.0/255.0 (or 1.0/65535.0, respectively). The data which ends up in the framebuffer will be integer in the end. If you need this data on the CPU, you can directly read it back; if you need it on the GPU, you can use glTextureView to reinterpret the data as an unnormalized integer texture without a copy/conversion step (see the second sketch below).
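A minimal sketch of option 1. The binding point, the `countTex` name, the GLSL version, and the clearing strategy are my assumptions, not something from your code:

```c
/* Fragment shader: atomically increment the counter image at this pixel. */
static const char *countFS =
    "#version 420 core\n"
    "layout(binding = 0, r32ui) uniform uimage2D counter;\n"
    "void main() {\n"
    "    imageAtomicAdd(counter, ivec2(gl_FragCoord.xy), 1u);\n"
    "}\n";

/* Setup: immutable GL_R32UI texture, cleared to zero, bound as an image. */
glBindTexture(GL_TEXTURE_2D, countTex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32UI, width, height);
glClearTexImage(countTex, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, NULL); /* GL 4.4+; otherwise upload zeros */
glBindImageTexture(0, countTex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_R32UI);

/* ... render the counting pass (depth test and color writes disabled) ... */

/* Make the image writes visible to whatever reads the counts next;
 * the barrier bits depend on how you consume the data. */
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT | GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
```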
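And a sketch of option 2. Note that glTextureView (GL 4.3+) requires the source texture to have immutable storage (glTexStorage2D); `countTex` and `viewTex` are assumed names:

```c
/* Render with additive blending into a normalized GL_R8 color attachment.
 * The fragment shader outputs 1.0/255.0, so each fragment adds exactly one
 * to the stored 8-bit value (the count saturates at 255). */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
/* fragment shader output:  color = vec4(1.0 / 255.0); */

/* Later, reinterpret the same storage as unsigned integers, no copy needed. */
GLuint viewTex;
glGenTextures(1, &viewTex);
glTextureView(viewTex, GL_TEXTURE_2D, countTex, GL_R8UI, 0, 1, 0, 1);
```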