Tags: c++, opengl, glsl, textures, render-to-texture

OpenGL, render to texture with floating point color without clipping value


I am not really sure what the English name for what I am trying to do is; please tell me if you know.

In order to run some physically based lighting calculations, I need to write floating-point data to a texture using one OpenGL shader and read that data back in another OpenGL shader, but the data I want to store may be less than 0 or greater than 1.

To do this, I set up a framebuffer that renders to this texture as follows (this is C++):

//Set up the light map we will use for lighting calculation
glGenFramebuffers(1, &light_Framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, light_Framebuffer);
glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);//Needed for light blending (true additive)

glGenTextures(1, &light_texture);
glBindTexture(GL_TEXTURE_2D, light_texture);
//Initialize empty, and at the size of the internal screen
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_FLOAT, 0);

//No interpolation, I want pixelation
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

//Now the light framebuffer renders to the texture we will use to calculate dynamic lighting
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, light_texture, 0);
GLenum DrawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, DrawBuffers);//Color attachment 0 as before

Notice that I use the type GL_FLOAT and not GL_UNSIGNED_BYTE; according to this discussion, a floating-point texture should not be clamped to between 0 and 1.

Now, just to test that this is true, I simply set the color somewhere outside this range in the fragment shader which creates this texture:

#version 400 core

out vec4 color;

void main()
{
    color = vec4(2.0, -2.0, 2.0, 2.0);
}

After rendering to this texture, I send this texture to the program which should use it like any other texture:

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, light_texture );//This is the texture I rendered to
glUniform1i(surf_lightTex_ID , 1);//This is the ID in the main display program

Again, just to check that this is working, I replaced the fragment shader with one that tests whether the colors were saved:

#version 400 core

uniform sampler2D lightSampler;

in vec2 fragment_pos_uv;
out vec4 color;

void main()
{
    color = vec4(0, 0, 0, 1);
    if (texture(lightSampler, fragment_pos_uv).r > 1.0)
        color.r = 1;
    if (texture(lightSampler, fragment_pos_uv).g < 0.0)
        color.g = 1;
}

If everything worked, everything should turn yellow, but needless to say this only gives me a black screen. So I tried the following:

#version 400 core

uniform sampler2D lightSampler;

in vec2 fragment_pos_uv;
out vec4 color;

void main()
{
    color = vec4(0, 0, 0, 1);
    if (texture(lightSampler, fragment_pos_uv).r == 1.0)
        color.r = 1;
    if (texture(lightSampler, fragment_pos_uv).g == 0.0)
        color.g = 1;
}

And I got this:

[Screenshot of my testing scene; nothing should be yellow if this worked.]

The parts which are green are in shadow in the testing scene; never mind them. The main point is that all the channels of light_texture get clamped to between 0 and 1, which they should not be. I am not sure whether the data is saved correctly and only clamped when I read it, or whether it is already clamped to [0, 1] when it is saved.
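One way to narrow that down (a sketch, assuming light_texture and the w and h used when it was created are in scope) is to read the raw texel data back to the CPU as floats with glGetTexImage:

//Sketch: read the texture back to the CPU as floats (needs <vector> and <cstdio>)
std::vector<float> pixels(static_cast<size_t>(w) * h * 4);
glBindTexture(GL_TEXTURE_2D, light_texture);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, pixels.data());
//If these are already clamped to [0,1], the values were clamped on write,
//not when the second shader samples the texture
printf("r=%f g=%f\n", pixels[0], pixels[1]);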

So, my question is: is there some way to read from and write to an OpenGL texture such that the stored data may be above 1 or below 0?

Also, no, I cannot fix the problem by using a 32-bit integer per channel and applying a sigmoid function before saving and its inverse after reading the data; that would break alpha blending.
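To spell out why: the hardware blends the encoded values that are stored in the framebuffer, and a sigmoid is not additive, so encode-blend-decode does not give the blend of the original values. A rough numerical illustration (assuming plain additive blending and a hypothetical sigmoid encode/decode pair, just for the numbers):

#include <cmath>
#include <cstdio>

//Hypothetical encode/decode pair, only for illustration
static float sigmoid(float x)     { return 1.0f / (1.0f + std::exp(-x)); }
static float sigmoid_inv(float y) { return -std::log(1.0f / y - 1.0f); }

int main()
{
    float a = 0.25f, b = -0.5f;   //two light contributions
    float wanted = a + b;         //additive blending should give -0.25
    //The hardware adds the *encoded* values, and only then do we decode:
    float decoded = sigmoid_inv(sigmoid(a) + sigmoid(b));
    std::printf("wanted %f, got %f\n", wanted, decoded); //wanted -0.25, got ~2.75
}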


Solution

  • The format and type arguments of glTexImage2D only specify the format of the source image data; they do not affect the internal format of the texture. You must use a sized floating-point internal format, e.g. GL_RGBA32F:

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
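    For completeness, a minimal sketch of how the allocation could look in the context of the setup from the question (names as in the question; since no pixel data is passed, the format/type arguments do not matter here, and GL_RGBA16F would also work if half-float precision is enough):

    glGenTextures(1, &light_texture);
    glBindTexture(GL_TEXTURE_2D, light_texture);
    //GL_RGBA32F is a sized floating-point internal format, so values outside [0,1] are kept
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0, GL_RGBA, GL_FLOAT, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, light_texture, 0);
    //Worth verifying that the driver accepts the floating-point attachment
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        printf("light framebuffer is not complete\n");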