Tags: opengl, cuda, textures, buffer, opengl-4

OpenGL 4.5 Buffer Texture: extensions support


I use OpenGL version 4.5.0, but somehow I cannot make the texture buffer object extensions ("GL_EXT_texture_buffer_object" or "GL_ARB_texture_buffer_object") work for me. I am quite new to OpenGL, but if I understand correctly, these extensions are quite old and their functionality has long since been included in the core...

I looked for the extensions with "OpenGL Extensions Viewer 4.1", and it says they are supported on my computer; glewGetExtension("GL_EXT_texture_buffer_object") and glewGetExtension("GL_ARB_texture_buffer_object") both return true as well.
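For what it's worth, the same check can also be done without GLEW: since OpenGL 3.0 the core way to enumerate extensions is glGetStringi. A minimal sketch (assuming a current context and the standard C headers):

    GLint numExtensions = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &numExtensions);
    for (GLint i = 0; i < numExtensions; i++)
    {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, i);
        if (strstr(ext, "texture_buffer_object") != NULL)
            printf("%s\n", ext);   // prints any *_texture_buffer_object extension strings
    }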

But the data from the buffer does not appear in the texture sampler (in the fragment shader the texture contains only zeros).

So I thought maybe the extensions are somehow disabled by default, and I added directives to enable them in my fragment shader:

#version 440 core
#extension GL_ARB_texture_buffer_object : enable
#extension GL_EXT_texture_buffer_object : enable

And now I get these warnings at run time:

***GLSL Linker Log:
Fragment info
-------------
0(3) : warning C7508: extension ARB_texture_buffer_object not supported
0(4) : warning C7508: extension EXT_texture_buffer_object not supported

Please see the code example below:

//#define GL_TEXTURE_BIND_TARGET GL_TEXTURE_2D
#define GL_TEXTURE_BIND_TARGET GL_TEXTURE_BUFFER_EXT

.....
    glGenTextures(1, &texObject);
    glBindTexture(GL_TEXTURE_BIND_TARGET, texObject);

    GLuint bufferObject;
    glGenBuffers(1, &bufferObject);

    // Bind the buffer to the texture-buffer binding point (OpenGL is state-based)
    glBindBuffer(GL_TEXTURE_BIND_TARGET, bufferObject);

    // Allocate storage and fill it with a test gradient via a mapped pointer
    glBufferData(GL_TEXTURE_BIND_TARGET, nWidth*nHeight*4*sizeof(float), NULL, GL_DYNAMIC_DRAW);
    float *test = (float *)glMapBuffer(GL_TEXTURE_BIND_TARGET, GL_READ_WRITE);
    for (int i = 0; i < nWidth*nHeight*4; i++)
        test[i] = i / (nWidth*nHeight*4.0f);
    glUnmapBuffer(GL_TEXTURE_BIND_TARGET);

    // Attach the buffer as the texture's data store
    glTexBufferEXT(GL_TEXTURE_BIND_TARGET, GL_RGBA32F_ARB, bufferObject);


    //glTexImage2D(GL_TEXTURE_BIND_TARGET, 0, components, nWidth, nHeight,
    //                0, format, GL_UNSIGNED_BYTE, data);

............

So if I use the GL_TEXTURE_2D target and load a data array directly into the texture, everything works fine. If I use the GL_TEXTURE_BUFFER_EXT target and try to source the texture from the buffer, I get an empty texture in the shader.

Note: I have to load the texture data from a buffer because in my real project I generate the data on the CUDA side, and the only way (that I know of) to visualize data from CUDA is through such buffer textures.
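For context, the way I get the data across is the standard CUDA/OpenGL interop path: register the GL buffer with CUDA once, then map it each frame and let a kernel write into the mapped device pointer. A minimal sketch with the CUDA runtime API (fillKernel is an illustrative placeholder; bufferObject is the GL buffer from the code above):

    #include <cuda_runtime.h>
    #include <cuda_gl_interop.h>

    cudaGraphicsResource *cudaRes = NULL;

    // Register once, after glBufferData has allocated the buffer
    cudaGraphicsGLRegisterBuffer(&cudaRes, bufferObject, cudaGraphicsMapFlagsWriteDiscard);

    // Per frame: map, fill on the device, unmap before OpenGL samples the texture
    float *devPtr = NULL;
    size_t numBytes = 0;
    cudaGraphicsMapResources(1, &cudaRes, 0);
    cudaGraphicsResourceGetMappedPointer((void **)&devPtr, &numBytes, cudaRes);
    // fillKernel<<<grid, block>>>(devPtr, nWidth*nHeight*4);  // hypothetical kernel
    cudaGraphicsUnmapResources(1, &cudaRes, 0);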

So the questions are: 1) Why do I get no data in the texture, even though the OpenGL version is sufficient and Extensions Viewer shows the extensions as supported? 2) Why does trying to enable the extensions in the shader fail?

Edit: I updated the post because I found the reason for the "Invalid Enum" error that I originally mentioned; it was caused by a glTexParameteri call, which is not allowed for buffer textures.


Solution

  • I solved this. I was in a hurry and stupidly missed a very important point on the wiki page:

    https://www.opengl.org/wiki/Buffer_Texture

    Access in shaders

    In GLSL, buffer textures can only be accessed with the texelFetch function. This function takes pixel offsets into the texture rather than normalized texture coordinates. The sampler type for buffer textures is samplerBuffer.

    So in GLSL we should use buffer textures like this:

    uniform samplerBuffer myTexture;

    void main(void)
    {
        // index is an integer texel offset, not a normalized coordinate
        vec4 color = texelFetch(myTexture, index);
    }

    and not like regular textures:

    uniform sampler1D myTexture;

    void main(void)
    {
        vec4 color = texture(myTexture, gl_FragCoord.x);
    }

    As for the warnings about unsupported extensions: I think I get them because this functionality has been part of the core since OpenGL 3.1, so the extensions no longer need to be enabled explicitly.
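    As a recap of the C side, the whole setup can also be written purely against the core API, with no EXT/ARB suffixes; a minimal sketch, reusing nWidth/nHeight from the question:

    GLuint tbo, tex;

    // Create the buffer that will back the texture and allocate its storage
    glGenBuffers(1, &tbo);
    glBindBuffer(GL_TEXTURE_BUFFER, tbo);
    glBufferData(GL_TEXTURE_BUFFER, nWidth*nHeight*4*sizeof(float), NULL, GL_DYNAMIC_DRAW);

    // Create the buffer texture and attach the buffer as its data store
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_BUFFER, tex);
    glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo);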