
GLSL: Addressing a pixel of a 2D texture loaded via TEXTURE_BUFFER


I am doing some tests to load an image via a GL_TEXTURE_BUFFER. I am stuck on correctly addressing a pixel from texelFetch. The goal is just to display an R8 texture.

Here is how I create the buffer:

GLsizeiptr sz = width * height * sizeof(uint8_t);   // one byte per pixel (R8)
glGenBuffers( 1, &pbo );
glBindBuffer( GL_TEXTURE_BUFFER, pbo );
glBufferStorage( GL_TEXTURE_BUFFER, sz, NULL,
                 GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT );

I upload to it via a memcpy; the data are uint8.
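
A minimal sketch of how such an upload could be done, continuing the snippet above and assuming the buffer is mapped once, persistently, with glMapBufferRange right after the glBufferStorage call (pixels stands for the source image data and is illustrative, not from the original post):

    // Map once; the pointer stays valid because of GL_MAP_PERSISTENT_BIT /
    // GL_MAP_COHERENT_BIT, which must match the flags passed to glBufferStorage.
    uint8_t *ptr = (uint8_t *)glMapBufferRange( GL_TEXTURE_BUFFER, 0, sz,
                       GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT );

    // Later, whenever the image changes:
    memcpy( ptr, pixels, sz );   // pixels: width*height bytes of R8 data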

And finally I link it to the GL_TEXTURE_BUFFER texture:

glBindTexture( GL_TEXTURE_BUFFER, texture );
glTexBuffer( GL_TEXTURE_BUFFER, GL_R8, pbo );

At this point, I just draw a simple rectangle and pass UV coords to the fragment shader (the size is obviously 640x480). The fragment shader is:

uniform samplerBuffer texture;
in vec2 uv;
...
float f;
f =  640. * 480. * (uv.y) +  640. * (uv.x);
index = int(f);
color.r = texelFetch( texture, index )[0];

I have tried to normalize color.r ( / 255.f ), but nothing works. The strange behaviour is that if I modify the geometry of the window, the shader output changes constantly, and at some specific dimensions I can see the texture. I don't understand that, because my UV coordinates don't change ( regular 0,1 1,1 ... for a simple quad ).

My indexing computation is incorrect; that's why I wanted to know if someone could point out the problem.

( I have tested the shader with a regular GL_TEXTURE_2D, a GL_PIXEL_UNPACK_BUFFER and a sampler2D, and there is no problem. )

EDIT 1:

I have tried to pass the image dimensions as UV coords:

uv_min = {0,0}
uv_max = {640,480}

and compute the index in the fragment shader as:

int index = int( 640.*uv.y + 640.*uv.x );

Still a fail.


Solution

  • Your 2D-to-1D index calculation is off:

    float f;
    f =  640. * 480. * (uv.y) +  640. * (uv.x);
    index = int(f);
    

    Assuming a 640x480 pixel image and normalized texture coords in [0,1], the correct calculation needs to be

     int index = 640 * int(480 * uv.y) + int(640 * uv.x);
    

    "The strange behaviour is that if I modify the geometry of the window, the shader output changes constantly, and at some specific dimensions I can see the texture."

    Your 640. * 480. * (uv.y) will end up at an arbitrary pixel somewhere within a line, not at the beginning of the line, unless uv.y happens to fall exactly on an integer multiple of 1/480, and whether it does depends on the resolution the quad is rasterized at, which is why resizing the window changes the output. For example, for uv = (0.0, 0.251) your formula gives int(640. * 480. * 0.251) = 77107, which lands 307 pixels into row 120, whereas the corrected formula gives 640 * int(480. * 0.251) = 640 * 120 = 76800, the start of row 120.
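
    Putting that together, a minimal corrected fragment shader could look roughly like this (a sketch only; it assumes the image dimensions are passed in as an ivec2 uniform, here called dims, instead of being hardcoded, and it renames the sampler to tex):

        #version 330 core

        uniform samplerBuffer tex;   // buffer texture backed by the R8 TBO
        uniform ivec2 dims;          // image dimensions, e.g. (640, 480)

        in vec2 uv;
        out vec4 color;

        void main()
        {
            // Clamp so that uv == 1.0 doesn't index one past the last row/column.
            int x = min( int(uv.x * float(dims.x)), dims.x - 1 );
            int y = min( int(uv.y * float(dims.y)), dims.y - 1 );
            int index = y * dims.x + x;

            // GL_R8 is a normalized format, so .r is already in [0,1]; no /255 needed.
            float r = texelFetch( tex, index ).r;
            color = vec4( r, r, r, 1.0 );
        }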

    Also, from your comment:

    Correct me if I am wrong, but as far as I know, in GL, when I use a PIXEL_UNPACK_BUFFER and update the texture, the buffer is copied to the real texture memory block; I want to skip this copy by using a texture buffer.

    Technically, that is correct, but it misses the bigger picture. PBOs are a great way to update textures asynchronously (texture streaming), as sketched below.
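
    For comparison, a minimal sketch of the usual PBO upload path (names like unpackPbo, tex2d and pixels are illustrative; it assumes tex2d already has GL_R8 storage):

        GLuint unpackPbo;
        glGenBuffers( 1, &unpackPbo );
        glBindBuffer( GL_PIXEL_UNPACK_BUFFER, unpackPbo );
        glBufferData( GL_PIXEL_UNPACK_BUFFER, width * height, NULL, GL_STREAM_DRAW );

        // Stage the new image data in the PBO (glMapBufferRange works here too).
        glBufferSubData( GL_PIXEL_UNPACK_BUFFER, 0, width * height, pixels );

        // While a PIXEL_UNPACK_BUFFER is bound, the last argument is an offset into
        // that buffer, not a client pointer; the driver can perform the copy into the
        // texture's internal (tiled) storage asynchronously.
        glBindTexture( GL_TEXTURE_2D, tex2d );
        glTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, width, height,
                         GL_RED, GL_UNSIGNED_BYTE, (const void *)0 );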

    Using TBOs instead, you do actually bypass that copy, but you are also bypassing the TMU hardware units on your GPU when sampling from it, which means you completely lose all the texture filters, wrap modes, etc.

    Furthermore, 2D textures aren't stored in a linear way as they are in your input buffer to TexImage (and in your TBO here), but in a tiled-and-swizzled format to optimize cache hits when sampling. By bypassing the texture functionality completely, you will actually pay a higher cost when sampling the data, and your performance isn't necessarily higher that way.

    TBOs aren't really meant for texture data; they are for accessing arbitrary buffer data that isn't suitable for textures (and is bigger than what UBOs allow), and they are basically superseded by SSBOs in modern GL.
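
    For reference, the SSBO equivalent of such a fetch might look roughly like the sketch below (GL 4.3+; a sketch only, since std430 has no 8-bit type, the R8 data would have to be packed four pixels per uint, and the names words and fetchR8 are made up for illustration):

        layout(std430, binding = 0) buffer PixelData {
            uint words[];                    // four R8 pixels packed per 32-bit word
        };

        float fetchR8(int index)
        {
            uint word = words[index >> 2];                        // which 32-bit word
            uint b    = (word >> uint(8 * (index & 3))) & 0xFFu;  // which byte in it
            return float(b) / 255.0;                              // normalize manually
        }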