Tags: opengl, memory-barriers, compute-shader

Deallocate buffer after reading from GL compute shader


I have a GPU implementation of Marching Cubes that uses a sequence of 6 GL compute shaders, each reading from buffers written by the previous stages, after the appropriate memory barriers. The buffers used in earlier stages hold temporary marker variables and should be resized to 0 when no longer needed, but not deleted, as I'll want them again for later runs.

In some stages, I need to read from a buffer in a shader then deallocate it immediately after the shader completes, before allocating buffers for the next shader stage. My question is how to do this safely. The memory barrier docs talk about ensuring all writes are completed before allowing another shader to read, but say nothing about reads in the first shader.

If I do:

glUseProgram(firstShader);
glDispatchCompute(size, 1, 1);
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);             // meant to fence the buffer updates below
glNamedBufferData(firstBuffer, 0, NULL, GL_DYNAMIC_DRAW);  // resize to 0
glNamedBufferData(secondBuffer, 1000000, &data, GL_DYNAMIC_DRAW);  // allocate for the next stage
glUseProgram(secondShader);
glDispatchCompute(size, 1, 1);

is firstBuffer guaranteed not to be resized until firstShader is done reading from it? If not, how do I make this happen?


Solution

  • …and should be resized to 0 when no longer needed, but not deleted as I'll want them again for later runs.

    Resizing a buffer is equivalent to deleting it and allocating a new buffer on the same id.
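
    A minimal sketch of that equivalence (the buffer id and sizes here are hypothetical):

    GLuint buf;
    glCreateBuffers(1, &buf);
    glNamedBufferData(buf, 1024, NULL, GL_DYNAMIC_DRAW);  // allocate initial storage

    // "Resizing" does not grow the existing allocation: it detaches the old
    // storage from the id and attaches freshly allocated storage of the new size.
    glNamedBufferData(buf, 4096, NULL, GL_DYNAMIC_DRAW);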

  • In some stages, I need to read from a buffer in a shader then deallocate it immediately after the shader completes, before allocating buffers for the next shader stage. My question is how to do this safely.

    Just delete it. Deleting the buffer right after the first stage only deletes the id; the id is just another reference to the actual buffer object. When you resize or delete a buffer, only that association between the id and the actual buffer is severed. Resizing actually creates a new buffer and re-associates the id with it; in fact, calling glBufferData does the same thing (in contrast to glBufferSubData). This is called "orphaning".
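
    To make the contrast concrete (the buffer id and payload are hypothetical): glNamedBufferData orphans, glNamedBufferSubData does not.

    GLfloat payload[256] = {0};  // hypothetical payload

    // Orphaning: the id is re-pointed at brand-new storage. Commands already
    // submitted keep using the old storage until they finish.
    glNamedBufferData(buf, sizeof(payload), NULL, GL_DYNAMIC_DRAW);

    // In-place update: writes into the storage the id currently refers to,
    // so it must be synchronized against shaders still using that storage.
    glNamedBufferSubData(buf, 0, sizeof(payload), payload);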

    The actual buffer storage is deallocated once the last reference to it, whether from an id or from a GL command still using it, is released.
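
    Applied to the sequence from the question, a sketch of the resulting pattern (same hypothetical handles as in the question; the barrier shown assumes secondShader reads firstShader's results through shader storage buffers):

    glUseProgram(firstShader);
    glDispatchCompute(size, 1, 1);

    // Safe immediately: the id is re-pointed at empty storage, while the old
    // storage survives until the in-flight dispatch has finished reading it.
    glNamedBufferData(firstBuffer, 0, NULL, GL_DYNAMIC_DRAW);
    glNamedBufferData(secondBuffer, 1000000, &data, GL_DYNAMIC_DRAW);

    // Still required so secondShader sees firstShader's writes.
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);

    glUseProgram(secondShader);
    glDispatchCompute(size, 1, 1);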