In a round-robin scheme you usually have a few buffers and you cycle between them. How do you manage GLFW callbacks in this situation?
Suppose you have 3 buffers. You submit draw commands with a specific viewport into the first one, but while the CPU is filling the second one, a callback arrives, say a window resize. The server may still be rendering what was submitted with the previous viewport size, causing some artifacts. That is just one example, but it would happen for literally everything, right? An easy fix would be to process the callbacks (the last ones received) only after rendering the last buffer, and to block the client until the server has processed all the commands. Is that correct (which would imply a frame of delay per buffer)? Is there something else that could be done?
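To make the idea concrete, here is a minimal sketch of the scheme I have in mind (C with GLFW; BUFFER_COUNT and record_draw_commands are made-up placeholders, not a real API):

#include <GLFW/glfw3.h>

#define BUFFER_COUNT 3

static int fb_width = 800, fb_height = 600;

/* placeholder for "fill buffer i with draw commands", defined elsewhere */
void record_draw_commands(int buffer_index);

static void framebuffer_size_callback(GLFWwindow *win, int w, int h)
{
    (void)win;
    fb_width  = w;   /* only remember the new size here ...           */
    fb_height = h;   /* ... it is applied at the start of a new round */
}

int main(void)
{
    if (!glfwInit())
        return 1;
    GLFWwindow *window = glfwCreateWindow(fb_width, fb_height, "demo", NULL, NULL);
    glfwMakeContextCurrent(window);
    glfwSetFramebufferSizeCallback(window, framebuffer_size_callback);

    while (!glfwWindowShouldClose(window)) {
        for (int i = 0; i < BUFFER_COUNT; ++i) {
            glViewport(0, 0, fb_width, fb_height);
            record_draw_commands(i);      /* fill buffer i */
            glfwSwapBuffers(window);
        }
        glFinish();         /* block until the server has executed everything */
        glfwPollEvents();   /* only now run the queued callbacks              */
    }
    glfwTerminate();
    return 0;
}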
OpenGL's internal state machine takes care of all of that. All OpenGL commands go into a command queue and are executed in order. A call to glViewport (or any other OpenGL command, for that matter) affects only the outcome of the commands that follow it, and nothing that comes before.
There's no need to implement custom round-robin buffering.
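So you can simply set the new viewport from the GLFW callback (or right after polling events) and keep issuing draw calls; every draw sees the viewport that was current when that draw was issued. A minimal illustration in C (the index count is a placeholder, and the usual context/VAO setup is omitted):

static void framebuffer_size_callback(GLFWwindow *win, int w, int h)
{
    (void)win;
    glViewport(0, 0, w, h);   /* affects only the draws issued after this point */
}

/* during setup */
glfwSetFramebufferSizeCallback(window, framebuffer_size_callback);

/* render loop */
while (!glfwWindowShouldClose(window)) {
    glfwPollEvents();                     /* may invoke the callback above */
    glClear(GL_COLOR_BUFFER_BIT);
    glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, 0);  /* placeholder draw */
    glfwSwapBuffers(window);
}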
This even covers things like textures and buffer objects (with the notable exception of persistently mapped buffer objects). For example, if you do the following sequence of operations:
glDrawElements(…); // (1)
glTexSubImage2D(GL_TEXTURE_2D, …);
glDrawElements(…); // (2)
then the OpenGL rendering model mandates that glDrawElements (1) uses the texture data of the bound texture object as it was before the call to glTexSubImage2D, and that glDrawElements (2) uses the data that was uploaded between (1) and (2).
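The same ordering guarantee holds for (non-persistently-mapped) buffer objects: an update issued between two draws is visible only to the second one. A sketch along the same lines (the buffer handle, sizes and data pointer are placeholders):

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glDrawArrays(GL_TRIANGLES, 0, vertex_count);                      /* (1) sees the old contents */
glBufferSubData(GL_ARRAY_BUFFER, 0, data_size, new_vertex_data);  /* update between the draws  */
glDrawArrays(GL_TRIANGLES, 0, vertex_count);                      /* (2) sees the new contents */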
Yes, this involves content tracking, implicit data copies, and a lot of other unpleasant things. Yes, it also likely means you're hitting a slow path.