Preallocating a vertex buffer on the GPU, then filling it with vertex data and drawing it, to replicate legacy OpenGL functions like glVertex2f, glNormal2f, etc. and draw shapes with them.
Sending the vertex data of all the primitive shapes to the GPU at once at the start of the program, then drawing the appropriate part of it (transformed in the vertex shader) when drawing each shape.
These are all the ways I could think of, but I'm not sure how optimal either of these approaches is. Do games and game engines use a similar approach, or is there an even better approach to this?
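For clarity, here is roughly how I picture each approach on the buffer side. This is just a sketch, not working code: Vertex, MAX_VERTICES, allVertices, buildFrameGeometry, mvpLocation and so on are placeholder names, and the VAO/attribute setup is omitted.

    struct Vertex { float x, y, z; };   // stand-in vertex layout

    // Approach 1: preallocate a dynamic VBO once, then re-upload the vertices every
    // frame, emulating the old glVertex2f/glNormal2f style of building geometry on the CPU.
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, MAX_VERTICES * sizeof(Vertex), NULL, GL_DYNAMIC_DRAW);

    // every frame:
    std::vector<Vertex> verts = buildFrameGeometry();      // CPU-side "immediate mode"
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, verts.size() * sizeof(Vertex), verts.data());
    glDrawArrays(GL_TRIANGLES, 0, (GLsizei)verts.size());

    // Approach 2: upload all primitive shapes once at startup, then draw sub-ranges of
    // the buffer and position each shape with a matrix in the vertex shader.
    glBufferData(GL_ARRAY_BUFFER, allVertices.size() * sizeof(Vertex),
                 allVertices.data(), GL_STATIC_DRAW);

    // per shape, per frame:
    glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, mvpForThisShape);
    glDrawArrays(GL_TRIANGLES, shapeFirstVertex, shapeVertexCount);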
Your two approaches are more alike than you make them out to be. Both of them "preallocate a buffer on the GPU", both "fill it with data", and both have a vertex shader that will transform the data using an MVP matrix supplied from a uniform.
So far it seems that the only difference between your two approaches is that the 1st uploads your vertices every frame, while the 2nd uploads them only once.
If your models are indeed static -- then sure, by all means, go with the 2nd approach. If you have some sort of animation that cannot be accomplished in the vertex shader (e.g. you are drawing a GUI that reflows as the window size changes), then the 2nd approach is simply not applicable.
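For reference, the vertex shader both approaches end up sharing is little more than a multiply by that uniform matrix; something along these lines (the GLSL is kept in a C++ string literal here purely for presentation, and names like uMvp and aPosition are arbitrary):

    const char* vertexShaderSrc = R"glsl(
        #version 330 core
        layout(location = 0) in vec3 aPosition;
        uniform mat4 uMvp;                 // model-view-projection, set per draw call
        void main() {
            gl_Position = uMvp * vec4(aPosition, 1.0);
        }
    )glsl";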
@CheeseMan69: How would your shapes move if the buffer isn't dynamic?
@YakovGalka I'll transform them with a matrix in the vertex shader
I guess you imagine having a single primitive in your buffer and then calling glUniform and glDrawArrays multiple times per frame? That would indeed be slow: you would essentially be submitting a different MVP matrix for each primitive. As a rule of thumb, you want to draw many primitives with a single OpenGL call. To accomplish that you'll need to move your matrices to a UBO, SSBO, or an instanced vertex attribute. However, an MVP matrix is going to take more bandwidth and computation than submitting the correct vertices of each primitive in the first place.
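To give you an idea of the instanced-attribute route, something along these lines lets one call draw every copy of a primitive. Attribute locations 4 through 7, the matching mat4 input in the shader, and names like instanceMatrices, instanceCount and verticesPerPrimitive are assumptions for the sketch, not prescriptions:

    // Second VBO holding one model matrix (16 floats) per instance.
    GLuint instanceVbo;
    glGenBuffers(1, &instanceVbo);
    glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
    glBufferData(GL_ARRAY_BUFFER, instanceCount * 16 * sizeof(float),
                 instanceMatrices, GL_DYNAMIC_DRAW);

    // A mat4 attribute occupies four consecutive vec4 slots, each advancing once per
    // instance rather than once per vertex.
    for (int i = 0; i < 4; ++i) {
        glEnableVertexAttribArray(4 + i);
        glVertexAttribPointer(4 + i, 4, GL_FLOAT, GL_FALSE, 16 * sizeof(float),
                              (const void*)(uintptr_t)(4 * i * sizeof(float)));
        glVertexAttribDivisor(4 + i, 1);
    }

    // One draw call renders instanceCount copies of the primitive.
    glDrawArraysInstanced(GL_TRIANGLES, 0, verticesPerPrimitive, instanceCount);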