
Reducing buffer object size


What I have now

#include <stdlib.h> /* malloc, free */
#include <GL/gl.h>  /* GL types; your loader header may differ */

#define QUAD_VERT_COUNT 4

#define QUAD_POS_COMP 3

typedef struct quad_pos
{
 GLfloat x, y, z;
}quad_pos;

#define SIZE_QUAD_POS (sizeof(quad_pos) * QUAD_VERT_COUNT)

static GLuint QUAD_BUFFER = 0;

void init_quad_buffer()
{
 quad_pos* pos_data = malloc(SIZE_QUAD_POS);

 pos_data[0].x = -1.0f;
 pos_data[0].y = -1.0f;
 pos_data[0].z = 0.0f;

 pos_data[1].x = 1.0f;
 pos_data[1].y = -1.0f;
 pos_data[1].z = 0.0f;

 pos_data[2].x = -1.0f;
 pos_data[2].y = 1.0f;
 pos_data[2].z = 0.0f;

 pos_data[3].x = 1.0f;
 pos_data[3].y = 1.0f;
 pos_data[3].z = 0.0f;

 QUAD_BUFFER = create_buffer(GL_ARRAY_BUFFER, GL_STATIC_DRAW, pos_data, SIZE_QUAD_POS);
 free(pos_data);
}

GLuint get_quad_buffer(void)
{
  return QUAD_BUFFER;
}

And drawing (part of it)

glBindBuffer(GL_ARRAY_BUFFER, get_quad_buffer());
glEnableVertexAttribArray(ss->attrib[0]); // attrib[0] is vertex pos
glVertexAttribPointer(ss->attrib[0], QUAD_POS_COMP, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, QUAD_VERT_COUNT);

Scaling, translation and rotation are achieved with matrices in the shaders, so yes, this buffer never changes; it is shared by every sprite.
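To sketch how the per-sprite transform can work with a fixed unit quad (the helper names `sprite_model_matrix` and `transform` are made up for illustration, not from the code above): build a column-major scale/translate matrix on the CPU, upload it per draw with `glUniformMatrix4fv`, and the quad vertices themselves never change.

```c
#include <assert.h>
#include <math.h>

/* Hypothetical helper: build a column-major 4x4 model matrix that
 * scales the unit quad to (w, h) and moves it to (x, y).
 * The same static quad buffer is reused for every sprite; only this
 * matrix (uploaded per draw, e.g. via glUniformMatrix4fv) changes. */
static void sprite_model_matrix(float m[16], float x, float y, float w, float h)
{
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  = w;    /* scale X */
    m[5]  = h;    /* scale Y */
    m[10] = 1.0f;
    m[12] = x;    /* translate X */
    m[13] = y;    /* translate Y */
    m[15] = 1.0f;
}

/* Apply the matrix to one quad corner, as the vertex shader would. */
static void transform(const float m[16], const float v[4], float out[4])
{
    for (int r = 0; r < 4; ++r)
        out[r] = m[r] * v[0] + m[4 + r] * v[1] + m[8 + r] * v[2] + m[12 + r] * v[3];
}
```

For example, the corner (-1, -1) of the unit quad, scaled to (2, 3) and moved to (10, 20), lands at (8, 17).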

But why do we need to use GLfloat for just -1.0 and 1.0? GLbyte should be enough.

 typedef struct quad_pos
 {
 GLbyte x, y, z;
 }quad_pos;

void init_quad_buffer()
{
 quad_pos* pos_data = malloc(SIZE_QUAD_POS);

 pos_data[0].x = -1;
 pos_data[0].y = -1;
 pos_data[0].z = 0;
 ....
}

Drawing

  ...
  glVertexAttribPointer(ss->attrib[0], QUAD_POS_COMP, GL_BYTE, GL_FALSE, 0, 0);
  glDrawArrays(GL_TRIANGLE_STRIP, 0, QUAD_VERT_COUNT);

Question 1: Do I need normalize set to GL_TRUE?
Question 2: GLclampf and GLfloat are both 4-byte floats, but color values go from 0.0 to 1.0, so if I put them in GLbyte too (val/256, so 255 for 1.0, 128 for 0.5, 0 for 0), do I need GL_TRUE for normalize in glVertexAttribPointer?
Question 3: Do I really need padding in vertex data/other data? Is adding a fictitious pos_data.g just so that sizeof(pos_data) == 16 good for the GPU?
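A quick aside on the mapping assumed in Question 2: with a normalized unsigned byte attribute, OpenGL divides by 255 (the type's maximum), not 256, so 255 maps exactly to 1.0 while 128 maps to about 0.502 rather than exactly 0.5. A minimal sketch of that conversion (the helper name is made up for illustration):

```c
#include <assert.h>
#include <math.h>

/* What OpenGL does to a normalized GL_UNSIGNED_BYTE attribute:
 * divide by 255, the maximum value of the type. */
static float normalize_ubyte(unsigned char c)
{
    return c / 255.0f;
}
```

So `normalize_ubyte(255)` is exactly 1.0f and `normalize_ubyte(0)` is 0.0f, but 128 does not land exactly on 0.5.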


Solution

  • But why do we need to use GLfloat for just -1.0 and 1.0? GLbyte should be enough.

    Please note this is not true in general; in most cases you will need a float for precision. And with so few values and such simple geometry, the odds are high that there is no reason at all to optimize to GLbyte in the first place. You likely have very few vertices, so why would you want to save storage on them? This sounds like a textbook example of premature optimization (I know, it's an overused term).

    Now, for your actual questions:

    1. No, not if you want the same behaviour. If normalize is GL_FALSE, the byte -1 is converted directly to -1.0f; if it is GL_TRUE, it becomes roughly -0.0079f (-1/127.0f under the modern signed-normalization convention). So if you want to keep the same scale, you don't want it normalized.
    2. GLclampf and GLfloat are indeed usually 4-byte floats. If you want to pass RGB colors through vertex attributes as bytes, then yes, you should normalize them, because OpenGL expects color components in the range [0.0f, 1.0f] (note that normalization divides by 255, not 256). But again: why not simply pass them as floats? What do you expect to gain? In a simple game you probably don't have enough colors to notice the difference, and in a non-simple game you're more likely to be using textures.
    3. Of this I am not sure. I know it was true for old GPUs (almost ten years back), but I don't know of any recent claims that padding still improves anything. In any case, the best-known alignment advice was to pack all vertex attributes for one vertex into (a multiple of) 32 bytes, and that was for ATI cards. Byte alignment might be required by some trickier features/extensions, but I don't think you need to worry about it just yet.
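The two conversions described in point 1 can be sketched in plain C (helper names are hypothetical; the normalized case follows the modern c/127 convention for signed bytes, while older GL versions used (2c+1)/255):

```c
#include <assert.h>
#include <math.h>

/* With normalize == GL_FALSE, a GL_BYTE attribute is simply cast to float. */
static float byte_unnormalized(signed char c)
{
    return (float)c;
}

/* With normalize == GL_TRUE, a signed byte is divided by 127 (modern GL
 * convention) and clamped so that -128 still maps into [-1, 1]. */
static float byte_normalized(signed char c)
{
    float f = c / 127.0f;
    return f < -1.0f ? -1.0f : f;
}
```

So a position written as -1 keeps its meaning only in the unnormalized case: `byte_unnormalized(-1)` is -1.0f, while `byte_normalized(-1)` is about -0.0079f.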