Tags: java, c++, opengl, lwjgl

Use of Hexadecimal colours in OpenGL


I've got a theoretical question: instead of using the standard RGBA colour model in OpenGL array buffers, why not cut 12 bytes per vertex and replace the four colour floats with a single integer that holds the colour in hexadecimal format? In the shader you can then convert that back into a vec4.

So what I mean is, replace this:

glVertexAttribPointer(COL_INDEX, 4 * 4, GL_FLOAT, false, stride, offset);

with this:

glVertexAttribPointer(COL_INDEX, 1 * 4, GL_INT, false, stride, offset);
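
For concreteness, I'd build that packed integer once on the CPU, something like this (just a sketch; the helper name and the 0xAARRGGBB byte order are arbitrary choices of mine):

    // Pack four 8-bit channels (each 0..255) into one int as 0xAARRGGBB.
    // Whatever order is used here must match the unpacking in the shader.
    static int packColour(int r, int g, int b, int a) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }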

I've tried to look everywhere online and haven't found anything about this. Also, are there any performance benefits to saving those 12 bytes? Thanks for your time.


Solution

First of all, using 4*4 as the size argument for glVertexAttribPointer is invalid. Valid sizes are 1 to 4, and 4 is exactly what you want here, as you need 4 channels to encode RGBA vectors.

The second version is not really useful. If we ignore the bogus 1*4 in the same way and use only one channel, it could work in principle, but glVertexAttribPointer cannot be used to set up integer attributes. What this call does is set up a float attribute in the shader while the data format in the buffer is GL_INT. You will lose precision, because not all 2^32 integer values can be represented exactly as 32-bit floats, and that will totally screw up the results.
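
You can see this concretely with a quick Java check: 2^24 + 1 is the first integer that a 32-bit float cannot represent exactly.

    int packed = (1 << 24) + 1;                   // 16777217 = 0x01000001, a valid packed colour
    float asFloat = (float) packed;               // rounds to 16777216.0f
    System.out.println(packed == (int) asFloat);  // prints: false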

You could use glVertexAttribIPointer (note the I in the middle) to set up a true integer attribute and then unpack the channels yourself in the shader, as sketched below.
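
A sketch of that route (LWJGL-style Java; COL_INDEX, stride and offset as in the question, and I'm assuming the colour is stored as an unsigned 32-bit value in the 0xAARRGGBB order from above):

    // Integer attribute: the data reaches the shader as a uint, unconverted.
    GL30.glVertexAttribIPointer(COL_INDEX, 1, GL11.GL_UNSIGNED_INT, stride, offset);

    // Matching vertex shader code (GLSL 1.30+), unpacking by hand:
    String unpackGlsl =
        "in uint packedColour;\n" +
        "vec4 unpackColour(uint c) {\n" +
        "    return vec4((c >> 16u) & 0xFFu, (c >>  8u) & 0xFFu,\n" +
        "                 c         & 0xFFu, (c >> 24u) & 0xFFu) / 255.0;\n" +
        "}\n";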

However, you don't need such a complex operation. All you need is

    glVertexAttribPointer(COL_INDEX, 4, GL_UNSIGNED_BYTE, GL_TRUE, stride, offset);

to use 4 separate bytes as a vec4 attribute which is automatically normalized to the [0,1] range when accessed in the shader (this is the purpose of setting the normalized argument of this function to GL_TRUE).
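
Put together, an interleaved layout with one byte per channel could look like this (a sketch assuming LWJGL, a 3-float position per vertex, and illustrative values):

    int vertexCount = 3;                       // e.g. one triangle
    int stride = 3 * Float.BYTES + 4;          // 12 bytes position + 4 bytes colour
    ByteBuffer vertices = BufferUtils.createByteBuffer(vertexCount * stride);
    vertices.putFloat(0f).putFloat(0f).putFloat(0f);                       // position
    vertices.put((byte) 255).put((byte) 0).put((byte) 0).put((byte) 255);  // opaque red
    // ... remaining vertices are filled the same way ...

    // Colour attribute: 4 unsigned bytes, normalized ('true' is the Java
    // counterpart of GL_TRUE), starting right after the 12-byte position.
    GL20.glVertexAttribPointer(COL_INDEX, 4, GL11.GL_UNSIGNED_BYTE, true, stride, 3 * Float.BYTES);

In the shader this is then just a plain vec4 input whose components already arrive in [0,1], so no unpacking code is needed at all.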