I understand Normalized Device Coordinates reasonably well, and when I use floats between -1.0 and 1.0 as vertex positions, I get output. However, when I want to use integers as the vertex position attribute, I get no rendering output. I have tried passing GL_INT and GL_TRUE to glVertexAttribPointer, but it doesn't work. For example:
std::vector<GLint> vex =
{
    0, 0, 0,
    4, 5, 0
};
glBufferData(GL_ARRAY_BUFFER, vex.size() * sizeof(GLint), vex.data(), GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_INT, GL_TRUE, 3 * sizeof(GLint), (void*)0);
// in the render loop
glBindVertexArray(VAO);
glDrawArrays(GL_LINES, 0, 2);
I use a basic vertex shader, as follows:
#version 330 core
layout (location = 0) in vec3 aPos;
void main()
{
    gl_Position = vec4(aPos.x, aPos.y, aPos.z, 1.0);
}
I assumed that GL_TRUE would normalize the integer positions into [-1.0, 1.0]. Maybe I'm missing something important. So how can I render my points using integer coordinates correctly?
I have read the glVertexAttribPointer() reference, but I still can't get what I want.
However, when I want to use integers as vertex's position attribute, I can't get any rendering output.
You get no output because the two points are too close together.
Signed normalized fixed-point integers represent numbers in the range [-1, 1]. The conversion from a signed normalized fixed-point value c to the corresponding floating-point value f is performed as

f = max( c / (2^(b-1) - 1), -1.0 )

where c is the integral value and b is the number of bits in the integral data format.
This means that for the data type GLint (b = 32, so the divisor is 2^31 - 1), the value 4 maps to the floating-point value 0.000000001862645 and 5 maps to 0.000000002328306. Both points end up practically at the origin, far too close together to produce a visible line.
Test your code by using GLbyte instead of GLint. The following code results in a diagonal line across the viewport:
std::vector<GLbyte> vex =
{
     127,  127, 0,
    -128, -128, 0
};
glBufferData(GL_ARRAY_BUFFER, vex.size() * sizeof(GLbyte), vex.data(), GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_BYTE, GL_TRUE, 0, (void*)0);