Tags: c++, opengl, shader, draw, egl

Unable to determine plane orientation for OpenGL draw


The following is the part of the code I am using to draw a rectangle. I can see the rectangle on the display, but I am confused about the quadrants and coordinates on the display plane.

int position_loc = glGetAttribLocation(ProgramObject, "vertex");
int color_loc = glGetAttribLocation(ProgramObject, "color_a");
GLfloat Vertices[4][4] = {
    -0.8f, 0.6f, 0.0f, 1.0f,
    -0.1f, 0.6f, 0.0f, 1.0f,
    -0.8f, 0.8f, 0.0f, 1.0f,
    -0.1f, 0.8f, 0.0f, 1.0f
};
GLfloat red[4] = {1, 0, 1, 1};
glUniform4fv(glGetUniformLocation(ProgramObject, "color"), 1, red);
PrintGlError();
glEnableVertexAttribArray(position_loc);
PrintGlError();
printf("\nAfter Enable Vertex Attrib Array");
glBindBuffer(GL_ARRAY_BUFFER, VBO);
PrintGlError();
glVertexAttribPointer(position_loc, 4, GL_FLOAT, GL_FALSE, 0, 0);
PrintGlError();
glBufferData(GL_ARRAY_BUFFER, sizeof Vertices, Vertices, GL_DYNAMIC_DRAW);
PrintGlError();
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
PrintGlError();

So, keeping in mind the above vertices:

GLfloat Vertices[4][4] = {
    x,  y,  p,  q,
    x1, y1, p1, q1,
    x2, y2, p2, q2,
    x3, y3, p3, q3,
};

What are p, q ... p1, q1 ...? On what basis are these last two components determined? And how do they affect x, y or x1, y1, and so on?


Solution

OpenGL works with a 3-dimensional coordinate system extended by a homogeneous coordinate. The values are usually denoted [x, y, z, w], with w being the homogeneous part. Before any projection, [x, y, z] describe the position of the point in 3D space; w is usually 1 for positions and 0 for directions.
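To make that convention concrete, here is a minimal sketch in plain C++ (the Vec4 struct is illustrative, not part of your code or the OpenGL API):

#include <cstdio>

struct Vec4 { float x, y, z, w; };

int main() {
    Vec4 position  = { -0.8f, 0.6f, 0.0f, 1.0f };  // a point in space: w = 1
    Vec4 direction = {  1.0f, 0.0f, 0.0f, 0.0f };  // a direction: w = 0
    std::printf("position w = %.0f, direction w = %.0f\n", position.w, direction.w);
}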

During rendering, OpenGL applies transformations (in the vertex shader), resulting in a new point [x', y', z', w']. The w component is needed here because it allows all transformations, especially translations and (perspective) projections, to be described as 4x4 matrices. Have a look at 1 and 2 for details about transformations.
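Here is a sketch of why w enables translation as a matrix multiply (the Vec4 and mul helpers below are hypothetical, written just for this illustration):

#include <cstdio>

struct Vec4 { float v[4]; };

// Multiply a row-major 4x4 matrix with a column vector.
static Vec4 mul(const float m[4][4], const Vec4& p) {
    Vec4 r = {};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r.v[i] += m[i][j] * p.v[j];
    return r;
}

int main() {
    // Translation by (2, 3, 0) expressed as a 4x4 matrix.
    const float T[4][4] = {
        {1, 0, 0, 2},
        {0, 1, 0, 3},
        {0, 0, 1, 0},
        {0, 0, 0, 1},
    };
    Vec4 point     = {{-0.8f, 0.6f, 0.0f, 1.0f}};  // w = 1: translation applies
    Vec4 direction = {{-0.8f, 0.6f, 0.0f, 0.0f}};  // w = 0: translation is ignored
    Vec4 p = mul(T, point);
    Vec4 d = mul(T, direction);
    std::printf("point     -> [%g, %g, %g, %g]\n", p.v[0], p.v[1], p.v[2], p.v[3]);
    std::printf("direction -> [%g, %g, %g, %g]\n", d.v[0], d.v[1], d.v[2], d.v[3]);
}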

Afterwards, clipping happens and the resulting vector is divided by its w component, giving so-called normalized device coordinates [x'/w', y'/w', z'/w', 1]. These NDC coordinates are what is actually used to draw to the screen. The first and second components (x'/w' and y'/w') are mapped to the viewport size to get the final pixel coordinates. The third component (z'/w', a.k.a. depth) is used during depth testing to determine which points are in front. The last coordinate no longer has any purpose here.
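A small numeric sketch of the perspective divide and the viewport mapping (the 800x600 viewport is an assumed example value, not something from your code):

#include <cstdio>

int main() {
    // Clip-space output of the vertex shader: [x', y', z', w'].
    const float x = 0.5f, y = -0.25f, z = 0.2f, w = 2.0f;

    // Perspective divide -> NDC in the range [-1, 1].
    const float ndc_x = x / w, ndc_y = y / w, ndc_z = z / w;

    // Viewport transform, assuming glViewport(0, 0, 800, 600):
    // NDC [-1, 1] maps to [0, width] x [0, height].
    const float px = (ndc_x * 0.5f + 0.5f) * 800.0f;
    const float py = (ndc_y * 0.5f + 0.5f) * 600.0f;

    std::printf("NDC = (%g, %g, %g), pixel = (%g, %g)\n", ndc_x, ndc_y, ndc_z, px, py);
}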

In your case, without any transformations or projections, you are drawing directly in NDC space, so z can be used to order triangles in depth, and w always has to be 1.
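For reference, a pass-through vertex shader matching the attribute name "vertex" from your code would look roughly like this (a sketch only; your actual shader source was not shown in the question):

const char* vertexShaderSrc =
    "attribute vec4 vertex;                               \n"
    "void main() {                                        \n"
    "    // No transformation: the incoming [x, y, z, w]  \n"
    "    // is passed straight through, so with w = 1 the \n"
    "    // vertices are interpreted directly as NDC.     \n"
    "    gl_Position = vertex;                            \n"
    "}                                                    \n";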