Tags: c, opengl, game-engine, glut, opengl-compat

Optimising the rendition of quads using GLUT?


I'm making a 3D voxel engine, similar to Minecraft. I have some world generation and chunk logic working, but with a render distance of 12 chunks (which seems fairly typical for Minecraft), there is the potential of upwards of 150,000 faces needing to be rendered (2 * 12 * 12 * 16 * 16 * 2). Before trying to optimize the engine as a whole, I ran a test where I just rendered one face 150,000 times. In theory, since the points don't have to be recalculated in 3D space each time, this should be about the least computationally expensive rendering the engine would ever have to do. Nevertheless, running the following

glBegin(GL_QUADS);
glColor3f(1, 0, 0);
for (int i = 0; i < 150000; i++) {
    glVertex3fv(renderp1);
    glVertex3fv(renderp2);
    glVertex3fv(renderp3);
    glVertex3fv(renderp4);
}
glEnd();

Even when there's no texture and the points are all the same, I still get a very poor fps, which makes the engine unusable.

I know modern games have meshes with upwards of 100,000 polygons and run fantastically, which makes me wonder how this code here is so slow. Is rendering using this technique a horrible way to go about doing this? How could I achieve such a render?


Solution

  • The first thing you should do (before any of the shader stuff) is to stop using glBegin/glEnd and start using glDrawArrays or glDrawElements.

    e.g.

    // define data structures
    
    struct vec3 { GLfloat x, y, z; };
    struct vertex_t {
        vec3 position, color;
    };
    
    // define data (just a single triangle with RGB colors)
    static const vertex_t vertices[] = {
        { { 0.0f,  0.5f, 0.0f }, { 1, 0, 0 } },
        { { 0.5f, -0.5f, 0.0f }, { 0, 1, 0 } },
        { { -0.5f, -0.5f, 0.0f }, { 0, 0, 1 } }
    };
    
    ...
    
    // setup the arrays
    glVertexPointer(3, GL_FLOAT, sizeof(vertex_t), (char*)vertices + offsetof(vertex_t, position) );
    glColorPointer(3, GL_FLOAT, sizeof(vertex_t), (char*)vertices + offsetof(vertex_t, color) );
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    
    ...
    
    // draw
    glDrawArrays(GL_TRIANGLES, 0, 3);
    

    This is OpenGL 1.1 stuff. It can be further improved with a VBO (Vertex Buffer Object), which requires OpenGL 1.5:

    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    // with the VBO bound, the pointer arguments to glVertexPointer and
    // glColorPointer become byte offsets into the buffer instead of
    // client-side addresses (nullptr only happens to work for the
    // position attribute, which sits at offset 0)
    
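With the buffer bound, the last argument to each pointer call is no longer a client-side address but a byte offset into the VBO, which is conveniently expressed with `offsetof`. A checkable sketch of that layout (the GL calls are shown as comments since they need a live GL context; `vertex_t` matches the struct above):

```cpp
#include <cstddef>   // offsetof

struct vec3 { float x, y, z; };            // GLfloat is float
struct vertex_t { vec3 position, color; }; // interleaved layout from above

// Once the VBO is bound, the pointer arguments become byte offsets:
//   glVertexPointer(3, GL_FLOAT, sizeof(vertex_t),
//                   (const void*)offsetof(vertex_t, position));
//   glColorPointer(3, GL_FLOAT, sizeof(vertex_t),
//                  (const void*)offsetof(vertex_t, color));
```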

    and we haven't even touched shaders yet.

    For OpenGL 2.x style shaders, the above code can stay as is, or you can further "modernize" it by replacing glVertexPointer/glColorPointer with glVertexAttribPointer, and glEnableClientState with glEnableVertexAttribArray, to make "modern OpenGL" people happy.
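The same interleaved layout carries over unchanged; only the setup calls differ. A sketch, assuming the shader binds position to attribute location 0 and color to location 1 (those locations are an assumption for illustration, e.g. set via glBindAttribLocation):

```cpp
// generic-attribute setup replacing the fixed-function pointer calls;
// locations 0 and 1 are assumed to match the shader program
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(vertex_t),
                      (const void*)offsetof(vertex_t, position));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(vertex_t),
                      (const void*)offsetof(vertex_t, color));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
```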

    But just using glDrawArrays, OpenGL 1.1 style, should be enough to resolve the performance problem. This way you don't call glVertex/glColor 100,000 times; a single call to glDrawArrays draws all 100,000 vertices at once (and with a VBO, the data is already in GPU memory).

    And oh, quads have been deprecated since OpenGL 3.0. We're supposed to build everything from triangles.
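For a voxel engine this just means each quad face becomes two triangles sharing a diagonal. With indexed drawing (glDrawElements) the four corner vertices are still stored only once; the index buffer simply repeats two of them. A minimal sketch of the index pattern (my own illustration, not from the original answer):

```cpp
// Build the 6 indices for quad q, whose corners are vertices
// 4q..4q+3 in counter-clockwise order: two triangles sharing
// the 0-2 diagonal.
void quad_to_triangles(unsigned q, unsigned out[6]) {
    unsigned v = 4 * q;                 // first corner vertex of this quad
    unsigned pattern[6] = { 0, 1, 2,    // first triangle
                            0, 2, 3 };  // second triangle
    for (int i = 0; i < 6; i++)
        out[i] = v + pattern[i];
}
// The resulting array is what you would hand to glDrawElements with
// GL_TRIANGLES and GL_UNSIGNED_INT: 6 indices per face instead of 4,
// but no duplicated vertex data.
```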