Tags: c++, opengl, qtopengl

OpenGL: Multiple rendering methods - when to use which?


I am new to OpenGL and I am following multiple tutorials. I noticed that there are multiple methods used to render objects, but I still don't get the difference between them, or when to use each of them.

For example, I am following this example that uses shaders to render a cube, and when I tried to render it the "normal" way - if that is the correct expression - nothing gets rendered. I always need to call shaderProgram.setAttributeArray(), shaderProgram.enableAttributeArray() and shaderProgram.disableAttributeArray().

But if I try to render it directly the other way - again, if that is the correct expression - using glBegin() and glEnd(), nothing works.

Also, I have another question about the shader concept itself: I don't really understand when I should use it and when I should not.

Here is my example:

#include "glwidget.h"

GlWidget::GlWidget(QWidget *parent)
    : QGLWidget(QGLFormat(/* Additional format options */), parent)
{
    alpha = 25;
    beta = -25;
    distance = 2.5;
}

void GlWidget::initializeGL()
{
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);

    qglClearColor(QColor(Qt::white));

    shaderProgram.addShaderFromSourceFile(QGLShader::Vertex, ":/vertexShader.vsh");
    shaderProgram.addShaderFromSourceFile(QGLShader::Fragment, ":/fragmentShader.fsh");
    shaderProgram.link();

    vertices << QVector3D(-0.5, -0.5,  0.5) << QVector3D( 0.5, -0.5,  0.5) << QVector3D( 0.5,  0.5,  0.5) // Front
             << QVector3D( 0.5,  0.5,  0.5) << QVector3D(-0.5,  0.5,  0.5) << QVector3D(-0.5, -0.5,  0.5)
             << QVector3D( 0.5, -0.5, -0.5) << QVector3D(-0.5, -0.5, -0.5) << QVector3D(-0.5,  0.5, -0.5) // Back
             << QVector3D(-0.5,  0.5, -0.5) << QVector3D( 0.5,  0.5, -0.5) << QVector3D( 0.5, -0.5, -0.5)
             << QVector3D(-0.5, -0.5, -0.5) << QVector3D(-0.5, -0.5,  0.5) << QVector3D(-0.5,  0.5,  0.5) // Left
             << QVector3D(-0.5,  0.5,  0.5) << QVector3D(-0.5,  0.5, -0.5) << QVector3D(-0.5, -0.5, -0.5)
             << QVector3D( 0.5, -0.5,  0.5) << QVector3D( 0.5, -0.5, -0.5) << QVector3D( 0.5,  0.5, -0.5) // Right
             << QVector3D( 0.5,  0.5, -0.5) << QVector3D( 0.5,  0.5,  0.5) << QVector3D( 0.5, -0.5,  0.5)
             << QVector3D(-0.5,  0.5,  0.5) << QVector3D( 0.5,  0.5,  0.5) << QVector3D( 0.5,  0.5, -0.5) // Top
             << QVector3D( 0.5,  0.5, -0.5) << QVector3D(-0.5,  0.5, -0.5) << QVector3D(-0.5,  0.5,  0.5)
             << QVector3D(-0.5, -0.5, -0.5) << QVector3D( 0.5, -0.5, -0.5) << QVector3D( 0.5, -0.5,  0.5) // Bottom
             << QVector3D( 0.5, -0.5,  0.5) << QVector3D(-0.5, -0.5,  0.5) << QVector3D(-0.5, -0.5, -0.5);
}

void GlWidget::resizeGL(int width, int height)
{
    if (height == 0) {
        height = 1;
    }

    pMatrix.setToIdentity();
    pMatrix.perspective(60.0, (float) width / (float) height, 0.001, 1000);

    glViewport(0, 0, width, height);
}

void GlWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    QMatrix4x4 mMatrix;
    QMatrix4x4 vMatrix;

    QMatrix4x4 cameraTransformation;
    cameraTransformation.rotate(alpha, 0, 1, 0);
    cameraTransformation.rotate(beta, 1, 0, 0);

    QVector3D cameraPosition = cameraTransformation * QVector3D(0, 0, distance);
    QVector3D cameraUpDirection = cameraTransformation * QVector3D(0, 1, 0);

    vMatrix.lookAt(cameraPosition, QVector3D(0, 0, 0), cameraUpDirection);

    shaderProgram.bind();
    shaderProgram.setUniformValue("mvpMatrix", pMatrix * vMatrix * mMatrix);

    // This code is able to draw the cube
    shaderProgram.setAttributeArray("vertex", vertices.constData());
    shaderProgram.enableAttributeArray("vertex");
    glDrawArrays(GL_TRIANGLES, 0, vertices.size());
    shaderProgram.disableAttributeArray("vertex");
    // end

    // This code is never able to draw the cube or anything
    glBegin(GL_TRIANGLES);
    for (int var = 0; var < vertices.size(); ++var) {
        glVertex3f(vertices[var][0],vertices[var][1],vertices[var][2]);
    }
    glEnd();
    // end

    shaderProgram.release();
}

Solution

OpenGL used to have what is called "immediate mode". In it, you would use glBegin() and glEnd() and, between them, specify your data (points, normals, texture coordinates) point by point. You would do this on every frame, so obviously it is very slow. This functionality has long been deprecated, but most graphics card drivers still support it so as not to break existing software. However, if you want to learn modern OpenGL, I would ignore any tutorial that has glBegin() in it. Today, you transfer the data to the GPU in one go (into something called a Vertex Buffer Object), then draw it with one command (with a Vertex Array Object describing the data layout), as in the sketch below.
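Here is a minimal sketch of that idea using Qt's QGLBuffer wrapper, to match the QGLShaderProgram code above. The vertexBuffer member is an assumption for illustration; you would declare it in glwidget.h next to shaderProgram:

// Sketch only: assumes a "QGLBuffer vertexBuffer;" member in GlWidget.

// In initializeGL(), after filling the vertices list: upload once.
vertexBuffer = QGLBuffer(QGLBuffer::VertexBuffer);
vertexBuffer.create();
vertexBuffer.bind();
vertexBuffer.allocate(vertices.constData(),
                      vertices.size() * sizeof(QVector3D));
vertexBuffer.release();

// In paintGL(): no per-frame copy of the vertex data, just a draw call.
shaderProgram.bind();
shaderProgram.setUniformValue("mvpMatrix", pMatrix * vMatrix * mMatrix);
vertexBuffer.bind();
shaderProgram.setAttributeBuffer("vertex", GL_FLOAT, 0, 3);
shaderProgram.enableAttributeArray("vertex");
glDrawArrays(GL_TRIANGLES, 0, vertices.size());
shaderProgram.disableAttributeArray("vertex");
vertexBuffer.release();
shaderProgram.release();

The point of the design is that the vertex data crosses the CPU-GPU boundary once, at initialization, instead of once per frame.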

Your other question was about shaders. Again, in the old days, OpenGL had a fixed-function pipeline. That means that you only supplied vertex (normal, ...) data and the graphics card went on and did its thing; you could not modify what it did with the data. In the modern world, some parts of the pipeline are programmable, meaning that you can change what those parts do by supplying your own programs - shaders. This is very useful, as there are many effects that would be impossible to achieve otherwise. If you don't supply your own shaders, the graphics card will mostly fall back to a default implementation for compatibility reasons, but you should definitely write your own (basic ones are just a couple of lines, as sketched below).
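As a sketch (the actual :/vertexShader.vsh and :/fragmentShader.fsh files are not shown in the question), a minimal pair matching the "vertex" attribute and "mvpMatrix" uniform used above could even be supplied inline instead of from resource files:

// Sketch only: inline equivalents of what the two shader files might contain.
shaderProgram.addShaderFromSourceCode(QGLShader::Vertex,
    "uniform mat4 mvpMatrix;\n"
    "attribute vec4 vertex;\n"
    "void main(void)\n"
    "{\n"
    "    gl_Position = mvpMatrix * vertex;\n"   // transform each vertex into clip space
    "}");
shaderProgram.addShaderFromSourceCode(QGLShader::Fragment,
    "void main(void)\n"
    "{\n"
    "    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);\n"   // colour every fragment solid red
    "}");
shaderProgram.link();

The vertex shader decides where each vertex ends up on screen; the fragment shader decides what colour each covered pixel gets. Lighting, texturing and other effects are all variations on these two small programs.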

All in all, if you start learning modern OpenGL (VBOs, VAOs, shaders), it might take a little longer to get through the basics, but if you start with the legacy stuff, one day you will have to leave it behind and learn modern OpenGL from the beginning anyway.

Edit: It's usually not a good idea to mix modern and legacy code. You might get it working, but it's just not worth the pain. (That mix is most likely why your glBegin()/glEnd() block draws nothing: while your shader program is bound, glVertex3f() feeds the legacy built-in gl_Vertex, not the generic "vertex" attribute the shader actually reads, so the shader never sees your positions.)