Tags: c++, opengl, glsl, vertex-shader, vertex-array-object

Get old-style OpenGL code to work in GLSL


I am trying to draw this pattern in OpenGL:

[image: the target pattern]

To get this, I created the pattern like this:

vector< vector<DataPoint> > datas;              // 4096 rays x 4096 samples per ray
float Intensitytemp=0;
float xPos=0, yPos=0, angleInRadians=0;
for (float theta = 0.0f; theta < 4096; theta += 1.f)
{
    vector<DataPoint> temp;
    angleInRadians = 2 * M_PI*theta / 4096;     // angle of this ray
    for (float r = 0; r < 4096; r += 1.f)
    {
        xPos = cos(angleInRadians)*r / 4096;    // position normalized to the unit disc
        yPos = sin(angleInRadians)*r / 4096;
        Intensitytemp = ((float)((int)r % 256)) / 255;  // sawtooth: repeats every 256 radial steps
        DataPoint dt;
        dt.x = xPos;
        dt.y = yPos;
        dt.Int = Intensitytemp;
        temp.push_back(dt);
    }
    datas.push_back(temp);
}

and I am drawing the pattern as:

glBegin(GL_POINTS);     // 4096 x 4096 = ~16.8 million points per frame
    for (int x = 0; x < 4096; x++)
        for (int y = 0; y < 4096; y++)
        {
            xPos = datas[x][y].x;
            yPos = datas[x][y].y;
            Intensitytemp = datas[x][y].Int;
            glColor4f(0.0f, Intensitytemp, 0.0f, 1.0f);
            glVertex3f(xPos, yPos, 0.0f);
        }
glEnd();

If I create the data directly inside the glBegin()/glEnd() block it works faster, but in either case I believe the better way is to do all of this in GLSL. I don't understand the logic behind modern OpenGL well.

I tried to create a vertex buffer array and color arrays but could not get it to work. The problem was not transferring the arrays to the graphics card; I am getting stack overflows from the arrays themselves. That is a question for another topic, though. What I wonder here is whether it is possible to do this task entirely in GLSL code (the code in the .vert file) without transferring these huge arrays to the graphics card.


Solution

    1. render a quad covering the screen

      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
      
      glUseProgram(prog_id);    // prog_id = compiled and linked GLSL program
      
      // no transformations needed, the quad is passed directly in NDC <-1,+1>
      glMatrixMode(GL_PROJECTION);
      glLoadIdentity();
      glMatrixMode(GL_TEXTURE);
      glLoadIdentity();
      glMatrixMode(GL_MODELVIEW);
      glLoadIdentity();
      
      glDisable(GL_DEPTH_TEST);
      glDisable(GL_TEXTURE_2D);
      
      // single quad covering the whole screen
      glBegin(GL_QUADS);
      glColor3f(1,1,1);
      glVertex2f(-1.0,-1.0);
      glVertex2f(-1.0,+1.0);
      glVertex2f(+1.0,+1.0);
      glVertex2f(+1.0,-1.0);
      glEnd();
      
      glUseProgram(0);
      glFlush();
      SwapBuffers(hdc);
      

      Also see the complete GL+GLSL+VAO/VBO C++ example on how to get GLSL working (even the new stuff).

      Do not forget to set your GL viewport to a square area!
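
      The prog_id above is the compiled and linked GLSL program made from the shaders in steps 2 and 3. If you do not have that part yet, a minimal sketch of building it (most error checking trimmed, and it assumes the GL 2.0+ entry points are already loaded, e.g. via GLEW or glad) could look like this:

      // build prog_id from the vertex/fragment sources of steps 2 and 3
      const char* vert_src =
          "varying vec2 pos;"
          "void main(){ pos=gl_Vertex.xy; gl_Position=gl_Vertex; }";
      const char* frag_src =
          "varying vec2 pos;"
          "void main(){"
          " vec4 c=vec4(0.0,0.0,0.0,1.0);"
          " float r=length(pos);"
          " if (r<=1.0){ r*=16.0; c.g=r-floor(r); }"
          " gl_FragColor=c;"
          "}";
      
      GLuint vs = glCreateShader(GL_VERTEX_SHADER);
      glShaderSource(vs, 1, &vert_src, NULL);
      glCompileShader(vs);
      
      GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
      glShaderSource(fs, 1, &frag_src, NULL);
      glCompileShader(fs);
      
      GLuint prog_id = glCreateProgram();
      glAttachShader(prog_id, vs);
      glAttachShader(prog_id, fs);
      glLinkProgram(prog_id);
      
      GLint ok = 0;
      glGetProgramiv(prog_id, GL_LINK_STATUS, &ok);
      if (!ok)
          {
          char log[1024];
          glGetProgramInfoLog(prog_id, sizeof(log), NULL, log);
          // inspect the log here
          }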

    2. in the vertex shader pass the vertex coordinates to the fragment shader

      no need for matrices... the pos is in range <-1.0,+1.0>, which is fine for the fragment shader.

      // Vertex
      varying vec2 pos;
      void main()
          {
          pos=gl_Vertex.xy;
          gl_Position=gl_Vertex;
          }
      
    3. in the fragment shader compute the distance from the center (0,0) and compute the final color from it

      // Fragment
      varying vec2 pos;
      void main()
          {
          vec4 c=vec4(0.0,0.0,0.0,1.0);
          float r=length(pos);    // radius = distance to (0,0)
          if (r<=1.0)             // inside disc?
              {
              r=16.0*r;           // your range 16=4096/256
              c.g=r-floor(r);     // use only the fractional part ... %256
              }
          gl_FragColor=c;
          }
      

      Here is the result:

      [image: example output]
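
      If you want to double check that this reproduces the intensity formula ((int)r % 256)/255 from the question, here is a quick CPU-side comparison (nothing OpenGL-specific; note the shader effectively divides by 256 instead of 255, a difference below half a percent):

      #include <cmath>
      #include <cstdio>
      
      int main()
          {
          for (int r = 0; r < 4096; r += 511)             // a few sample radii
              {
              float cpu    = float(r % 256) / 255.0f;     // intensity from the question
              float rn     = float(r) / 4096.0f;          // normalized radius, as in the shader
              float x      = 16.0f * rn;                  // 16 = 4096/256
              float shader = x - std::floor(x);           // fract(16*rn) = what c.g gets
              std::printf("r=%4d  cpu=%.4f  shader=%.4f\n", r, cpu, shader);
              }
          return 0;
          }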

    4. How GLSL works

      You can think of the fragment shader as a color computation engine for polygon filling. It works like this:

      The GL primitive is passed by GL calls to the vertex shader, which is responsible for transformations and pre-computing constants. The vertex shader is called once for each glVertex call in old-style GL.

      Once a supported primitive (set by glBegin in old-style GL) has been fully passed (like a TRIANGLE, QUAD, ...), the gfx card starts rasterization. This is done by HW interpolators calling the fragment shader for each "pixel" to fill. As the "pixel" contains much more data than just color, and can also be discarded, it is called a fragment instead. Its sole purpose is to compute the target color of the screen pixel it represents. You cannot change its position, only its color. That is the biggest difference between the old GL and the GLSL approach: you cannot change the shape or position of objects, only how they are colored/shaded, hence the name shaders. So if you need to generate a specific pattern or effect, you usually render some primitive covering the involved area with GL and recolor it by computation, mostly inside the fragment shader.

      Obviously the Vertex shader is not called as often as the Fragment shader in most cases, so move as many of the computations as you can to the Vertex shader to improve performance (a small example follows below).

      Newer GLSL versions also support geometry and tessellation shaders, but that is a chapter of its own and not important for you now (you need to get used to Vertex/Fragment first).
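
      As a deliberately trivial illustration of moving work to the Vertex shader: the 16.0 scale from step 3 can be applied once per vertex, and the interpolators then hand the already scaled coordinate to the fragment shader. The sources are shown here as the C++ string literals you would pass to glShaderSource (variable names are arbitrary); the saving is negligible in such a tiny shader, it is only meant to show the idea:

      // same output as steps 2+3, but the 16x scale is done per vertex
      const char* vert_src2 = R"(
          varying vec2 pos;
          void main()
              {
              pos=gl_Vertex.xy*16.0;        // pre-scaled per vertex
              gl_Position=gl_Vertex;
              }
          )";
      const char* frag_src2 = R"(
          varying vec2 pos;
          void main()
              {
              vec4 c=vec4(0.0,0.0,0.0,1.0);
              float r=length(pos);          // already in range 0..16 inside the disc
              if (r<=16.0) c.g=r-floor(r);  // fractional part, same pattern as before
              gl_FragColor=c;
              }
          )";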

    [Notes]

    A single if in such a simple shader is not a big problem. The main speed increase comes from passing a single quad instead of 4096x4096 (~16.8 million) points. The shader code is fully parallelized by the gfx HW directly. That is why the architecture is the way it is, limiting some of what can be done efficiently inside a shader in comparison to standard CPU/MEM architectures.

    [Edit1]

    You can often avoid the if by clever math tricks like this:

    // Fragment
    varying vec2 pos;
    void main()
        {
        vec4 c=vec4(0.0,0.0,0.0,1.0);
        float r=length(pos);            // radius = distance to (0,0)
        r*=max(1.0+floor(1.0-r),0.0);   // if (r>1.0) r=0.0;
        r*=16.0;                        // your range 16=4096/256
        c.g=r-floor(r);                 // use only the fractional part ... %256
        gl_FragColor=c;
        }
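
    If the trick is not obvious: the factor max(1.0+floor(1.0-r),0.0) is 1.0 for r in (0,1] and 0.0 for any r>1.0 (at exactly r=0.0 it evaluates to 2.0, which is harmless because r is zero anyway), so the multiply behaves just like the branch. A quick CPU-side check, nothing OpenGL-specific:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    
    int main()
        {
        const float rs[] = { 0.0f, 0.3f, 0.999f, 1.0f, 1.001f, 2.5f };
        for (float r : rs)
            {
            // branchless version used in the shader above
            float branchless = r * std::max(1.0f + std::floor(1.0f - r), 0.0f);
            // the if based version it replaces
            float branched   = (r > 1.0f) ? 0.0f : r;
            std::printf("r=%.3f  branchless=%.3f  if=%.3f\n", r, branchless, branched);
            }
        return 0;
        }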