Tags: math, opengl, glsl, projection

Projection theory? (Implemented in GLSL)


OpenGL 3.x, because I don't want to be too far behind in tech.

First of all, yes, I know it's a lot. I am almost certain that the vec3 transform(vec3) function is fine; if nothing else, I know that it doesn't contain the problem I'm coming here for.

The bit of code I'm having problems with is (or should be) in the vec3 project(vec3) function. If I'm looking directly at, say, a box, it looks fine. If I turn the camera a bit so the box moves closer to a side of the screen (peripheral vision), my happy box starts becoming a rectangle. While that is something I could live with for the game I'm putting it into, it's annoying.

The basic theory behind the projection is: you have the point (x, y, z), you find the angles between it and the origin (where the camera is), and you project it onto a plane that is nearz distance out. Finding the angles is a matter of angleX = atan(x/z) and angleY = atan(y/z). Using those two angles, you project the point onto the near plane with point = tan(angle) * nearz. You then find the outer edge of the screen with edgeY = tan(fovy) * nearz and edgeX = tan(fovy * aspect) * nearz, and find the screen point with screen = point/edge.
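A sketch of that theory as plain Python (CPU-side, so the steps are easy to poke at; the names mirror the shader uniforms, and fovy is assumed here to be the half-angle of view in radians):

```python
import math

# CPU-side sketch of the angle-based projection described above.
def project_angular(x, y, z, fovy=math.radians(30.0), aspect=1.0, nearz=0.01):
    angle_x = math.atan(x / z)            # angle to the point, horizontally
    angle_y = math.atan(y / z)            # angle to the point, vertically
    px = math.tan(angle_x) * nearz        # intersect the ray with the near plane
    py = math.tan(angle_y) * nearz        # (note tan(atan(t)) collapses to t)
    edge_x = math.tan(fovy * aspect) * nearz   # outer edge of the screen
    edge_y = math.tan(fovy) * nearz
    return px / edge_x, py / edge_y       # normalized screen coordinates
```

Note that tan(atan(x/z)) is simply x/z, so the angles cancel out of the point computation entirely; only the edge computation actually depends on tan.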

As a basic optimization, I had simply taken screen = angle/fov, but I removed that in an attempt to fix this.

Is anything wrong with the theory of my projection function? Here's the implementation:

#version 330

uniform vec3 model_location = vec3(0.0, 0.0, 0.0);
uniform vec3 model_rotation = vec3(0.0, 0.0, 0.0);
uniform vec3 model_scale    = vec3(1.0, 1.0, 1.0);

uniform vec3 camera_location = vec3(0.0, 0.0, 0.0);
uniform vec3 camera_rotation = vec3(0.0, 0.0, 0.0);
uniform vec3 camera_scale    = vec3(1.0, 1.0, 1.0);

uniform float fovy   =   60.0;
uniform float nearz  =    0.01;
uniform float farz   = 1000.0;
uniform float aspect =    1.0;

vec3 transform(vec3 point)
{
    vec3 translate = model_location - camera_location;
    vec3 rotate    = radians(model_rotation);
    vec3 scale     = model_scale    / camera_scale;

    vec3 s = vec3(sin(rotate.x), sin(rotate.y), sin(rotate.z));
    vec3 c = vec3(cos(rotate.x), cos(rotate.y), cos(rotate.z));

    float sy_cz = s.y * c.z;
    float sy_sz = s.y * s.z;
    float cx_sz = c.x * s.z;

    vec3 result;
    result.x = ( point.x * ( ( c.y * c.z ) * scale.x ) ) + ( point.y * ( ( ( -cx_sz )    + ( s.x * sy_cz ) )    * scale.y ) ) + ( point.z * ( ( ( -s.x * s.z ) + ( c.x * sy_cz ) ) * scale.z ) ) + translate.x;
    result.y = ( point.x * ( ( c.y * s.z ) * scale.y ) ) + ( point.y * ( ( ( c.x * c.z ) + ( s.x * sy_sz ) )    * scale.y ) ) + ( point.z * ( ( ( -s.x * c.z ) + ( c.x * sy_sz ) ) * scale.z ) ) + translate.y;
    result.z = ( point.x * ( ( -s.y )      * scale.x ) ) + ( point.y * ( (   s.x * c.y )                        * scale.y ) ) + ( point.z * ( (    c.x * c.y )                     * scale.z ) ) + translate.z;

    return result;
}

vec4 project(vec3 point)
{
    vec4 result = vec4(0.0);

    vec3 rotation = radians(camera_rotation);

    result.x = ( atan(point.x/point.z) - rotation.y );
    result.y = ( atan(point.y/point.z) - rotation.x );
    result.z = point.z/(farz - nearz);
    result.w = 1.0;

    result.x = tan(result.x) * nearz;
    result.y = tan(result.y) * nearz;

    vec2 bounds = vec2( tan(fovy * aspect) * nearz,
                        tan(fovy) * nearz );

    result.x = result.x / bounds.x;
    result.y = result.y / bounds.y;

    if (camera_rotation.z == 0)
        return result;

    float dist = sqrt( (result.x*result.x) + (result.y*result.y) );
    float theta = atan(result.y/result.x) + rotation.z;
    result.x = sin(theta) * dist;
    result.y = cos(theta) * dist;

    return result;
}

layout(location = 0) in vec3 vertex_position;
layout(location = 1) in vec2 texCoord;

out vec2 uvCoord;

void main()
{
    uvCoord = texCoord;
    vec4 pos = project( transform(vertex_position) );

    if (pos.z < 0.0)
        return;
    gl_Position = pos;
}

To answer a few anticipated questions:

Q: Why not use GLM/some other mathematics lib?

A:

-I tried a while ago. My "Hello world!" triangle was stuck in the center of the screen; using transformation matrices didn't move it back, forward, scale it, or anything.

-Because learning how to figure things out for yourself is important. Doing this means that I learn how to tackle things like this, while still having something to fall back on if everything gets out of hand. (There's this fool's justification.)

Q: Why not use matrices?

A:

-Those hate me too.

-I'm doing it in a new way. If I used matrices, I would be doing it exactly how every tutorial says to do it, instead of figuring it out for myself.

Tried sources:

http://ogldev.atspace.co.uk/index.html

http://www.swiftless.com/opengltuts/opengl4tuts.html

Copied, letter for letter, the GLSL shaders (vertex & fragment) out of the "OpenGL Shading Language, Third Edition", "Emulating the OpenGL Fixed Functionality", pg. 288-293.

Tried each multiple times, and tinkered with each to the point of insanity. Trying to program a war game, I got a wire-frame box to project into a peace symbol with one.

Edit:

The problem turned out to be the use of polar coordinates, as pointed out by datenwolf. A better equation for the sake of projection, using less advanced math, was c = zNear * (p.x/p.z), taken from the idea that the two triangles, the projected point's triangle and the given point's triangle, are proportional and, as a result, share the same angle.

Assuming X and Y are given for the point that is to be projected, and the sides of the proportional triangle are labeled A and C respectively: you can take the equations atan(Y/X) = angle and atan(C/A) = angle, and from that atan(Y/X) = atan(C/A), which then becomes Y/X = C/A, finishing with C = A * (Y/X), where A is the distance to the near plane and C is the screen coordinate in the Y direction.
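The similar-triangles version, sketched in Python (hypothetical helper name; nearz plays the role of A, and z is the depth along the view axis):

```python
import math

# Similar-triangles projection: the projected point and the original point
# lie on the same ray from the camera, so offset/depth is the same for both
# triangles. Screen coordinate = near-plane distance * (offset / depth).
def project_point(x, y, z, nearz=0.01):
    sx = nearz * (x / z)
    sy = nearz * (y / z)
    return sx, sy
```

This computes exactly what the tan(atan(...)) round trip in the original shader computes, without any trigonometry.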


Solution

  • -Those hate me too.

    Matrices are your friends. Learn to use them.

    -I'm doing it in a new way. If I used matrices, I would be doing it exactly how every tutorial says to do it, instead of figuring it out for myself.

    Your way is bad. Transformations don't commute, and you're locking yourself into a very rigid framework. Also, this:

       result.x = ( point.x * ( ( c.y * c.z ) * scale.x ) ) + ( point.y * ( ( ( -cx_sz )    + ( s.x * sy_cz ) )    * scale.y ) ) + ( point.z * ( ( ( -s.x * s.z ) + ( c.x * sy_cz ) ) * scale.z ) ) + translate.x;
       result.y = ( point.x * ( ( c.y * s.z ) * scale.y ) ) + ( point.y * ( ( ( c.x * c.z ) + ( s.x * sy_sz ) )    * scale.y ) ) + ( point.z * ( ( ( -s.x * c.z ) + ( c.x * sy_sz ) ) * scale.z ) ) + translate.y;
       result.z = ( point.x * ( ( -s.y )      * scale.x ) ) + ( point.y * ( (   s.x * c.y )                        * scale.y ) ) + ( point.z * ( (    c.x * c.y )                     * scale.z ) ) + translate.z;
    

    effectively is a rotation matrix multiplication followed by a translation, written in an overly complicated and error-prone way. You're also wasting precious GPU resources.
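    For illustration, here is the same idea written as an explicit Euler rotation matrix plus per-component scale and translation; a minimal Python sketch assuming a Z·Y·X rotation order and column vectors (the exact signs depend on the convention chosen, so this is not claimed to reproduce the shader's expressions term for term):

```python
import math

# Build the combined Z*Y*X Euler rotation matrix once, instead of expanding
# every entry inline.
def rot_zyx(ax, ay, az):
    sx, cx = math.sin(ax), math.cos(ax)
    sy, cy = math.sin(ay), math.cos(ay)
    sz, cz = math.sin(az), math.cos(az)
    return [
        [cy * cz, sx * sy * cz - cx * sz, cx * sy * cz + sx * sz],
        [cy * sz, sx * sy * sz + cx * cz, cx * sy * sz - sx * cz],
        [-sy,     sx * cy,                cx * cy],
    ]

def transform(point, rot, scale, translate):
    # result = R * (scale applied per component) + translate
    return [
        sum(rot[i][j] * scale[j] * point[j] for j in range(3)) + translate[i]
        for i in range(3)
    ]
```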

    Your projection function seems to implement some kind of spherical projection model. This is problematic, because spherical coordinates are curvilinear. As long as your primitives are small compared to the curvature, things will work out. But as soon as a primitive gets larger, all hell breaks loose, because primitive edges will be drawn as straight lines on the screen, while they would have to be curves if you transform by a curvilinear coordinate system. You'd need at least tessellation shaders and iterative vertex adjustment to make this work.
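    A small numeric illustration of the curvilinear problem, in Python, using the "screen = angle" mapping from the question: project both endpoints of an edge and its midpoint, then compare where linear interpolation on screen puts that midpoint.

```python
import math

def angular_x(x, z):
    # the "screen = angle" mapping: screen position proportional to the angle
    return math.atan(x / z)

a = angular_x(0.0, 1.0)          # one endpoint of an edge at constant depth
b = angular_x(2.0, 1.0)          # the other endpoint
mid_screen = (a + b) / 2.0       # where the rasterizer interpolates the midpoint
mid_true = angular_x(1.0, 1.0)   # where the mapping actually sends the midpoint
gap = mid_true - mid_screen      # nonzero: the edge would have to be a curve
```

    The rasterizer always draws the straight line from a to b, but under the angle mapping the true midpoint lands noticeably elsewhere, which is exactly the "box becomes a rectangle" distortion at the screen edges.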