Tags: java, opengl, lwjgl, normals, wavefront

Calculating normals for model lighting results in the model no longer rendering


I'm working on a simple render engine in Java which can render OBJ files to the screen. I'm currently working on the lighting system used to light up the models present on the screen. Before introducing the lighting system I was able to load models to the screen easily:

[screenshot: the model rendering correctly before lighting was added]

However, when I add the light to the scene the model no longer shows up. I'm using the following shaders to render with the light:

VertexShader:

#version 150

in vec3 position;
in vec2 textureCoordinates;
in vec3 normals;

out vec2 passTextureCoordinates;
out vec3 surfaceNormal;
out vec3 toLightVector;

uniform mat4 transformationMatrixTextured;
uniform mat4 projectionMatrixTextured;
uniform mat4 viewMatrixTextured;
uniform vec3 lightLocation;

void main(void){
    vec4 worldPosition = transformationMatrixTextured * vec4(position,1.0);
    gl_Position = projectionMatrixTextured * viewMatrixTextured * worldPosition;
    passTextureCoordinates = textureCoordinates;

    surfaceNormal =  (transformationMatrixTextured * vec4(normals,0.0)).xyz;
    toLightVector =  lightLocation - worldPosition.xyz;
}

FragmentShader:

#version 150

in vec2 passTextureCoordinates;
in vec3 surfaceNormal;
in vec3 toLightVector;

out vec4 out_Color;

uniform sampler2D textureSampler;
uniform vec3 lightColor;

void main(void){

    vec3 unitNormal = normalize(surfaceNormal);
    vec3 unitLightVector = normalize(toLightVector);

    float nDot1 = dot(unitNormal, unitLightVector);
    float brightness = max(nDot1, 0.0);
    vec3 diffuse = brightness * lightColor;

    out_Color = vec4(diffuse, 1.0) * texture(textureSampler,passTextureCoordinates);

}

I've been using the tutorial series by ThinMatrix to help me create this program. However, one big difference is that I also want to be able to load programmatically created models, as opposed to only models loaded by the OBJLoader. Because of this I had to create a way to calculate normals given a vertex array and an index array.

My approach to this problem was this:

/**
 * Component-wise sum of two vectors.
 *
 * @param arg1 the first vector
 * @param arg2 the second vector
 * @return a new vector equal to arg1 + arg2
 */
public static Vector3f sum(Vector3f arg1, Vector3f arg2) {
    return new Vector3f(arg1.x + arg2.x, arg1.y + arg2.y, arg1.z + arg2.z);
}

/**
 * Component-wise difference of two vectors.
 *
 * @param arg1 the first vector
 * @param arg2 the second vector
 * @return a new vector equal to arg1 - arg2
 */
public static Vector3f subtract(Vector3f arg1, Vector3f arg2) {
    return new Vector3f(arg1.x - arg2.x, arg1.y - arg2.y, arg1.z - arg2.z);
}

/**
 * Cross product of two vectors.
 *
 * @param arg1 the first vector
 * @param arg2 the second vector
 * @return a new vector equal to arg1 x arg2, perpendicular to both
 */
public static Vector3f crossProduct(Vector3f arg1, Vector3f arg2) {
    return new Vector3f(arg1.y * arg2.z - arg2.y * arg1.z, arg2.x * arg1.z - arg1.x * arg2.z, arg1.x * arg2.y - arg2.x * arg1.y);
}

/**
 * Computes one face normal per triangle via the cross product of two edges.
 *
 * @param vertices the vertex positions as x,y,z triples
 * @param indexes the triangle index array
 * @return the face normals as x,y,z triples, one per triangle
 */
public static float[] getNormals(float[] vertices, int[] indexes) {
    vertices = convertToIndexless(vertices, indexes);
    Vector3f tmp;
    float[] tmpArray = new float[vertices.length / 3];
    int tmpArrayCounter = 0;
    for(int i = 0; i < vertices.length; i+=9) {
        Vector3f edge1 = subtract(new Vector3f(vertices[i], vertices[i + 1], vertices[i + 2]) , new Vector3f(vertices[i + 3], vertices[i + 4], vertices[i + 5]));
        Vector3f edge2 = subtract(new Vector3f(vertices[i], vertices[i + 1], vertices[i + 2]) , new Vector3f(vertices[i + 6], vertices[i + 7], vertices[i + 8]));

        tmp = crossProduct(edge1, edge2);
        tmpArray[tmpArrayCounter++] = tmp.getX();
        tmpArray[tmpArrayCounter++] = tmp.getY();
        tmpArray[tmpArrayCounter++] = tmp.getZ();
    }
    return tmpArray;
}

/**
 * De-indexes the vertex array: expands it so the vertex referenced by each
 * index appears in order, three floats per vertex.
 *
 * @param vertices the vertex positions as x,y,z triples
 * @param indexes the triangle index array
 * @return the expanded, index-free vertex array
 */
private static float[] convertToIndexless(float[] vertices, int[] indexes) {
    float[] tmpArray = new float[indexes.length * 3];
    for(int i = 0; i < indexes.length; i++) {
        tmpArray[i * 3]     = vertices[indexes[i] * 3];
        tmpArray[i * 3 + 1] = vertices[indexes[i] * 3 + 1];
        tmpArray[i * 3 + 2] = vertices[indexes[i] * 3 + 2];
    }
    return tmpArray;
}
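A minimal standalone sketch of this per-face normal computation (plain float arrays instead of LWJGL's Vector3f; class and method names here are made up) can be checked against a triangle whose normal is known in advance. Note that this sketch writes a copy of the face normal for each of the triangle's three vertices, so the normal array comes out the same length as the de-indexed position array:

```java
public class FaceNormals {
    // De-index the vertices, then compute one cross-product normal per
    // triangle, duplicated for each of its three vertices so the normal
    // array lines up one-to-one with the de-indexed position array.
    static float[] faceNormals(float[] vertices, int[] indexes) {
        float[] v = new float[indexes.length * 3];
        for (int i = 0; i < indexes.length; i++) {
            v[i * 3]     = vertices[indexes[i] * 3];
            v[i * 3 + 1] = vertices[indexes[i] * 3 + 1];
            v[i * 3 + 2] = vertices[indexes[i] * 3 + 2];
        }
        float[] normals = new float[v.length];
        for (int i = 0; i < v.length; i += 9) {
            // edges from vertex 0 to vertex 1 and from vertex 0 to vertex 2
            float e1x = v[i+3] - v[i], e1y = v[i+4] - v[i+1], e1z = v[i+5] - v[i+2];
            float e2x = v[i+6] - v[i], e2y = v[i+7] - v[i+1], e2z = v[i+8] - v[i+2];
            // cross product e1 x e2 gives the face normal (winding-dependent)
            float nx = e1y * e2z - e1z * e2y;
            float ny = e1z * e2x - e1x * e2z;
            float nz = e1x * e2y - e1y * e2x;
            // write the same face normal for all three vertices
            for (int k = 0; k < 3; k++) {
                normals[i + k*3]     = nx;
                normals[i + k*3 + 1] = ny;
                normals[i + k*3 + 2] = nz;
            }
        }
        return normals;
    }

    public static void main(String[] args) {
        // one counter-clockwise triangle in the XY plane -> normal along +Z
        float[] verts = {0,0,0, 1,0,0, 0,1,0};
        int[] idx = {0, 1, 2};
        float[] n = faceNormals(verts, idx);
        System.out.println(n.length + " " + n[0] + " " + n[1] + " " + n[2]);
    }
}
```

For one triangle this prints nine floats (three normals of three components), each equal to (0, 0, 1).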

I've based this approach on this question about calculating normals. As I said before, I'm not able to render the model once a light is added to the program. Am I doing something wrong in my calculations?


Solution

  • Long description to find out the root problem

    Honestly, I do not know exactly what "the model no longer shows up" means, so I am going to post a few hints on how you can find out what is going on.

    • Is the rendered model all black?

      • There are several possible sources of the problem which should be checked:

        • Turn on a background color such as blue to see the result.
          • Is the model invisible, so everything is blue? Then the normals of the model are wrong. Disable face culling so both sides of each triangle are rendered; if the model shows up with back-face rendering, the normals are the issue.
          • Is the model visible, but rendered black against the blue background?
            • Either the direction of the light is wrong (very likely),
            • or there is some bug in the shader code (I could not spot a problem there at first sight, but who knows?). I used the dot product of the model normal with the light direction for this; in your normal calculation there is also a cross product, which is worth double-checking.
            • I suggest changing the shader so there is always some minimum amount of light, so-called ambient light. Then each object gets some minimum lighting, even at a bad angle with respect to the light source: intensity = max(dot(n, l), ambience); in the vertex shader, with ambience as a parameter, n the normalized normal of the object, and l the normalized light direction. In my fragment shader I used gl_FragColor = vec4(intensity*tc.r, intensity*tc.g, intensity*tc.b, tc.a); with tc being the vec4 texture color. This way the object always has some light.
        • The texture is not used / accepted / assigned correctly to the model, or the sampler is returning only a single pixel which is all black.
      • Is there an error in the shader code, i.e. a compilation error which is logged as an exception?

        • Fix it ;)
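    Applied to the fragment shader from the question, the suggested ambient floor could look like this (a sketch only; the 0.3 floor is an arbitrary value, and you may want to expose it as a uniform instead of hard-coding it):

```glsl
#version 150

in vec2 passTextureCoordinates;
in vec3 surfaceNormal;
in vec3 toLightVector;

out vec4 out_Color;

uniform sampler2D textureSampler;
uniform vec3 lightColor;

void main(void){
    vec3 unitNormal = normalize(surfaceNormal);
    vec3 unitLightVector = normalize(toLightVector);

    // clamp against an ambient floor instead of 0.0 so the model
    // never goes fully black, even when facing away from the light
    float brightness = max(dot(unitNormal, unitLightVector), 0.3);
    vec3 diffuse = brightness * lightColor;

    out_Color = vec4(diffuse, 1.0) * texture(textureSampler, passTextureCoordinates);
}
```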

    My guess is that the problem is a combination of the cross product usage and a wrong light direction (I had the same problem in my model at the beginning).

    Edit One more comment on the dot product: the dot product is what you need for finding the intensity. Its geometric definition is dot(a,b) = length(a) * length(b) * cos(alpha), where alpha is the angle between a and b.

    • If the model normal points in the same direction as the light direction, you want full intensity.

    • If the model normal is orthogonal (90 degrees) to the light direction, you want zero intensity (or the ambient intensity).

    • If the model normal is at 60 degrees to the light direction, you want half intensity (cos(60°) = 0.5), or the ambient intensity if that is larger.

    etc.

    Edit 2 Because the dot product can have negative results, max(dot(a,b), 0) clips those off (for normals facing away from the light). For a quick ambient floor you can change this to max(dot(a,b), 0.3).
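    The clamping behavior can be sketched numerically (a plain Java sketch; the class and method names are made up):

```java
public class Intensity {
    // Diffuse intensity from the dot product of unit normal n and unit light
    // direction l: dot(n, l) = cos(alpha). Clamping against an ambient floor
    // (here an arbitrary 0.3) keeps back-facing surfaces from going black.
    static float intensity(float[] n, float[] l, float ambience) {
        float d = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];
        return Math.max(d, ambience);
    }

    public static void main(String[] args) {
        float[] up = {0f, 1f, 0f};
        System.out.println(intensity(up, new float[]{0f, 1f, 0f}, 0.3f));       // same direction: cos(0) = 1
        System.out.println(intensity(up, new float[]{0.8660254f, 0.5f, 0f}, 0.3f)); // 60 degrees: cos(60) = 0.5
        System.out.println(intensity(up, new float[]{1f, 0f, 0f}, 0.3f));       // orthogonal: clamped to ambient
        System.out.println(intensity(up, new float[]{0f, -1f, 0f}, 0.3f));      // opposite: clamped to ambient
    }
}
```

    This prints 1.0, 0.5, 0.3 and 0.3, matching the angle cases listed above.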

    Summary:

    • Use the dot product for calculating the intensity.

    • Use ambient light to keep some minimum light, even if the angle with respect to the light source is bad.