c++, glsl, shader, vulkan, light

How to fix incorrect Blinn-Phong lighting


I am trying to implement Blinn-Phong shading for a single light source in a Vulkan shader, but I am getting a result that is not what I expect.

The output is shown below (screenshot: incorrect lighting).

The light position should be behind and to the right of the camera, which is correctly represented on the torus but not on the circle. I do not expect a point of high intensity in the middle of the circle.

The light position is at coordinates (10, 10, 10).

The point of high intensity in the middle of the circle is at (0, 0, 0).

Vertex shader:

#version 450
#extension GL_ARB_separate_shader_objects : enable

layout(binding = 0) uniform MVP {
    mat4 model;
    mat4 view;
    mat4 proj;
} mvp;

layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inColor;
layout(location = 2) in vec2 inTexCoord;
layout(location = 3) in vec3 inNormal;

layout(location = 0) out vec3 fragColor;
layout(location = 1) out vec2 fragTexCoord;
layout(location = 2) out vec3 Normal;
layout(location = 3) out vec3 FragPos;
layout(location = 4) out vec3 viewPos;

void main() {
    gl_Position = mvp.proj * mvp.view * mvp.model * vec4(inPosition, 1.0);
    fragColor = inColor;
    fragTexCoord = inTexCoord;
    Normal = inNormal;
    FragPos = inPosition;
    viewPos = vec3(mvp.view[3][0], mvp.view[3][1], mvp.view[3][2]);
}

Fragment shader:

#version 450
#extension GL_ARB_separate_shader_objects : enable

layout(binding = 1) uniform sampler2D texSampler;
layout(binding = 2) uniform LightUBO{
    vec3 position;
    vec3 color;
} Light;

layout(location = 0) in vec3 fragColor;
layout(location = 1) in vec2 fragTexCoord;
layout(location = 2) in vec3 Normal;
layout(location = 3) in vec3 FragPos;
layout(location = 4) in vec3 viewPos;

layout(location = 0) out vec4 outColor;

void main() {
    vec3 color = texture(texSampler, fragTexCoord).rgb;
    // ambient
    vec3 ambient = 0.2 * color;

    // diffuse
    vec3 lightDir = normalize(Light.position - FragPos);
    vec3 normal = normalize(Normal);
    float diff = max(dot(lightDir, normal), 0.0);
    vec3 diffuse = diff * color;

    // specular
    vec3 viewDir = normalize(viewPos - FragPos);
    vec3 reflectDir = reflect(-lightDir, normal); // unused: Blinn-Phong uses the halfway vector below
    float spec = 0.0;
    vec3 halfwayDir = normalize(lightDir + viewDir);  
    spec = pow(max(dot(normal, halfwayDir), 0.0), 32.0);

    vec3 specular = vec3(0.25) * spec;
    outColor = vec4(ambient + diffuse + specular, 1.0);
}

Note: I am trying to adapt the shaders from this tutorial to Vulkan.


Solution

  • This would seem to simply be a question of using the right coordinate system. Since some vital information is missing from your question, I will have to make a few assumptions. First of all, based on the fact that you have a model matrix and apparently have multiple objects in your scene, I will assume that your world space and object space are not the same in general. Furthermore, I will assume that your model matrix transforms from object space to world space, your view matrix transforms from world space to view space and your proj matrix transforms from view space to clip space. I will also assume that your inPosition and inNormal attributes are in object space coordinates.

    Based on all of this, your viewPos is just taking the last column of the view matrix, which will not contain the camera position in world space. Neither will the last row. The view matrix transforms from world space to view space. Its last column corresponds to the vector pointing to the world space origin as seen from the perspective of the camera. Your FragPos and Normal will be in object space. And, based on what you said in your question, your light positions are in world space. So in the end, you're just mashing together coordinates that are all relative to completely different coordinate systems. For example:

    vec3 lightDir = normalize(Light.position - FragPos);
    

    Here, you're subtracting an object-space position from a world-space position, which will yield a completely meaningless result. This meaningless result is then normalized and dotted with an object-space direction:

    float diff = max(dot(lightDir, normal), 0.0);
    

    Also, even if viewPos were the world-space camera position, this

    vec3 viewDir = normalize(viewPos - FragPos);
    

    would still be meaningless since FragPos is given in object-space coordinates.

    Operations on coordinate vectors only make sense if all the vectors involved are relative to the same coordinate system. It doesn't really matter so much which coordinate system you choose, but you have to pick one. Make sure all your vectors are actually relative to that coordinate system, e.g., world space. If some vectors do not already happen to be in that coordinate system, you will have to transform them into it. Only once all your vectors are in the same coordinate system will your shading computations be meaningful…
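    For illustration, here is a minimal sketch of a vertex shader main() that moves everything into world space. It assumes, as above, that your model matrix maps object space to world space; computing inverse() per vertex is wasteful, so in practice you would upload a precomputed normal matrix instead:

    void main() {
        // Transform the position into world space once and reuse it.
        vec4 worldPos = mvp.model * vec4(inPosition, 1.0);
        gl_Position = mvp.proj * mvp.view * worldPos;
        FragPos = worldPos.xyz;  // world-space position for the fragment shader

        // Normals must be transformed with the inverse transpose of the
        // model matrix (prefer computing this once on the CPU).
        Normal = mat3(transpose(inverse(mvp.model))) * inNormal;

        fragColor = inColor;
        fragTexCoord = inTexCoord;
        // viewPos is addressed below.
    }

    With FragPos and Normal in world space, the lightDir and diff computations in the fragment shader become meaningful, since the light position is already given in world space.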

    To get the viewPos, you could take the last column of the inverse view matrix (if you happened to already have that around for some reason), or simply pass the camera position as an additional uniform. Also, rather than multiplying the model, view, and projection matrices again and again, once for every single vertex, consider just passing a combined model-view-projection matrix to the shader…
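    As a sketch of both suggestions (this revised UBO layout and the member names mvp and cameraPos are hypothetical, not part of your current MVP block):

    // Combine proj * view * model once per frame on the CPU, and pass the
    // world-space camera position explicitly (mind std140 alignment rules
    // for the vec3 member).
    layout(binding = 0) uniform UBO {
        mat4 model;      // still needed to bring positions and normals into world space
        mat4 mvp;        // proj * view * model, combined on the CPU
        vec3 cameraPos;  // world-space camera position
    } ubo;

    void main() {
        gl_Position = ubo.mvp * vec4(inPosition, 1.0);
        viewPos = ubo.cameraPos;
        // ... remaining outputs as before ...
    }

    // Alternatively, recover the camera position from the view matrix itself;
    // inverse() per vertex is expensive, so prefer the uniform:
    // vec3 camPos = vec3(inverse(mvp.view)[3]);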

    Apart from that, note that you will most likely only want a specular contribution if the surface is actually oriented towards the light.
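    A common way to enforce this, as a minimal sketch inside the existing fragment shader:

    float spec = 0.0;
    // Only evaluate the highlight when the surface faces the light;
    // otherwise the halfway vector can produce a spurious highlight on
    // geometry that points away from it.
    if (dot(normal, lightDir) > 0.0) {
        vec3 halfwayDir = normalize(lightDir + viewDir);
        spec = pow(max(dot(normal, halfwayDir), 0.0), 32.0);
    }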