Tags: c++, opengl, glsl, raycasting, glm-math

GLSL highlight mesh on mouse over


I am working on a raycaster for my OpenGL project, and I want to be able to highlight the part of the mesh that I am hovering over.

I get a direction vector from my mouse position using this:

glm::vec3 Engine::Physics::RayCast::ViewToWorldSpace(glm::vec2 screenPos,
    float depth, glm::mat4 projection, glm::mat4 view, graphics::Window* window)
{
    // Screen to normalised device coordinates
    float x = (2.0f * screenPos.x) / window->getWidth() - 1.0f;
    float y = 1.0f - (2.0f * screenPos.y) / window->getHeight();
    float z = 1.0f;
    glm::vec3 ray_nds = glm::vec3(x, y, z);

    // Normalised device coordinates to 4D homogeneous clip coordinates
    glm::vec4 ray_clip = glm::vec4(ray_nds.x, ray_nds.y, -1.0, 1.0);

    // 4D homogeneous clip coordinates to eye (camera) coordinates
    glm::vec4 ray_eye = glm::inverse(projection) * ray_clip;
    ray_eye = glm::vec4(ray_eye.x, ray_eye.y, -1.0, 0.0);

    // Eye (camera) coordinates to world coordinates
    // (explicit vec3() cast: glm has no implicit vec4 -> vec3 conversion)
    glm::vec3 ray_wor = glm::vec3(glm::inverse(view) * ray_eye);
    ray_wor = glm::normalize(ray_wor);
    return ray_wor;
}

I then want to check whether I am pointing at part of a mesh, so I pass the camera position and this direction vector to these shaders.

Vertex

#version 410

layout (location = 0) in vec3 vertex_position;
layout (location = 2) in vec2 VertexUV;

uniform mat4 P;
uniform mat4 V = mat4(1.0);
uniform mat4 M = mat4(1.0);
out vec2 uv;
out vec4 position;

uniform vec3 cam_pos;
out vec3 cameraPos;
uniform vec3 ray_dir;
out vec3 rayDir;

void main () {
    rayDir = vec3(M * vec4(ray_dir, 1.0));
    cameraPos = vec3(V * M * vec4(cam_pos, 1.0));
    gl_Position = P * V * M * vec4(vertex_position, 1.0);
    position = V * M * vec4(vertex_position, 1.0);
    uv = VertexUV;
}

Frag

#version 410

out vec4 fragment_colour; // final colour of surface

in vec4 position;
in vec2 uv;

uniform vec3 light_pos;
uniform vec3 light_ambient;
uniform sampler2D texture2D;

in vec3 cameraPos;
in vec3 rayDir;

void main () {
    vec4 test = vec4(0, 0, 0, 0);

    vec3 vDir = normalize(position.xyz - cameraPos);
    float cosAngle = dot(vDir, rayDir);
    float angle = degrees(acos(cosAngle));

    if (angle < 5)
    {
        test = vec4(1, 0, 0, 1);
    }

    float intensity = (1.0 / length(position.xyz - light_pos)) + 0.25;
    intensity = clamp(intensity, 0.0, 1.0);
    vec4 ambient = vec4(light_ambient, 1);
    fragment_colour = (vec4(texture(texture2D, uv).rgb, 1.0) * intensity) * ambient + test;
}

Currently I can see the highlighted section at certain camera rotations (the camera orbits around the model), but it only really follows the mouse when the camera is facing directly down the -Z axis. Any idea what I am doing wrong?

Here is a gif of it working with the camera lined up, but remember it breaks if the camera moves.

Working at correct camera angle


Solution

  • When rendering, each mesh in the scene is usually transformed by the model matrix, the view matrix and the projection matrix.

    • Projection matrix:
      The projection matrix describes the mapping from the 3D points of a scene to the 2D points of the viewport.

    • View matrix:
      The view matrix describes the direction and position from which the scene is looked at. The view matrix transforms from world space to view (eye) space.

    • Model matrix:
      The model matrix defines the location, orientation and relative size of a mesh in the scene. The model matrix transforms the vertex positions of the mesh to world space.


    You decided to do these operations in the fragment shader, in view-space coordinates.

    In the fragment shader the vertex position (position) is in view space, because you transformed it in the vertex shader. The vertices of the model are transformed to world space by the matrix M and transformed from world space to the view space by the matrix V:

    position = V * M * vec4(vertex_position, 1.0);
    


    But the position of the camera (cam_pos) is in world-space coordinates. This means you have to transform it by the view matrix V only:

    cameraPos = vec3( V * vec4(cam_pos, 1.0) ); 
    

    Note that the result of this operation is always vec3(0.0, 0.0, 0.0), because the view-space position of the camera is the origin of view space. The eye position defines the origin of the view-space coordinate system:

    cameraPos = vec3( 0.0, 0.0, 0.0 ); 
    


    Further, rayDir is a direction vector and not a point. This means you must not transform it by the full 4*4 matrix, because you don't want to apply the translation part of the matrix to it. In general, you have to use the transposed inverse of the upper-left 3*3 of the 4*4 matrix when you transform a direction vector. But since the view matrix is orthogonal, it is sufficient to use the upper-left 3*3 itself.
    rayDir is a direction in world space, so you have to transform it by the view matrix V only:

    rayDir = mat3(V) * ray_dir;
    

    Note, if you omit the transformation of the ray from view space to world space in the function Engine::Physics::RayCast::ViewToWorldSpace:

    // glm::vec3 ray_wor = glm::vec3(glm::inverse(view) * ray_eye); // skip this
    ray_eye = glm::normalize(ray_eye);
    return glm::vec3(ray_eye); // ray_eye.w is 0.0, so xyz is the normalized direction
    

    then you can omit the backwards transformation in the vertex shader too:

    rayDir = ray_dir;