
Ray tracing a sphere inside a cube


I am trying to ray trace a sphere inside a cube. The cube is simply constructed out of 12 triangles with normals.

The cube has unit coordinates and unit normals. So within its local space (between -1 and 1), there should be a sphere of radius 0.5.

So I thought I should calculate the ray in the vertex shader: the ray origin is the vertex position (interpolated across the face) and the ray direction is the vertex normal (or its opposite, but I don't think that should matter). Interpolation should do the rest.

In the fragment shader, I then calculate the ray-sphere intersection points and, if there are any, change the color of the fragment.

On the front and back sides of the cube the result looks correct, but on the left, right, top and bottom sides the sphere appears to be seen from the wrong angle. I should always see the sphere in the middle, and that is not the case on those sides.

Can someone tell me what I am doing wrong?

Here is the shader code:

Vertex shader:

#version 400

layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aNor;

uniform mat4 uProj;
uniform mat4 uView;
uniform mat4 uModel;

out vec3 vRayPos;
out vec3 vRayDir;

void main(void)
{
  gl_Position = uProj * uView * uModel * vec4(aPos, 1);
  vRayPos = aPos;
  vRayDir = inverse(mat3(uModel)) * aNor;
}

Fragment shader:

#version 400

in vec3 vRayPos;
in vec3 vRayDir;

out vec4 oFrag;

void main(void)
{
  const vec3 sphereCenter = vec3(0, 0, 0);
  const float sphereRadius = 0.5;

  vec3 rayPos = vRayPos;
  vec3 rayDir = normalize(vRayDir);
  float a = dot(rayDir, rayDir); // TODO: rayDir is a unit vector, so: a = 1.0?
  float b = 2 * dot(rayDir, (rayPos - sphereCenter));
  float c = dot(rayPos - sphereCenter, rayPos - sphereCenter) - sphereRadius * sphereRadius;
  float d = b * b - 4 * a * c;
  float t = min(-b + sqrt(max(0, d)) / 2, -b - sqrt(max(0, d)) / 2);

  vec3 color = (1.0 - step(0, d)) * vec3(0.554, 0.638, 0.447) + step(0, d) * abs(t) * vec3(0.800, 0.113, 0.053);

  oFrag = vec4(color, 1);
}

Notes: The factor t is not strictly necessary, but it gives an idea of how far from the cube face the ray touches the sphere, which gives the sphere a shaded look. The step(0, d) term checks whether there are any intersection points, and max(0, d) keeps sqrt from being fed a negative argument (which would give an undefined result); both are used to avoid branching.

Reference: I got the calculations from https://en.wikipedia.org/wiki/Line%E2%80%93sphere_intersection
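For reference, the quantities computed in the fragment shader correspond to the standard line-sphere quadratic from that page, restated here with ray origin \mathbf{o}, unit direction \mathbf{u}, sphere center \mathbf{s} and radius r:

\[
\begin{aligned}
a &= \mathbf{u}\cdot\mathbf{u} = 1,\\
b &= 2\,\mathbf{u}\cdot(\mathbf{o}-\mathbf{s}),\\
c &= (\mathbf{o}-\mathbf{s})\cdot(\mathbf{o}-\mathbf{s}) - r^2,\\
d &= b^2 - 4ac, \qquad t = \frac{-b \pm \sqrt{d}}{2a}.
\end{aligned}
\]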

Edit: here is a video of the problem: Video


Solution

  • Your rays should be calculated by taking the direction from the camera position to a given fragment. (In view space, the camera position is the origin.) The vertex normals have absolutely nothing to do with it.

    You can technically calculate the rays in the vertex shader and pass them to the fragment shader as interpolants. However, this can give incorrect results, because the values are interpolated linearly across the triangle, which is not correct for ray directions.

    A better approach is to output the view-space position of the vertex from the vertex shader. In the fragment shader, calculate a ray from the origin to the fragment's view-space position, then perform your ray intersection tests using that ray. The rasterizer will correctly interpolate the view-space position. You could also compute it yourself in the fragment shader, but the hardware is good at this, so it makes sense to let it do that for you.

    Having said all of that, the major issue with your current implementation is using the vertex normals to calculate rays. That's wrong. All you need is the camera position and the fragment position. If you look carefully at your video, you'll see that the same thing is being drawn on all sides, regardless of position relative to the camera.

    For a simple sphere, all you need is the camera-to-fragment ray. Calculate the distance from the line containing that ray to the center of the sphere; if it is less than the sphere's radius, it is a hit. A minimal sketch of this approach is shown below.
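
To make that concrete, here is a minimal sketch of the approach described in the answer, reusing the attribute and uniform names from the question. The view-space sphere center (uSphereCenterView) is a hypothetical uniform that you would compute and upload yourself, e.g. as (uView * uModel * vec4(0, 0, 0, 1)).xyz, and the sketch assumes uModel applies no scaling, so the 0.5 radius is still valid in view space.

Vertex shader (sketch):

#version 400

layout(location = 0) in vec3 aPos;

uniform mat4 uProj;
uniform mat4 uView;
uniform mat4 uModel;

out vec3 vViewPos;

void main(void)
{
  // View-space position of the vertex; the rasterizer interpolates it per fragment.
  vec4 viewPos = uView * uModel * vec4(aPos, 1);
  vViewPos = viewPos.xyz;
  gl_Position = uProj * viewPos;
}

Fragment shader (sketch):

#version 400

in vec3 vViewPos;

out vec4 oFrag;

// Hypothetical uniform: the sphere center transformed into view space on the CPU.
uniform vec3 uSphereCenterView;

void main(void)
{
  // Assumes no scaling in uModel, so the model-space radius is unchanged in view space.
  const float sphereRadius = 0.5;

  // In view space the camera sits at the origin, so the ray through this
  // fragment simply points from the origin towards the interpolated position.
  vec3 rayDir = normalize(vViewPos);

  // Distance from the sphere center to the line through the origin along rayDir:
  // project the center onto the ray and measure what is left over.
  float along  = dot(uSphereCenterView, rayDir);
  float distSq = dot(uSphereCenterView, uSphereCenterView) - along * along;

  // Hit if that line passes within the radius and the sphere is in front of the camera.
  bool hit = (distSq < sphereRadius * sphereRadius) && (along > 0.0);

  vec3 color = hit ? vec3(0.800, 0.113, 0.053) : vec3(0.554, 0.638, 0.447);
  oFrag = vec4(color, 1);
}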