
How is the ray_color(...) ray tracing function from the "Ray Tracing in One Weekend: Part 1" book correct?


The ray tracing function (named ray_color in the book) from Ray Tracing in One Weekend looks like this:

color ray_color(const ray& r, int depth, const hittable& world) const {
    // If we've exceeded the ray bounce limit, no more light is gathered.
    if (depth <= 0)
        return color(0,0,0);

    hit_record rec;

    if (world.hit(r, interval(0.001, infinity), rec)) {
        ray scattered;
        color attenuation;
        if (rec.mat->scatter(r, rec, attenuation, scattered))
            return attenuation * ray_color(scattered, depth-1, world);
        return color(0,0,0);
    }

    vec3 unit_direction = unit_vector(r.direction());
    auto a = 0.5*(unit_direction.y() + 1.0);
    return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
}

which I suspect does not make sense for one specific reason.

Let depth=1 when we start shooting our initial ray through a pixel. Say we hit an object and thus arrive at if (rec.mat->scatter(...)). Say scatter(...) returns true, so we return attenuation * ray_color(scattered, depth - 1, world). Notice that the recursive call will immediately return color(0,0,0) due to the guard if (depth <= 0). This makes the product attenuation * ray_color(...) equal to 0.
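To make the arithmetic concrete, here is that depth=1 call chain unrolled (the attenuation value is a made-up example):

// ray_color(r, 1, world): the ray hits a surface whose material scatters,
//   so it returns attenuation * ray_color(scattered, 0, world)
// ray_color(scattered, 0, world): trips the guard if (depth <= 0),
//   so it returns color(0,0,0)
// Final result, with e.g. attenuation = color(0.8, 0.3, 0.3):
//   color(0.8, 0.3, 0.3) * color(0,0,0) == color(0,0,0)   // always black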

Isn't this incorrect, since we did hit something once and shaded that point, yet we return black? I tested this and, sure enough, all the objects become black. If I set depth = 2 I get this weird result. Notice how we have seemingly gotten one bounce (apparent from the fact that we can see specular reflections)... but the reflections from the bounce are all colored black.

I really can't wrap my head around how this is correct. I've Googled a bunch of people's implementations of the book and they all seem to do the exact same thing. How does this make sense?

EDIT: If I add a guard if (depth == 1) return attenuation; else return attenuation * ray_color(...); I do get what you would expect with depth=1, and furthermore the reflections seem to be colored more correctly with depth=2 and depth=25. Although, the result still looks odd to me. Notice how I still end up with completely black reflections; this is likely due to the multiplication-by-zero problem mentioned initially. Perhaps something else is also wrong, likely in my own code. I might have to make a new post about that, as it's likely a different problem than what this question pertains to.
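For clarity, that guard in the context of the book's ray_color would read (my paraphrase, not code from the book):

if (rec.mat->scatter(r, rec, attenuation, scattered)) {
    if (depth == 1)
        return attenuation;   // last allowed bounce: keep the shaded color instead of multiplying by black
    return attenuation * ray_color(scattered, depth-1, world);
}
return color(0,0,0);

My own implementation, which does the same thing, is below.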

Color RayTracer::TraceRay(const Ray& ray, int depth, float weight)
{
    if (depth <= 0)
        return Utils::Colors::BLACK;

    HitPayload hp = FindClosestHit(ray);

    if (hp.t < 0.0f) {
        // We didn't hit anything, just return a sky color of some kind.
        const float alpha = 0.5f * (ray.Dir().y + 1.0f);
        const Color skyColor = (1.0f - alpha) * Utils::Colors::WHITE + alpha * Utils::Colors::SKY_COLOR;
        return skyColor * weight;
    }

    const Material* mat = m_Scene->GetMaterial(hp.Shape->GetMaterialIdx());

    Color colorOut{};
    Ray rayOut{};
    // Offset the origin along the normal to avoid self-intersection ("shadow acne").
    rayOut.SetOrigin(hp.Position + Utils::Constants::EPSILON * hp.Normal);

    // TODO: Cast shadow ray(s)
    if (mat->Shade(ray, rayOut, colorOut, hp)) {
        if (depth == 1)
            return colorOut;
        else
            return (weight * colorOut) * TraceRay(rayOut, depth - 1, 0.75f * weight);
    }

    // Material is a black body
    return Utils::Colors::BLACK;
}

Solution

  • How does this make sense?

    You may already know this, but the first thing to keep in mind about ray tracing is that it's done backwards. We emit rays from the eye (or the camera), but in reality the rays originate at the background, reflect off the objects and end up in the eye.

The spheres in your example do not emit light; they merely reflect/scatter background light, attenuated by their albedo.

If the light reflects too many times, it effectively becomes "black": each bounce multiplies the carried color by an albedo below 1, so the contribution shrinks toward zero. We "speed up" this process (and our calculations) by setting max_depth to a certain value.
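To put a number on it, here is that decay computed for a uniform albedo of 0.5 (a made-up figure):

// Background light surviving n bounces is scaled by the product of
// the albedos along the path; with albedo 0.5 everywhere that is 0.5^n.
double contribution = 1.0;
for (int bounce = 0; bounce < 10; ++bounce)
    contribution *= 0.5;
// contribution == 0.5^10 ≈ 0.00098: visually black, which is why
// truncating the path at max_depth barely changes the image.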

If the depth is too low, then yes, you will end up with black spots. But when the depth is high enough the light will eventually "find its way" to the background. Which in reality means the background light, through several reflections (multiplied by the albedo of the reflecting objects), will find its way to the eye/camera.

    So, you won't have black spots. This is a very non-technical (and metaphorical) explanation, but hopefully it makes sense.
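One way to make that precise is to unroll the book's recursion into a loop (a sketch using the book's types, not code from the book): the color returned is always the background color multiplied by the attenuation of every surface along the path, or pure black if the path is absorbed or cut off by the depth limit.

color ray_color_iterative(ray r, int depth, const hittable& world) {
    color throughput(1.0, 1.0, 1.0);          // product of attenuations so far
    for (int i = 0; i < depth; ++i) {
        hit_record rec;
        if (!world.hit(r, interval(0.001, infinity), rec)) {
            // The ray escaped: background light, dimmed by every surface
            // it bounced off, reaches the camera.
            vec3 unit_direction = unit_vector(r.direction());
            auto a = 0.5*(unit_direction.y() + 1.0);
            return throughput * ((1.0-a)*color(1.0,1.0,1.0) + a*color(0.5,0.7,1.0));
        }
        ray scattered;
        color attenuation;
        if (!rec.mat->scatter(r, rec, attenuation, scattered))
            return color(0,0,0);              // absorbed by the material
        throughput = throughput * attenuation;
        r = scattered;
    }
    return color(0,0,0);                      // cut off by the depth limit: a black spot
}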

    ... Although, the result still looks odd to me.

That's because your model of the physical universe is inaccurate. In reality, light doesn't have a max_depth feature. It will reflect as many times as it can, losing some of its intensity each time.