
3D Shadow implementation idea


Let's assume your eye is at a surface point P1 on an object A, there is a target object B, and there is a point-light source behind object B.

Question: am I right if I look toward the light source and say "I am in shadow" whenever I cannot see the light because object B blocks it? I would then flag that point of object A as "one of the shadow points of B on A".


If this is true, can we build a "shadow geometry" (black-colored) object on the surface of A and then update it constantly as the light, B, A, etc. move in real time? Let's say a sphere (A) has 1000 vertices and another sphere (B) also has 1000 vertices; does this mean 1 million comparisons (i.e. is shadowing O(N^2) in time)? I am not sure about the complexity, because changing P1 (the eye) also changes which point of B is seen (between P1 and the light source point). What about second-order shadows and higher (such as light being reflected between two objects many times)?
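
To make the comparison count concrete, the brute-force version I am imagining looks roughly like this (a hypothetical sketch using javax.vecmath; lightPos, verticesOfA and trianglesOfB are placeholders for my own data):

    import javax.vecmath.Point3d;
    import javax.vecmath.Vector3d;

    public class BruteForceShadow {

        // For every vertex of A, test the light->vertex segment against every triangle of B.
        // Cost is O(|vertices of A| * |triangles of B|) -- the "1000 x 1000" blow-up above.
        static boolean[] shadowedVertices(Point3d lightPos, Point3d[] verticesOfA, Point3d[][] trianglesOfB) {
            boolean[] shadowed = new boolean[verticesOfA.length];
            for (int i = 0; i < verticesOfA.length; i++) {
                Vector3d dir = new Vector3d();
                dir.sub(verticesOfA[i], lightPos);   // unnormalised: t in (0,1) lies between light and vertex
                for (Point3d[] tri : trianglesOfB) {
                    if (segmentHitsTriangle(lightPos, dir, tri[0], tri[1], tri[2])) {
                        shadowed[i] = true;          // some triangle of B blocks the light
                        break;
                    }
                }
            }
            return shadowed;
        }

        // Moeller-Trumbore ray/triangle intersection, restricted to the segment t in (0,1).
        static boolean segmentHitsTriangle(Point3d orig, Vector3d dir, Point3d v0, Point3d v1, Point3d v2) {
            final double EPS = 1e-9;
            Vector3d e1 = new Vector3d(); e1.sub(v1, v0);
            Vector3d e2 = new Vector3d(); e2.sub(v2, v0);
            Vector3d p  = new Vector3d(); p.cross(dir, e2);
            double det = e1.dot(p);
            if (Math.abs(det) < EPS) return false;   // segment parallel to triangle plane
            double invDet = 1.0 / det;
            Vector3d s = new Vector3d(); s.sub(orig, v0);
            double u = s.dot(p) * invDet;
            if (u < 0.0 || u > 1.0) return false;
            Vector3d q = new Vector3d(); q.cross(s, e1);
            double v = dir.dot(q) * invDet;
            if (v < 0.0 || u + v > 1.0) return false;
            double t = e2.dot(q) * invDet;
            return t > EPS && t < 1.0 - EPS;         // hit strictly between light and vertex
        }
    }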

I am using Java 3D now, but it doesn't have shadow capabilities, so I am thinking of moving to other Java-compatible libraries.

Thanks.

Edit: I need to disable the "camera" while moving the camera to build that shadow. How can I do this? Does it hurt performance badly?

New idea: Java 3D has built-in collision detection. I will create invisible lines (rays) from the light to each target polygon vertex, then check for a collision with another object. If a collision occurs, I add that vertex coordinate to the shadow list, but this would work only for point lights :( .
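
Something like the sketch below is what I have in mind, using the standard Java 3D picking API rather than the collision system (a rough, untested sketch; scene and target are placeholders for my own scene graph, and the shapes have to be made pickable first):

    import javax.media.j3d.BranchGroup;
    import javax.media.j3d.Node;
    import javax.media.j3d.PickRay;
    import javax.media.j3d.SceneGraphPath;
    import javax.vecmath.Point3d;
    import javax.vecmath.Vector3d;

    public class PickRayShadowTest {

        // Casts a pick ray from the light towards one vertex of the target object.
        // If any node other than the target itself lies on that ray, the vertex is
        // treated as shadowed. Works for a point light only, as noted above.
        static boolean isVertexShadowed(BranchGroup scene, Node target, Point3d lightPos, Point3d vertex) {
            Vector3d dir = new Vector3d();
            dir.sub(vertex, lightPos);
            dir.normalize();

            SceneGraphPath[] hits = scene.pickAllSorted(new PickRay(lightPos, dir));
            if (hits == null) return false;          // nothing on the ray at all

            for (SceneGraphPath path : hits) {
                if (path.getObject() != target) {
                    return true;                     // something else blocks the light
                }
            }
            return false;
        }
    }

If I remember correctly, the geometry also needs its intersection capability (Geometry.ALLOW_INTERSECT) enabled for the sorted pick methods to work, and a fuller test would compare hit distances (e.g. with com.sun.j3d.utils.picking.PickTool) so that shapes farther from the light than the vertex don't count as blockers.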

Anyone who can supply a real shadow library for Java 3D would be very helpful.

A very small sample of Geomlib shadowing/ray tracing in Java 3D would be the best; maybe a ray-tracing example?

I know this is a little hard, but it must have been tried by at least a hundred people.

Thanks.


Solution

  • Your approach can be summarised like this:

    foreach (point p to be shaded) {
        foreach (light) {
            if (light is visible from p)
                // p is lit by that light
            else
                // p is in shadow
        }
    }
    

    The funny thing is, that's essentially how real-time shadows are done on the GPU today.

    However, it's not trivial to make this work efficiently. Rendering the scene is a streamlined process, triangle by triangle. It would be very cumbersome if, for every single point (pixel, fragment) of every single triangle, you had to consider all other triangles in order to check for ray intersection.

    So how to do that efficiently? Answer: Reverse the process.

    There are usually far fewer lights than pixels in the scene. Let's take advantage of this fact and do some preprocessing:

    // preprocess
    foreach (light) {
        // find all pixels p on the scene reachable from the light
    }
    // then render the whole scene...
    foreach (point p to be shaded) {
        foreach (light) {
            // simply look up into what was calculated before...
            if (p is visible by the light)
                // p is lit
            else
                // p is in shadow
        }
    }
    

    That seems a lot faster... But two problems remain:

    1. how to find all pixels visible by the light?
    2. how to make them accessible quickly for lookup during rendering?

    Here's the tricky part:

    • In order to find all points visible by a light, place a camera there and render the whole scene! Depth test will reject the invisible points.
    • To make this result accessible later, save it as a texture and use that texture for lookup during the actual rendering stage.

    This technique is called Shadow Mapping, and the texture with pixels visible from a light is called a Shadow Map. For a more detailed explanation, see for example the Wikipedia article.
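
    For concreteness, the lookup during the rendering stage boils down to something like the following sketch (plain Java on the CPU, assuming a directional light with an orthographic projection so no perspective divide is needed; shadowMap and lightViewProj are illustrative names, and on real hardware this comparison runs in a fragment shader):

    import javax.vecmath.Matrix4d;
    import javax.vecmath.Point3d;

    public class ShadowMapLookup {

        // shadowMap[y][x] holds the depth of the closest surface seen from the light,
        // as produced by the "render the scene from the light" preprocessing pass.
        // A square shadow map is assumed.
        static boolean isInShadow(double[][] shadowMap, Matrix4d lightViewProj, Point3d worldPos) {
            // Transform the world-space point into the light's clip space.
            Point3d p = new Point3d(worldPos);
            lightViewProj.transform(p);             // orthographic light: no divide by w needed

            int size = shadowMap.length;
            // Map clip coordinates [-1, 1] to texel indices and to a [0, 1] depth value.
            int x = (int) ((p.x * 0.5 + 0.5) * (size - 1));
            int y = (int) ((p.y * 0.5 + 0.5) * (size - 1));
            double depthFromLight = p.z * 0.5 + 0.5;

            if (x < 0 || x >= size || y < 0 || y >= size) {
                return false;                       // outside the light's view: treat as lit
            }

            double bias = 0.005;                    // small offset against "shadow acne"
            // Shadowed if the light saw something closer than this point at that texel.
            return shadowMap[y][x] < depthFromLight - bias;
        }
    }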