Tags: python, rendering, point-clouds, raycasting, fractals

Verification of 3D Fractal Slice-Rendering Method


Ray Casting Algorithm

MandelBulb Ray Casting Algorithm Python Example

So, if I understand correctly, the ray casting algorithm requires that an observer be located outside the 3D fractal; vectors are then drawn from the observer toward points on a plane that is normal to the viewing vector and passes through the origin.

It would seem to me that this would either severely limit the rendered view of the fractal or require stereoscopic 3D reconstruction of the fractal using multiple observer positions (which seems difficult to me). Additionally, no information can be gathered regarding the internal structure of the fractal.
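
For concreteness, the ray setup described above might look something like the following sketch (the observer position, grid resolution, and function name are illustrative choices of mine, not anything prescribed by the linked example):

import numpy as np

def external_observer_rays(observer, y_res=100, z_res=100, extent=2.0):
    # Target points on the x = 0 plane through the origin, i.e. the plane
    # normal to an observer sitting out on the +x axis.
    y, z = np.meshgrid(np.linspace(-extent, extent, y_res),
                       np.linspace(-extent, extent, z_res))
    targets = np.stack([np.zeros_like(y), y, z], axis=-1).reshape(-1, 3)
    # One unit ray per grid point, emitted from the external observer.
    v = targets - observer
    return v / np.linalg.norm(v, axis=1)[:, np.newaxis]

rays = external_observer_rays(observer=np.array([5.0, 0.0, 0.0]))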

Other Algorithms

Alternatively, Direct Volume Rendering seems intuitive enough; however, it is computationally expensive and potentially inefficient in and of itself. Indirect Volume Rendering, using an algorithm such as marching cubes, also seems to involve a bit of a learning curve.
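
For a sense of what the indirect route involves, here is a rough marching-cubes sketch using scikit-image. The clamped escape-magnitude field below is a crude stand-in for a proper MandelBulb distance field, and all names and parameters here are mine:

import numpy as np
from skimage import measure  # scikit-image

def mandelbulb_field(res=64, extent=1.2, iterations=8, degree=8):
    # Sample the magnitude of the final iterate of z -> z^degree + c on a
    # grid; bounded orbits stay small, diverging ones are clamped large.
    axis = np.linspace(-extent, extent, res)
    x0, y0, z0 = np.meshgrid(axis, axis, axis, indexing="ij")
    x, y, z = np.zeros_like(x0), np.zeros_like(y0), np.zeros_like(z0)
    for _ in range(iterations):
        r = np.sqrt(x * x + y * y + z * z)
        theta = np.arctan2(np.sqrt(x * x + y * y), z) * degree
        phi = np.arctan2(y, x) * degree
        zr = r ** degree
        x = zr * np.sin(theta) * np.cos(phi) + x0
        y = zr * np.sin(theta) * np.sin(phi) + y0
        z = zr * np.cos(theta) + z0
        # Clamp to avoid overflow where the orbit diverges.
        np.clip(x, -4, 4, out=x)
        np.clip(y, -4, 4, out=y)
        np.clip(z, -4, 4, out=z)
    return np.sqrt(x * x + y * y + z * z)

# Extract the isosurface where the orbit magnitude crosses the bailout of 2.
verts, faces, normals, values = measure.marching_cubes(mandelbulb_field(), level=2.0)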

Somewhere in the PDF of the 2nd link, cut-plane views are discussed as a way to see slices of the fractal.

Question:

Why not use cut planes as a rendering method?

  • 1) Using a modified ray tracing algorithm, say we put the observer at point Q at the origin (0, 0, 0).

  • 2) Let us then emit rays from the origin toward the slicing plane, which is spanned by combinations of y and z values and cuts through the fractal.

  • 3) Calculate the distance to the fractal surface using the algorithm in the 1st link. If the x component of the computed surface point is within a certain tolerance dx of the slicing plane, then its y and z coordinates, along with the x value of the slicing plane, are stored as the (x, y, z) coordinates of a point. These coordinates now represent the surface at that specific slice in x.

  • 4) Let us say that the slicing plane has one degree of freedom, along the x direction. By moving the plane along that degree of freedom, we obtain yet another set of x, y, z coordinates for each slice (see the sketch after this list).

  • 5) The final result is a calculable surface generated by the point cloud created in the previous steps.

  • 6) Additionally, the degree of freedom of the slicing plane can be altered to create another point cloud, which can then be verified against the previous one as a means of post-processing.
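
A rough driver for steps 2) through 5) might look like the sketch below. Here sweep_point_cloud is a hypothetical name, and march stands in for a ray marcher (built, say, on the distance estimator from the 1st link) that returns the distance from the origin to the surface along each unit ray, or inf on a miss:

import numpy as np

def sweep_point_cloud(march, x_slices, dx=0.01, y_res=200, z_res=200, extent=1.5):
    cloud = []
    for x_loc in x_slices:
        # Step 2: rays from the origin toward targets on the plane x = x_loc.
        y, z = np.meshgrid(np.linspace(-extent, extent, y_res),
                           np.linspace(-extent, extent, z_res))
        targets = np.stack([np.full_like(y, x_loc), y, z], axis=-1).reshape(-1, 3)
        v = targets / np.linalg.norm(targets, axis=1)[:, np.newaxis]
        # Step 3: distance to the surface along each ray.
        t = march(v)
        hits = t[:, np.newaxis] * v
        # Keep only hits whose x component lies within dx of the plane.
        near = np.isfinite(t) & (np.abs(hits[:, 0] - x_loc) < dx)
        cloud.append(hits[near])
    # Steps 4-5: sweeping x_slices and stacking yields the surface point cloud.
    return np.vstack(cloud)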

Please see the image below as a visual aid (the sphere represents the MandelBulb).

[image: a slicing plane intersecting the sphere that represents the MandelBulb]

Below is my Python code so far, adapted from the first link. I successfully generate the plane of points and am able to get the directions from the origin to the points on the plane. There must be something fundamentally flawed in the distance estimator function, because that's where everything breaks down and I get NaNs for the total distances.

import numpy as np
from numba import jit


def get_plane_points(x, y_res=500, z_res=500, y_min=-10, y_max=10, z_min=-10, z_max=10):
    # Build the grid of target points on the slicing plane at the given x.
    y = np.linspace(y_min, y_max, y_res)
    z = np.linspace(z_min, z_max, z_res)
    x, y, z = np.meshgrid(x, y, z)

    x, y, z = x.reshape(-1), y.reshape(-1), z.reshape(-1)

    P = np.vstack((x, y, z)).T
    return P


def get_directions(P):
    # Unit direction vectors from the observer at the origin to each plane point.
    v = P / np.linalg.norm(P, axis=1)[:, np.newaxis]
    return v


@jit
def DistanceEstimator(positions, plane_loc, iterations, degree):
    m = positions.shape[0]

    x, y, z = np.zeros(m), np.zeros(m), np.zeros(m)
    x0, y0, z0 = positions[:, 0], positions[:, 1], positions[:, 2]

    # Running derivative estimate and magnitude of the iterate.
    dr = np.zeros(m) + 1
    r = np.zeros(m)

    theta = np.zeros(m)
    phi = np.zeros(m)
    zr = np.zeros(m)

    for _ in range(iterations):
        r = np.sqrt(x * x + y * y + z * z)

        # Only update points whose current x component lies within dx of the
        # slicing plane; all other points are left untouched.
        dx = .01
        x_loc = plane_loc
        idx = (x < x_loc + dx) & (x > x_loc - dx)
        dr[idx] = np.power(r[idx], degree - 1) * degree * dr[idx] + 1.0

        # Convert to spherical coordinates, apply the degree-n power map, and
        # convert back (the standard MandelBulb iteration z -> z^n + c).
        theta[idx] = np.arctan2(np.sqrt(x[idx] * x[idx] + y[idx] * y[idx]), z[idx])
        phi[idx] = np.arctan2(y[idx], x[idx])

        zr[idx] = r[idx] ** degree
        theta[idx] = theta[idx] * degree
        phi[idx] = phi[idx] * degree

        x[idx] = zr[idx] * np.sin(theta[idx]) * np.cos(phi[idx]) + x0[idx]
        y[idx] = zr[idx] * np.sin(theta[idx]) * np.sin(phi[idx]) + y0[idx]
        z[idx] = zr[idx] * np.cos(theta[idx]) + z0[idx]

    # Standard distance estimate 0.5 * log(r) * r / dr; note this is NaN
    # wherever r is still 0, since log(0) * 0 evaluates to NaN.
    return 0.5 * np.log(r) * r / dr


def trace(directions, plane_location, max_steps=50, iterations=50, degree=8):
    # Sphere tracing: repeatedly advance each ray by the estimated distance
    # to the surface. (keep_iterations is meant to freeze finished rays but
    # is never updated below.)
    total_distance = np.zeros(directions.shape[0])
    keep_iterations = np.ones_like(total_distance)
    steps = np.zeros_like(total_distance)

    for _ in range(max_steps):
        positions = total_distance[:, np.newaxis] * directions
        distance = DistanceEstimator(positions, plane_location, iterations, degree)
        total_distance += distance * keep_iterations
        steps += keep_iterations

    # return 1 - (steps / max_steps) ** power
    return total_distance


def run():
    plane_location = 2
    plane_points = get_plane_points(x=plane_location)
    directions = get_directions(plane_points)
    distance = trace(directions, plane_location)

    return distance
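
A quick way to inspect the result is to reshape the flat distance array back onto the y-z grid (assuming the default 500 x 500 resolution) and view it as an image; the NaNs then show up as blank pixels, which makes the failure easy to localize:

import matplotlib.pyplot as plt

# With the meshgrid ordering above, the first axis is y and the second is z.
d = run().reshape(500, 500)
plt.imshow(d, origin="lower", extent=[-10, 10, -10, 10])
plt.colorbar(label="total ray distance")
plt.show()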

I am eager to hear thoughts on this and what limitations/issues I may encounter. Thanks in advance for the help!


Solution

  • If I am not mistaken, it is not impossible for this algorithm to work. There is inherent potential for problems in any assumptions made about the internal structure of the MandelBulb and about which positions an observer is allowed to occupy. That is, if the observer is known to initially be in a zone of convergence, then the ray tracing algorithm will return nothing meaningful, since the furthest distance that could be measured is 0. This is due to the fact that the current ray tracing algorithm terminates upon first contact with the surface. It is likely this could be altered, however.

    Rather than slicing the fractal with plane P, it might make more sense to prevent the termination of the ray upon first contact and instead terminate based on a distance that is known to lie past the surface of the MandelBulb.
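
    For instance, a non-terminating variant of trace() might march every ray a fixed total length and record each position where the distance estimate dips below a hit threshold, so that entry and exit points past the first surface are both captured. The sketch below assumes the estimator returns finite values; all names and thresholds are illustrative:

    import numpy as np

    def march_through(directions, estimate_distance, max_length=4.0,
                      step_cap=0.05, hit_eps=1e-3, max_steps=400):
        total = np.zeros(directions.shape[0])
        hits = []
        for _ in range(max_steps):
            positions = total[:, np.newaxis] * directions
            d = estimate_distance(positions)
            # Record near-surface samples instead of stopping at the first one.
            hits.append(positions[d < hit_eps])
            # Always advance at least hit_eps so rays pass through the surface;
            # step_cap bounds the stride where the estimate is large.
            total += np.clip(np.abs(d), hit_eps, step_cap)
            if np.all(total > max_length):
                break
        return np.vstack(hits)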