I have a function D(x,y,z) that I want to evaluate (via interpolation) on planes along the x, y, and z axes, i.e. I want the output of my interpolation to be a 2D plane with one of the variables held fixed, D(x,y,0) for example.
I have created an interpolating function via scipy using some given values of D, D_values, for my input values of x,y,z.
from scipy.interpolate import RegularGridInterpolator as rgi
D_interp = rgi((x_positions, y_positions, z_positions), D_values)
Now I can get any point interpolated by just calling
D_interpolated = D_interp((xi, yi, zi))
I understand how I can evaluate individual points from this, but how would I interpolate a plane? For example, in my case, D_values is of size 345x155x303 and I want to interpolate 345x155 planes all along the z axis corresponding to the x and y input values, at z=0, z=1, z=2, etc.
My attempt at a solution was to feed the x_positions and y_positions vectors into D_interp while keeping z fixed, but that just gives me a set of D values evaluated at specific positions rather than organized into the grid-like planar output I'd actually like. The syntax doesn't allow me to call something like
Plane=D_interp(x_positions,y_positions,0)
so I am not quite sure about the syntax for calling this function to get planar output.
Any help is appreciated.
Thanks,
The typical approach in numpy and scipy to combining multiple arrays whose different sizes correspond to different dimensions is broadcasting. Here is a sample problem to illustrate the technique:
import numpy as np

x_positions = np.linspace(0, 10, 101)
y_positions = np.linspace(-10, 10, 201)
z_positions = np.linspace(-5, 5, 101)
D_values = np.sin(2 * np.pi * x_positions[:, None, None] * y_positions[:, None] / 100) + np.cos(2 * np.pi * y_positions[:, None] * z_positions / 50)
This is similar to the D_values array you describe in your problem, where each of the bins in the different directions corresponds to the *_positions arrays. I used broadcasting to turn x_positions into a (101, 1, 1)-shaped array, y_positions into a (201, 1)-shaped array, and left z_positions as a (101,)-shaped array. The result is that D_values is a (101, 201, 101)-shaped array. The reshaped versions of the input arrays did not copy any data!
You can call your interpolator using the same idea that I used to create the sample D_values.
D_interp = rgi((x_positions, y_positions, z_positions), D_values)
Let's say you want to fix z = 0. All that scipy requires is that the inputs broadcast together. Scalars broadcast with everything, so you can just do
x_interp = np.linspace(0.05, 0.95, 200)
y_interp = np.linspace(-9.95, 9.95, 400)
z_interp = 0
D_xy_interp = D_interp((x_interp[:, None], y_interp, z_interp))
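If you want a whole stack of such planes at once, say at z = 0, 1, and 2 as in your question, the same broadcasting trick extends to the z direction; a minimal sketch (z_planes and D_z_planes are names I made up, and the z values must lie inside your z_positions range):

z_planes = np.array([0.0, 1.0, 2.0])  # the fixed z values you want planes at
# (101, 1, 1), (201, 1) and (3,) broadcast to (101, 201, 3) for the sample grids;
# with your 345x155x303 data the same call gives a (345, 155, 3) stack of planes.
D_z_planes = D_interp((x_positions[:, None, None], y_positions[:, None], z_planes))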
The advantage of doing this over creating a mesh is that you don't have to copy any data around or create extra 200x400 input arrays. Another advantage is that you have better control over the output. In this case, D_xy_interp has shape (len(x_interp), len(y_interp)). That's because, in general, the shape of the output will be the broadcasted shape of the inputs. You saw that when we created D_values, and you can see it here. Since 0 is a scalar, it does not contribute to the shape. But I could also make a (400, 200)-shaped array instead:
D_interp((x_interp, y_interp[:, None], z_interp))
Or even a (100, 4, 100, 2)-shaped array:
D_interp((x_interp.reshape(-1, 2), y_interp.reshape(-1, 4, 1, 1), z_interp))
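As an aside, if you want to predict the output shape before calling the interpolator, np.broadcast_shapes (NumPy 1.20+) applies the same broadcasting rules; a scalar contributes an empty shape:

np.broadcast_shapes((200,), (400, 1), ())          # -> (400, 200)
np.broadcast_shapes((100, 2), (100, 4, 1, 1), ())  # -> (100, 4, 100, 2)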
In either case, let's verify that the interpolator did its job. We can compare the interpolated values to the function that created D_values, evaluated at the same (much finer) sampling points:
D_xy_values = np.sin(2 * np.pi * x_interp[:, None] * y_interp / 100) + np.cos(2 * np.pi * y_interp * z_interp / 50)
import matplotlib.pyplot as plt

fig, ax = plt.subplots(subplot_kw={'projection': '3d'})
ax.plot_surface(x_interp[:, None], y_interp, D_xy_interp, label='Interp')
ax.plot_surface(x_interp[:, None], y_interp, D_xy_values, label='Values')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
At the moment it doesn't look like you can add legends to 3D plots. The two plots are virtually indistinguishable: with the default color cycler, you will see the surface change from blue to orange as you rotate it. Here is a quantitative verification:
>>> np.sqrt(np.mean((D_xy_values - D_xy_interp)**2))
4.707625623185639e-05
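One more sanity check you might find reassuring (not part of the verification above): if you evaluate on the original x/y grid at a z value that is itself a grid point, linear interpolation should simply hand back the corresponding slice of D_values:

plane_at_z0 = D_interp((x_positions[:, None], y_positions, z_positions[50]))  # z_positions[50] is 0.0 for the sample grid
print(plane_at_z0.shape)                             # (101, 201)
print(np.allclose(plane_at_z0, D_values[:, :, 50]))  # True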