I have a 3D image that I'm trying to transform with a known coordinate mapping. I'm trying to use map_coordinates, but the scipy documentation only discusses mapping to a 1D vector, which left me rather confused.
The transformation is vectorized, so I can give it a meshgrid of x, y, z indices and it produces a 3 x nx x ny x nz array, where the first index runs over the x, y, z components of the vector field and the other three correspond directly to the meshgrid input dimensions.
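For concreteness, a toy field in that format (a hypothetical stand-in for my real transformation) would look something like:

import numpy as np

def vectorized_interp_field(x, y, z):
    # Pack the three coordinate components into one (3, nx, ny, nz) array;
    # a trivial one-pixel shift along x stands in for the real field here
    return np.stack([x + 1.0, y, z])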
Now I just need to map each element of the output array to the corresponding pixel in the initial image. I want to use map_coordinates, but the format of the coordinates argument is not clear to me.
Can anyone give me an example of how I would create the coordinates array in this case?
Finally figured it out, so I thought I would leave this here.
import numpy as np
from scipy.ndimage import map_coordinates

# transposed_frame is the 3D image that needs to be transformed (shape (632, 352, 35))
# Meshgrid of the matrix coordinates
x_grid, y_grid, z_grid = np.meshgrid(np.arange(transposed_frame.shape[0]),
                                     np.arange(transposed_frame.shape[1]),
                                     np.arange(transposed_frame.shape[2]),
                                     indexing='ij')
# Inverse transformation that needs to be applied to the image (shape (3, 632, 352, 35)).
# The first dimension goes over the components of the vector field (x, y, z),
# so transform[0, i, j, k] is the x coordinate of the vector field at point [i, j, k].
transform = vectorized_interp_field(x_grid, y_grid, z_grid)

# Transforming the image with map_coordinates is then as simple as
inverse_transformed = map_coordinates(transposed_frame, transform)
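For anyone who wants to try this without my actual image or field, here is a self-contained toy version (the one-pixel shift along the last axis is just a stand-in so the result is easy to verify):

import numpy as np
from scipy.ndimage import map_coordinates

frame = np.arange(24, dtype=float).reshape(2, 3, 4)
x, y, z = np.meshgrid(np.arange(2), np.arange(3), np.arange(4), indexing='ij')
# Sample each output pixel one step further along the last axis
transform = np.stack([x, y, z + 1.0])
shifted = map_coordinates(frame, transform, order=1, mode='nearest')
# shifted[i, j, k] == frame[i, j, k + 1], clamped at the boundary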
The part I didn't understand about map_coordinates was exactly what form the mapping array is supposed to have for higher-dimensional data. It seems to work in general as follows:
B = map_coordinates(A, mapping)
B[i, j, k] == A[mapping[0, i, j, k], mapping[1, i, j, k], mapping[2, i, j, k]]

That is, output element [i, j, k] is sampled from A at the coordinates mapping[:, i, j, k]; for non-integer coordinates the value is spline-interpolated (order 3 by default).
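A quick sanity check of that indexing convention (a minimal sketch, using an identity mapping and order=1 so the interpolation is exact at integer points):

import numpy as np
from scipy.ndimage import map_coordinates

A = np.random.rand(4, 5, 6)
# Identity mapping: mapping[:, i, j, k] == (i, j, k), so B should equal A
mapping = np.stack(np.meshgrid(np.arange(4), np.arange(5), np.arange(6),
                               indexing='ij')).astype(float)
B = map_coordinates(A, mapping, order=1)
assert np.allclose(B, A)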