Tags: numpy, vectorization

Create an array of lengths between points, with the point indices given by another array


I have a matrix of vertex coordinates (3 by number of vertices) and an array of edges (2 by number of edges) which simply stores the indices of the vertices on either side of each edge, e.g. [[0,1], [1,2], [2,3]], these numbers being the indices of the points in the vertex matrix.

I want to create an array of edge lengths, presumably using linalg.norm, but I'm not sure how to do this in a vectorised way. I know I could loop through the edges, finding the norm of each pair of coordinates, but how would I do this in one go? Many thanks.

I can loop through it easily enough but fail miserably to get any further. A sketch of the loop version I have in mind is shown below.
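
As a point of reference, here is a minimal sketch of the loop-based approach described above, assuming a 3 x N vertex matrix (one column per vertex) and an edge array of index pairs; the array names and values are only illustrative:

    import numpy as np

    # Illustrative data: 3 x 4 vertex matrix (one column per vertex)
    # and one index pair per edge, as described in the question.
    vertices = np.array([[0.0, 1.0, 1.0, 0.0],
                         [0.0, 0.0, 1.0, 1.0],
                         [0.0, 0.0, 0.0, 1.0]])
    edges = np.array([[0, 1], [1, 2], [2, 3]])

    # Loop version: one norm per edge.
    lengths = np.empty(len(edges))
    for k, (i, j) in enumerate(edges):
        lengths[k] = np.linalg.norm(vertices[:, i] - vertices[:, j])
    print(lengths)
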


Solution

  • Here is an example. I will first generate 10 points in 3 dimensions and then 4 edges.

    import numpy as np

    # 10 points in 3D; with randint(0, 2) each coordinate is either 0 or 1
    coordinates = np.random.randint(0, 2, (10, 3))
    # 4 edges; each row holds the indices of the edge's two endpoints (0..9)
    edges = np.random.randint(0, 10, (4, 2))
    

    coordinates is now a 10x3 matrix where each row contains the x, y, z coordinates of one point. Because randint's upper bound is exclusive, randint(0, 2) makes every coordinate either 0 or 1 (easier for you to check the result). edges is a 4x2 matrix where each row contains the indices of the two endpoints of one edge.

    If we now compute

    # Fancy indexing gathers both endpoints of every edge at once
    edges_vectors = coordinates[edges[:, 0]] - coordinates[edges[:, 1]]
    

    we have a 4x3 matrix where each row corresponds to an edge and contains the difference vector between its two endpoints. All we have to do now is compute the norm of each row of edges_vectors using numpy's built-in norm:

    edge_norms = np.linalg.norm(edges_vectors,axis=1)
    

    It is important to specify axis=1 as each row is a vector.
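
    For completeness, here is a small self-contained sketch putting the steps together and checking the vectorised result against an explicit loop; the variable names follow the answer above and the data is randomly generated:

    import numpy as np

    coordinates = np.random.randint(0, 2, (10, 3))   # 10 points, 0/1 coordinates
    edges = np.random.randint(0, 10, (4, 2))         # 4 edges, indices 0..9

    # Vectorised: difference vectors for all edges at once, then row-wise norms.
    edges_vectors = coordinates[edges[:, 0]] - coordinates[edges[:, 1]]
    edge_norms = np.linalg.norm(edges_vectors, axis=1)

    # Sanity check against an explicit loop over the edges.
    loop_norms = np.array([np.linalg.norm(coordinates[i] - coordinates[j])
                           for i, j in edges])
    assert np.allclose(edge_norms, loop_norms)
    print(edge_norms)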