I have a triangular tessellation like the one shown in the figure.
Given N triangles in the tessellation, I have an N x 3 x 3 array which stores the (x, y, z) coordinates of the three vertices of each triangle. My goal is to find, for each triangle, the neighbouring triangles that share an edge with it. The intricate part is that I do not want to repeat the neighbour count: if triangle j has already been counted as a neighbour of triangle i, then triangle i should not be counted again as a neighbour of triangle j. This way I would like to have a map storing the list of neighbours for each triangle index. If I start with the triangle at index i, then index i will have three neighbours, and all the others will have two or fewer. As an illustration, suppose I have an array which stores the vertices of the triangles:
import numpy as np
vertices = np.array([[[2.0, 1.0, 3.0], [3.0, 1.0, 2.0], [1.2, 2.5, -2.0]],
                     [[3.0, 1.0, 2.0], [1.0, 2.0, 3.0], [1.2, -2.5, -2.0]],
                     [[1.0, 2.0, 3.0], [2.0, 1.0, 3.0], [3.0, 1.0, 2.0]],
                     [[1.0, 2.0, 3.0], [2.0, 1.0, 3.0], [2.2, 2.0, 1.0]],
                     [[1.0, 2.0, 3.0], [2.2, 2.0, 1.0], [4.0, 1.0, 0.0]],
                     [[2.0, 1.0, 3.0], [2.2, 2.0, 1.0], [-4.0, 1.0, 0.0]]])
Suppose I start my count from triangle index 2, i.e. the one with the vertices [[1.0, 2.0, 3.0], [2.0, 1.0, 3.0], [3.0, 1.0, 2.0]]; then I would like my output to be something like:
neighbour = [[], [], [0, 1, 3], [4, 5], [], []].
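To make the de-duplication requirement concrete, here is a naive sketch that produces the output above for this small example (the helper name build_neighbour_map and the frozenset-based edge keys are just my illustration); for the real icosahedron data I am looking for something more efficient:

import numpy as np
from collections import defaultdict

def build_neighbour_map(vertices, start=2):
    """Naive illustration: scan triangles starting from `start`; every
    shared edge is credited to the first triangle that reaches it."""
    # identify each undirected edge by the frozenset of its two vertex tuples
    edge_to_tris = defaultdict(list)
    for t, tri in enumerate(vertices):
        pts = [tuple(p) for p in tri]
        for a, b in ((0, 1), (1, 2), (2, 0)):
            edge_to_tris[frozenset((pts[a], pts[b]))].append(t)
    # scan order: the chosen start triangle first, then everything else
    order = [start] + [t for t in range(len(vertices)) if t != start]
    seen_edges = set()
    neighbour = [[] for _ in vertices]
    for t in order:
        pts = [tuple(p) for p in vertices[t]]
        for a, b in ((0, 1), (1, 2), (2, 0)):
            edge = frozenset((pts[a], pts[b]))
            if edge in seen_edges:
                continue
            seen_edges.add(edge)
            neighbour[t].extend(u for u in edge_to_tris[edge] if u != t)
    return [sorted(n) for n in neighbour]

print(build_neighbour_map(vertices))   # [[], [], [0, 1, 3], [4, 5], [], []]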
Update: Following the answer from @Ajax1234, I think a good way of storing the output is just how @Ajax1234 has demonstrated. However, that output is ambiguous, in the sense that it is not possible to tell which triangle is whose neighbour. Although the example array above is not a good one, I have actual vertices from an icosahedron, so if I start with a given triangle I am guaranteed three neighbours for the first one and two neighbours for the rest (until all the triangles are exhausted). In this regard, suppose I have the following array:
vertices1 = [[[2, 1, 3], [3, 1, 2], [1, 2, -2]],
             [[3, 1, 2], [1, 2, 3], [1, -2, 2]],
             [[1, 2, 3], [2, 1, 3], [3, 1, 2]],
             [[1, 2, 3], [2, 1, 3], [2, 2, 1]],
             [[1, 2, 3], [2, 2, 1], [4, 1, 0]],
             [[2, 1, 3], [2, 2, 1], [-4, 1, 0]],
             [[3, 1, 3], [2, 2, 1], [-4, 1, 0]],
             [[8, 1, 2], [1, 2, 3], [1, -2, 2]]]
The BFS algorithm shown in the answer below by @Ajax1234 gives the output of
[0, 1, 3, 7, 4, 5, 6]
while if I just move the last element to an earlier position, as in
vertices2 = [[[2, 1, 3], [3, 1, 2], [1, 2, -2]],
             [[3, 1, 2], [1, 2, 3], [1, -2, 2]],
             [[1, 2, 3], [2, 1, 3], [3, 1, 2]],
             [[1, 2, 3], [2, 1, 3], [2, 2, 1]],
             [[1, 2, 3], [2, 2, 1], [4, 1, 0]],
             [[8, 1, 2], [1, 2, 3], [1, -2, 2]],
             [[2, 1, 3], [2, 2, 1], [-4, 1, 0]],
             [[3, 1, 3], [2, 2, 1], [-4, 1, 0]]]
which gives an output of
[0, 1, 3, 4, 5, 6, 7].
This is kind of ambiguous, as the positions in the grid have not changed at all; the entries were merely swapped. Therefore, I would like the search to be carried out in a consistent way. For example, the first search of neighbours at index 2 gives [0, 1, 3] for both vertices1 and vertices2. The search should then move to index 0, which finds nothing, and then to the next element, 1, which should find index 7 for vertices1 and index 5 for vertices2. Thus the current output should be [0, 1, 3, 7] for vertices1 and [0, 1, 3, 5] for vertices2, respectively. Next we go to index 3, and so on. After the search is exhausted, the final output for the first should be [0, 1, 3, 7, 4, 5, 6] and that for the second should be [0, 1, 3, 5, 4, 6, 7]. What would be an efficient way to achieve this?
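For reference, a minimal sketch of the kind of deterministic rule I have in mind, assuming neighbours are always expanded in ascending triangle-index order (the helper names adjacency and ordered_bfs are hypothetical, only meant to pin down the ordering I am after, not an efficient solution):

from collections import defaultdict, deque

def adjacency(triangles):
    """Link triangles that share an edge; an edge is identified by the
    sorted pair of its two vertex tuples."""
    edge_to_tris = defaultdict(list)
    for t, (a, b, c) in enumerate(triangles):
        for p, q in ((a, b), (b, c), (c, a)):
            edge_to_tris[tuple(sorted([tuple(p), tuple(q)]))].append(t)
    adj = defaultdict(set)
    for tris in edge_to_tris.values():
        for i in tris:
            adj[i].update(j for j in tris if j != i)
    return adj

def ordered_bfs(triangles, root=2):
    """BFS over the triangle adjacency graph, expanding neighbours in
    ascending index order so the traversal rule is always the same."""
    adj = adjacency(triangles)
    seen, queue, order = {root}, deque([root]), []
    while queue:
        t = queue.popleft()
        for u in sorted(adj[t]):
            if u not in seen:
                seen.add(u)
                order.append(u)
                queue.append(u)
    return order

print(ordered_bfs(vertices1))   # [0, 1, 3, 7, 4, 5, 6]
print(ordered_bfs(vertices2))   # [0, 1, 3, 5, 4, 6, 7]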
If you are willing to use the networkx library, you can take advantage of its fast BFS implementation. I know, adding another dependency is annoying, but the performance gain seems huge, see below.
import numpy as np
from scipy import spatial
import networkx
vertices = np.array([[[2.0, 1.0, 3.0], [3.0, 1.0, 2.0], [1.2, 2.5, -2.0]],
                     [[3.0, 1.0, 2.0], [1.0, 2.0, 3.0], [1.2, -2.5, -2.0]],
                     [[1.0, 2.0, 3.0], [2.0, 1.0, 3.0], [3.0, 1.0, 2.0]],
                     [[1.0, 2.0, 3.0], [2.0, 1.0, 3.0], [2.2, 2.0, 1.0]],
                     [[1.0, 2.0, 3.0], [2.2, 2.0, 1.0], [4.0, 1.0, 0.0]],
                     [[2.0, 1.0, 3.0], [2.2, 2.0, 1.0], [-4.0, 1.0, 0.0]]])

vertices1 = np.array([[[2, 1, 3], [3, 1, 2], [1, 2, -2]],
                      [[3, 1, 2], [1, 2, 3], [1, -2, 2]],
                      [[1, 2, 3], [2, 1, 3], [3, 1, 2]],
                      [[1, 2, 3], [2, 1, 3], [2, 2, 1]],
                      [[1, 2, 3], [2, 2, 1], [4, 1, 0]],
                      [[2, 1, 3], [2, 2, 1], [-4, 1, 0]],
                      [[3, 1, 3], [2, 2, 1], [-4, 1, 0]],
                      [[8, 1, 2], [1, 2, 3], [1, -2, 2]]])
def make(N=3000):
    """create N random points and triangulate them"""
    points = np.random.uniform(-10, 10, (N, 3))
    tri = spatial.Delaunay(points[:, :2])
    return points[tri.simplices]

def bfs_tree(triangles, root=0, return_short=True):
    """convert triangle list to graph with vertices = triangles,
    edges = pairs of triangles with a shared edge, and compute the bfs tree
    rooted at triangle number root"""
    # use the old view-as-void trick to merge triplets, so they can
    # for example be easily compared
    tr_as_v = triangles.view(f'V{3*triangles.dtype.itemsize}').reshape(
        triangles.shape[:-1])
    # for each triangle write out its edges, this involves doubling the
    # data because each vertex occurs twice
    tr2 = np.concatenate([tr_as_v, tr_as_v], axis=1).reshape(-1, 3, 2)
    # sort vertices within edges ...
    tr2.sort(axis=2)
    # ... and glue them together
    tr2 = tr2.view(f'V{6*triangles.dtype.itemsize}').reshape(
        triangles.shape[:-1])
    # to find shared edges, sort them ...
    idx = tr2.ravel().argsort()
    tr2s = tr2.ravel()[idx]
    # ... and then compare consecutive ones
    pairs, = np.where(tr2s[:-1] == tr2s[1:])
    assert np.all(np.diff(pairs) >= 2)
    # these are the edges of the graph, the vertices are implicitly
    # named 0, 1, 2, ...
    edges = np.concatenate([idx[pairs, None]//3, idx[pairs+1, None]//3], axis=1)
    # construct graph ...
    G = networkx.Graph(edges.tolist())
    # ... and let networkx do its magic
    res = networkx.bfs_tree(G, root)
    if return_short:
        # sort by distance from root and then by actual path
        sp = networkx.single_source_shortest_path(res, root)
        sp = [sp[i] for i in range(len(sp))]
        sp = [(len(p), p) for p in sp]
        res = sorted(range(len(res.nodes)), key=sp.__getitem__)
    return res
Demo:
# OP's second example:
>>> bfs_tree(vertices1, 2)
[2, 0, 1, 3, 7, 4, 5, 6]
>>>
# large random example
>>> random_data = make()
>>> random_data.shape
(5981, 3, 3)
>>> bfs = bfs_tree(random_data)
# returns instantly
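If you also need to know which triangle discovered which (the ambiguity about "whose neighbour is which" mentioned in the update), note that with return_short=False the function returns the networkx BFS tree itself, a directed graph whose edges point from each triangle to the triangles it discovered. A possible usage sketch on top of the function above (the dict layout is just one way to present it):

# get the BFS tree as a directed graph instead of a flat ordering
tree = bfs_tree(vertices1, root=2, return_short=False)

# parent -> children map: the root contributes its three neighbours,
# every other visited triangle contributes the new ones it discovers
discovered_by = {}
for parent, child in tree.edges():
    discovered_by.setdefault(parent, []).append(child)
print(discovered_by)
# for vertices1 with root=2 this gives something like:
# {2: [0, 1, 3], 1: [7], 3: [4, 5], 5: [6]}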