struct Face
{
    // Matrixd is a 1D representation of a 2D matrix
    std::array<Matrixd<5,5>, 2> M;
};
std::vector<Face> face;
I have a for-loop distributed among the nodes. After all nodes finish working on their elements, I would like to exchange the corresponding elements among the nodes. But AFAIK, to use MPI_Allgatherv the data should be contiguous. First of all, I switched to a 1D representation of the 2D matrices (I was using [][] notation before). Now I want to make face.M contiguous as well. I am thinking of copying all elements of, say, M[0] into a flat buffer and transferring that among the nodes, as sketched below. Is this approach efficient? To give an idea of the amount of data I work with: with 20k cells I have at most 20k*3 = 60k faces, and I might have a million cells, too.
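Roughly what I have in mind (just a sketch; Matrixd<5,5>::data() returning a pointer to its 25 contiguous doubles, and the per-rank recvcounts/displs arrays counted in doubles, are assumptions here):

#include <mpi.h>
#include <vector>

// Pack every face's M[0] into one flat buffer of doubles, then exchange it.
std::vector<double> gatherM0(const std::vector<Face>& face,
                             const std::vector<int>& recvcounts,
                             const std::vector<int>& displs)
{
    std::vector<double> sendbuf;
    sendbuf.reserve(face.size() * 25);
    for (const Face& f : face)
        sendbuf.insert(sendbuf.end(), f.M[0].data(), f.M[0].data() + 25);

    // Every rank receives the concatenation of all ranks' contributions.
    std::vector<double> recvbuf(displs.back() + recvcounts.back());
    MPI_Allgatherv(sendbuf.data(), static_cast<int>(sendbuf.size()), MPI_DOUBLE,
                   recvbuf.data(), recvcounts.data(), displs.data(), MPI_DOUBLE,
                   MPI_COMM_WORLD);
    return recvbuf;
}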
A true 2D array in C/C++, e.g. int foo[5][5], is already contiguous in memory; it's basically just syntactic sugar for int foo[25], where accesses like foo[3][2] implicitly look up foo[3*5 + 2] in the flat equivalent. Switching to a Matrixd defined in a single dimension won't change the actual memory layout.
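To make that concrete, here is a small hypothetical check (not from the original post) that verifies the layout described above:

#include <cstdio>

int main()
{
    int foo[5][5] = {};

    // The whole 2D array is one contiguous block of 25 ints, no row padding.
    static_assert(sizeof(foo) == 25 * sizeof(int), "contiguous, no padding");

    // foo[3][2] sits 3*5 + 2 = 17 elements past the start of that block.
    long offset = (reinterpret_cast<const char*>(&foo[3][2]) -
                   reinterpret_cast<const char*>(&foo)) / static_cast<long>(sizeof(int));
    std::printf("element offset of foo[3][2]: %ld\n", offset);   // prints 17
    return 0;
}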
std::array is (mostly) just a wrapper for a C-style array as well: it has no virtual members, a compile-time-defined size, and no internal pointers (just the raw array), so it's also going to be contiguous. I strongly suspect that if you checked the resulting memory layout, you'd find that the array of Matrixds is already contiguous.
In short, I don't think you need to change anything; you're already contiguous, so MPI should be fine.
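For instance, assuming your Matrixd<5,5> is just 25 doubles in a flat array under the hood (the stand-in below is a guess at your implementation), you can assert the layout and hand the vector's buffer straight to MPI_Allgatherv:

#include <mpi.h>
#include <array>
#include <type_traits>
#include <vector>

// Stand-in for Matrixd<5,5>: 25 doubles in row-major order (assumption).
template <int R, int C>
struct Matrixd { std::array<double, R * C> a; };

struct Face { std::array<Matrixd<5, 5>, 2> M; };

int main(int argc, char** argv)
{
    // If these hold, a std::vector<Face> is one contiguous run of doubles.
    static_assert(std::is_trivially_copyable<Face>::value, "bitwise copyable");
    static_assert(sizeof(Face) == 2 * 25 * sizeof(double), "no padding");

    MPI_Init(&argc, &argv);
    int nranks;
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    std::vector<Face> face(1000);                             // local faces
    int sendcount = static_cast<int>(face.size()) * 2 * 25;   // doubles per rank

    // Gather how many doubles each rank contributes and build displacements.
    std::vector<int> recvcounts(nranks), displs(nranks, 0);
    MPI_Allgather(&sendcount, 1, MPI_INT,
                  recvcounts.data(), 1, MPI_INT, MPI_COMM_WORLD);
    for (int r = 1; r < nranks; ++r)
        displs[r] = displs[r - 1] + recvcounts[r - 1];

    std::vector<double> all(displs.back() + recvcounts.back());
    MPI_Allgatherv(face.data(), sendcount, MPI_DOUBLE,
                   all.data(), recvcounts.data(), displs.data(), MPI_DOUBLE,
                   MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}

If Matrixd ever gains extra members (size fields, pointers, a vtable), those static_asserts will fail, and then packing into a flat buffer as you describe is the fallback.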