This is a follow-up question to MPI_Gather 2D array. Here is the situation:
id = 0 has this submatrix
|16.000000| |11.000000| |12.000000| |15.000000|
|6.000000| |1.000000| |2.000000| |5.000000|
|8.000000| |3.000000| |4.000000| |7.000000|
|14.000000| |9.000000| |10.000000| |13.000000|
-----------------------
id = 1 has this submatrix
|12.000000| |15.000000| |16.000000| |11.000000|
|2.000000| |5.000000| |6.000000| |1.000000|
|4.000000| |7.000000| |8.000000| |3.000000|
|10.000000| |13.000000| |14.000000| |9.000000|
-----------------------
id = 2 has this submatrix
|8.000000| |3.000000| |4.000000| |7.000000|
|14.000000| |9.000000| |10.000000| |13.000000|
|16.000000| |11.000000| |12.000000| |15.000000|
|6.000000| |1.000000| |2.000000| |5.000000|
-----------------------
id = 3 has this submatrix
|4.000000| |7.000000| |8.000000| |3.000000|
|10.000000| |13.000000| |14.000000| |9.000000|
|12.000000| |15.000000| |16.000000| |11.000000|
|2.000000| |5.000000| |6.000000| |1.000000|
-----------------------
The global matrix:
|1.000000| |2.000000| |5.000000| |6.000000|
|3.000000| |4.000000| |7.000000| |8.000000|
|11.000000| |12.000000| |15.000000| |16.000000|
|-3.000000| |-3.000000| |-3.000000| |-3.000000|
What I am trying to do is gather only the central elements (the ones not on the borders) into the global grid, so the global grid should look like this:
|1.000000| |2.000000| |5.000000| |6.000000|
|3.000000| |4.000000| |7.000000| |8.000000|
|9.000000| |10.000000| |13.000000| |14.000000|
|11.000000| |12.000000| |15.000000| |16.000000|
and not like the one I am getting. This is the code I have:
float **gridPtr;
float **global_grid;
MPI_Datatype rowType;
int lengthSubN = N / pSqrt; // N is the dimension of the global grid and pSqrt the square root of the number of processes
MPI_Type_contiguous(lengthSubN, MPI_FLOAT, &rowType);
MPI_Type_commit(&rowType);
if(id == 0) {
    MPI_Gather(&gridPtr[1][1], 1, rowType, global_grid[0], 1, rowType, 0, MPI_COMM_WORLD);
    MPI_Gather(&gridPtr[2][1], 1, rowType, global_grid[1], 1, rowType, 0, MPI_COMM_WORLD);
} else {
    MPI_Gather(&gridPtr[1][1], 1, rowType, NULL, 0, rowType, 0, MPI_COMM_WORLD);
    MPI_Gather(&gridPtr[2][1], 1, rowType, NULL, 0, rowType, 0, MPI_COMM_WORLD);
}
...
float** allocate2D(float** A, const int N, const int M) {
    int i;
    float *t0;

    A = malloc(M * sizeof(float*));     /* Allocating pointers */
    if(A == NULL)
        printf("MALLOC FAILED in A\n");
    t0 = malloc(N * M * sizeof(float)); /* Allocating data */
    if(t0 == NULL)
        printf("MALLOC FAILED in t0\n");
    for(i = 0; i < M; i++)
        A[i] = t0 + i * N;
    return A;
}
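For reference, global_grid is allocated with this function roughly as follows (the actual call site is not shown above, so the exact arguments are an assumption based on the sizes described):

global_grid = allocate2D(global_grid, N, N); /* assumed: the full N x N global grid, stored contiguously */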
EDIT:
Here is my attempt without MPI_Gather(), but with MPI_Type_create_subarray() instead:
MPI_Datatype mysubarray;
int starts[2] = {1, 1};
int subsizes[2] = {lengthSubN, lengthSubN};
int bigsizes[2] = {N_glob, M_glob};
MPI_Type_create_subarray(2, bigsizes, subsizes, starts,
MPI_ORDER_C, MPI_FLOAT, &mysubarray);
MPI_Type_commit(&mysubarray);
MPI_Isend(&(gridPtr[0][0]), 1, mysubarray, 0, 3, MPI_COMM_WORLD, &req[0]);
MPI_Type_free(&mysubarray);
MPI_Barrier(MPI_COMM_WORLD);
if(id == 0) {
    for(i = 0; i < p; ++i) {
        MPI_Irecv(&(global_grid[i][0]), lengthSubN * lengthSubN, MPI_FLOAT, i, 3, MPI_COMM_WORLD, &req[0]);
    }
}
if(id == 0)
    print(global_grid, N_glob, N_glob);
but the result is:
|1.000000| |2.000000| |3.000000| |4.000000|
|5.000000| |6.000000| |7.000000| |8.000000|
|9.000000| |10.000000| |11.000000| |12.000000|
|13.000000| |14.000000| |15.000000| |16.000000|
which is not exactly what I want. I have to find a way to tell the receive side to place the data differently. So, if I do:
MPI_Irecv(&(global_grid[0][0]), 1, mysubarray, 0, 3, MPI_COMM_WORLD, &req[0]);
then I would get:
|-3.000000| |-3.000000| |-3.000000| |-3.000000|
|-3.000000| |1.000000| |2.000000| |-3.000000|
|-3.000000| |3.000000| |4.000000| |-3.000000|
|-3.000000| |-3.000000| |-3.000000| |-3.000000|
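One direction I am considering (only an untested sketch; the row-major mapping of ranks to blocks and the separate rreq request array are assumptions on my part) is to build a separate receive-side subarray per sender, each with its own starts, so that every block lands at its block position in the global grid:

if(id == 0) {
    MPI_Request rreq[p]; /* assumed separate request array for the receives */
    for(i = 0; i < p; ++i) {
        MPI_Datatype recvsub;
        /* where rank i's block should start in the global grid,
           assuming a row-major pSqrt x pSqrt arrangement of blocks */
        int rstarts[2]   = { (i / pSqrt) * lengthSubN, (i % pSqrt) * lengthSubN };
        int rsubsizes[2] = { lengthSubN, lengthSubN };
        int rbigsizes[2] = { N_glob, M_glob };
        MPI_Type_create_subarray(2, rbigsizes, rsubsizes, rstarts,
                                 MPI_ORDER_C, MPI_FLOAT, &recvsub);
        MPI_Type_commit(&recvsub);
        MPI_Irecv(&(global_grid[0][0]), 1, recvsub, i, 3, MPI_COMM_WORLD, &rreq[i]);
        MPI_Type_free(&recvsub); /* only deallocated once the pending receive completes */
    }
    MPI_Waitall(p, rreq, MPI_STATUSES_IGNORE);
}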
I cannot give a full solution, but I will explain why your original example using MPI_Gather does not work as expected.
With lengthSubN = 2, you defined a new datatype of 2 floats which are stored adjacent in memory at this line:
MPI_Type_contiguous(lengthSubN, MPI_FLOAT, &rowType);
Now, let's take a look at the first MPI_Gather call, which is:
if(id == 0) {
    MPI_Gather(&gridPtr[1][1], 1, rowType, global_grid[0], 1, rowType, 0, MPI_COMM_WORLD);
} else {
    MPI_Gather(&gridPtr[1][1], 1, rowType, NULL, 0, rowType, 0, MPI_COMM_WORLD);
}
It takes 1 rowType, i.e. 2 adjacent floats starting at element gridPtr[1][1], from each rank. These are the values:
id 0: 1.0 2.0
id 1: 5.0 6.0
id 2: 9.0 10.0
id 3: 13.0 14.0
and places them adjacent in the receive buffer pointed to by global_grid[0]. This pointer points to the start of the first row, so the memory is filled with:
1.0 2.0 5.0 6.0 9.0 10.0 13.0 14.0
But global_grid has only 4 columns per row, so the last 4 values wrap into the second row pointed to by global_grid[1] (*). This may even be undefined behaviour. Thus, after this MPI_Gather, the contents of global_grid are:
1.0 2.0 5.0 6.0
9.0 10.0 13.0 14.0
-3.0 -3.0 -3.0 -3.0
-3.0 -3.0 -3.0 -3.0
The second MPI_Gather works the same way and starts writing at the second row of global_grid:
3.0 4.0 7.0 8.0 11.0 12.0 15.0 16.0
It thus overwrites some of the values written before, and the result is the one you observed:
1.0 2.0 5.0 6.0
3.0 4.0 7.0 8.0
11.0 12.0 15.0 16.0
-3.0 -3.0 -3.0 -3.0
(*) allocate2D actually allocates contiguous memory for the two-dimensional data buffer.
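A quick way to convince yourself of this (an illustrative fragment, assuming <assert.h> is included and the 4-column global_grid from above):

/* allocate2D sets A[i] = t0 + i * N, so the row pointers are plain
   offsets into one contiguous buffer */
assert(global_grid[1] == global_grid[0] + 4); /* row 1 starts right after row 0 */
assert(global_grid[2] == global_grid[0] + 8);
/* hence writing 8 floats starting at global_grid[0] runs over into row 1 */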