Tags: c++, mpi, petsc

Solving large system of linear equations using MPI+CUDA and PETSc distributed arrays


I'd like to use the PETSc library in my own program to solve large systems of linear equations distributed across processes. I'd also like to use the available GPU resources for this purpose. Since I'm using a structured mesh for the discrete representation of the 3D computational domain, it is preferable to use PETSc distributed arrays (DMDA) to avoid extra data transfers between processes.

I've configured PETSc with the following options: ./configure --prefix=/usr/local/petsc --with-mpi=1 --with-cuda=1 --with-cusp=1 --with-cusp-include=/usr/local/cuda/include/cusp/ --with-cusp-lib=

and then installed it to the /usr/local/petsc location.

Now I'm trying to create a DMDA object in a simple test program:

#include <stdio.h>
#include <math.h>
#include <string.h>
#include <stdlib.h>

#include "cuda.h"
#include "mpi.h"

/* PETSc headers */
#include "petscsys.h"
#include "petscksp.h"
#include "petscdmda.h"
#include "petscksp.h"

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    PetscInitialize(&argc, &argv, NULL, NULL);

    DM da;
    Vec x, b;                /* approximate solution, right hand side */
    Mat A;                   /* linear system matrix */
    KSP ksp;                 /* linear solver context */
    KSPType ksptype;
    PC pc;
    PCType pctype;
    PetscErrorCode ierr;

    PetscInt Nx = 100;
    PetscInt Ny = 100;
    PetscInt Nz = 100;

    PetscInt NPx = 1;
    PetscInt NPy = 1;
    PetscInt NPz = 1;

    ierr = DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DMDA_STENCIL_STAR, 
                        Nx, Ny, Nz, NPx, NPy, NPz, 1, 1, NULL, NULL, NULL, &da); CHKERRQ(ierr);

    ierr = DMSetMatType(da, MATMPIAIJ); CHKERRQ(ierr);

    /* Create distributed matrix object according to DA */
    ierr = DMCreateMatrix(da, &A); CHKERRQ(ierr);

    /* Initialize all matrix entries to zero */
    /*
    ierr = MatZeroEntries(A); CHKERRQ(ierr);
    */

    fprintf(stdout, "All was done.\n");

    PetscFinalize();
    MPI_Finalize(); 

    return 0;
}

But when I run it, I get the following error:

[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: No support for this operation for this object type
[0]PETSC ERROR: DM can not create LocalToGlobalMapping
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.7.4-1919-g73530f8  GIT Date: 2016-11-09 03:25:31 +0000
[0]PETSC ERROR: Configure options --prefix=/usr/local/petsc --with-mpi=1 --with-cuda=1 --with-cusp=1 --with-cusp-include=/usr/local/cuda/include/cusp/ --with-cusp-lib=
[0]PETSC ERROR: #1 DMGetLocalToGlobalMapping() line 986 in ~/petsc/src/dm/interface/dm.c
[0]PETSC ERROR: #2 DMCreateMatrix_DA_3d_MPIAIJ() line 1051 in ~/petsc/src/dm/impls/da/fdda.c
[0]PETSC ERROR: #3 DMCreateMatrix_DA() line 760 in ~/petsc/src/dm/impls/da/fdda.c
[0]PETSC ERROR: #4 DMCreateMatrix() line 1201 in ~/petsc/src/dm/interface/dm.c
[0]PETSC ERROR: #5 main() line 47 in petsc_test.c
[0]PETSC ERROR: No PETSc Option Table entries
[0]PETSC ERROR: ----------------End of Error Message -------send entire error message to [email protected]

What can be wrong with this simple code?

UPDATE: The output of the uname -a command: Linux PC 4.4.0-47-generic #68-Ubuntu SMP Wed Oct 26 19:39:52 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux.

The Open MPI implementation of the MPI specification is used.


Solution

  • In current versions of PETSc, one must explicitly call the DMSetUp() routine after DMDACreate3d() and before using the DM (e.g. with DMCreateMatrix()); see the PETSc documentation on DMSetUp() for details. A sketch of the corrected sequence follows below.
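
For reference, a minimal sketch of the corrected creation sequence. The DMSetFromOptions() call, the cleanup at the end, and letting PetscInitialize() handle MPI start-up instead of calling MPI_Init()/MPI_Finalize() directly are my additions, not part of the original question code:

#include "petscdmda.h"
#include "petscmat.h"

int main(int argc, char *argv[])
{
    DM             da;
    Mat            A;
    PetscErrorCode ierr;

    PetscInt Nx = 100, Ny = 100, Nz = 100;   /* global grid size, as in the question */
    PetscInt NPx = 1, NPy = 1, NPz = 1;      /* process grid, as in the question */

    ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

    ierr = DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                        DMDA_STENCIL_STAR, Nx, Ny, Nz, NPx, NPy, NPz, 1, 1,
                        NULL, NULL, NULL, &da); CHKERRQ(ierr);

    /* Optional: let run-time options adjust the DMDA before it is set up */
    ierr = DMSetFromOptions(da); CHKERRQ(ierr);

    /* Required in current PETSc: finish setting up the DM before it is used */
    ierr = DMSetUp(da); CHKERRQ(ierr);

    ierr = DMSetMatType(da, MATMPIAIJ); CHKERRQ(ierr);
    ierr = DMCreateMatrix(da, &A); CHKERRQ(ierr);

    ierr = MatDestroy(&A); CHKERRQ(ierr);
    ierr = DMDestroy(&da); CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
}

With DMSetUp() in place, the DM can build its local-to-global mapping, which is exactly what DMGetLocalToGlobalMapping() complains about in the error trace above.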