I am trying to install a Spack package on a cluster, and if I use
spack install namd
Spack downloads and installs its own MPI implementation. Since it is a cluster, I want to take advantage of the native MPI implementation, which the admins have tuned to run fast on this particular machine. How can I tell Spack to use the MPI that is already installed (e.g. OpenMPI or MPICH)?
BTW, I am very new to Spack. Thanks!
OK, I already figured it out by reading this page from the Spack docs. I need to create a config file with spack config edit packages
and add something like:
packages:
  openmpi:
    buildable: False
    modules:
      openmpi@3.1.3%gcc@8 arch=linux-x86_64-centos7: /opt/modules/mpi/gcc/8/openmpi/3.1.3
  all:
    compiler: [gcc@8]
    providers:
      mpi: [openmpi@3.1.3]
Actually, I need to load /opt/modules/compiladores/gcc/8 first
to make /opt/modules/mpi/gcc/8/openmpi/3.1.3
visible, so I need something like:
packages:
  openmpi:
    buildable: False
    modules:
      openmpi@3.1.3%gcc@8 arch=linux-x86_64-centos7:
      - /opt/modules/compiladores/gcc/8
      - /opt/modules/mpi/gcc/8/openmpi/3.1.3
But this does not work since it needs multiple external modules and it is not possible to specify more than one (see here).
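One possible workaround: newer Spack releases reworked packages.yaml into an externals: list, and in that format each external spec can (if your Spack version supports it) carry a list of modules. A hedged sketch, assuming a recent Spack and that the module names below (derived from the file paths above, with MODULEPATH pointing at /opt/modules) are correct for this cluster:

packages:
  openmpi:
    buildable: False
    externals:
    # module names are assumptions inferred from the module file paths;
    # adjust them to whatever `module avail` actually shows
    - spec: openmpi@3.1.3%gcc@8
      modules:
      - compiladores/gcc/8
      - mpi/gcc/8/openmpi/3.1.3

With this in place, spack install namd should resolve its mpi dependency to the external openmpi@3.1.3 rather than building one.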
Also, Spack does not use the external module directly: it creates an internal one by copying and parsing it, so it will ignore module dependencies or environment variables from the original external module that might be important. modules.yaml
also needs to be properly configured to set or prepend these environment variables.
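A minimal modules.yaml sketch of what that configuration could look like, assuming tcl module files and that the external OpenMPI needs its library directory prepended to LD_LIBRARY_PATH (the variable values here are illustrative, not taken from this cluster):

modules:
  tcl:
    openmpi:
      environment:
        prepend_path:
          # hypothetical path; use the lib directory of your site's OpenMPI
          LD_LIBRARY_PATH: /opt/mpi/gcc/8/openmpi/3.1.3/lib
        set:
          # illustrative example of a variable the original module might set
          OMPI_MCA_btl: self,vader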