LLNL MPI Implementations and Compilers

Multiple Implementations

Although the MPI programming interface has been standardized, actual library implementations will differ. For example, implementations can differ in which version of the MPI standard they support, how they are optimized for the underlying hardware and interconnect, and which compilers they can be built with.

MPI library implementations on LC systems vary, as do the compilers they are built for. These are summarized in the table below:

MPI Library        Where?                                    Compilers
MVAPICH            Linux clusters                            GNU, Intel, Clang
Open MPI           Linux clusters                            GNU, Intel, Clang
Intel MPI          Linux clusters                            Intel, GNU
IBM Spectrum MPI   CORAL Early Access and Sierra clusters    IBM, GNU, PGI, Clang

Each MPI library is briefly discussed in the following sections, including links to additional detailed information.

Selecting Your MPI Library and Compiler

LC provides a default MPI library for each cluster.

LC also provides default compilers for each cluster.

Typically, there are multiple versions of MPI libraries and compilers on each cluster.

Modules are used to select a specific MPI library or compiler (see LC's Modules documentation for details).

For example, using modules:

### List currently loaded modules
% module list

Currently Loaded Modules:
  1) intel-classic/2021.6.0-magic   3) jobutils/1.0       5) StdEnv (S)
  2) mvapich2/2.3.7                 4) texlive/20220321

% module avail mvapich

----- /usr/tce/modulefiles/MPI/intel-classic/2021.6.0-magic/mvapich2/2.3.7 -----
   mvapich2-tce/2.3.7

---------- /usr/tce/modulefiles/Compiler/intel-classic/2021.6.0-magic ----------
   mvapich2/2.3.7 (L)

% module avail openmpi

---------- /usr/tce/modulefiles/Compiler/intel-classic/2021.6.0-magic ----------
   openmpi/4.1.2



### Load a different MPI module
% module load openmpi

Lmod is automatically replacing "mvapich2/2.3.7" with "openmpi/4.1.2".

% module list

Currently Loaded Modules:
  1) intel-classic/2021.6.0-magic   3) texlive/20220321       5) openmpi/4.1.2
  2) jobutils/1.0                   4) StdEnv           (S)

  Where:
   S:  Module is Sticky, requires --force to unload or purge

MVAPICH

General Info

MVAPICH MPI is developed and supported by the Network-Based Computing Lab at Ohio State University.

Available on all of LC’s Linux clusters.

MVAPICH2 is the default MPI library on LC's Linux clusters, as reflected in the module listing shown above.

To see what versions are available, and/or to select an alternate version, use Modules commands. For example:

module avail mvapich         (list available modules)
module load mvapich2/2.3     (use the module of interest)

Compiling

See the MPI Build Scripts table below.
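For example, compiling C or Fortran MPI source with the wrapper scripts might look like the following (hello_mpi.c and hello_mpi.f90 are placeholder file names):

mpicc -o hello_mpi hello_mpi.c
mpifort -o hello_mpi hello_mpi.f90

The wrappers accept the same options as the underlying compiler, so optimization and debugging flags can be passed through unchanged.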

Running

MPI executables are launched using the SLURM srun command with the appropriate options. For example, to launch an 8-process MPI job split across two different nodes in the pdebug pool:

srun -N2 -n8 -ppdebug a.out

The srun command is discussed in detail in the Running Jobs section of the Linux Clusters Overview tutorial.

Documentation

MVAPICH home page: http://mvapich.cse.ohio-state.edu/

Open MPI

General Information

Open MPI is a thread-safe, open source MPI implementation developed and supported by a consortium of academic, research, and industry partners.

Available on all LC Linux clusters. However, you’ll need to load the desired module first. For example:

module avail                 (list available modules)
module load openmpi/3.0.1    (use the module of interest)

This ensures that LC’s MPI wrapper scripts point to the desired version of Open MPI.

Compiling

See the MPI Build Scripts table below.

Running

Be sure to load the same Open MPI module that you used to build your executable. If you are running a batch job, you will need to load the module in your batch script.

Launching an Open MPI job can be done with any of the following commands. For example, to run a 48-process MPI job:

mpirun -np 48 a.out
mpiexec -np 48 a.out
srun -n 48 a.out
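For a batch job, a minimal script sketch might look like the following (this assumes Slurm sbatch, the pdebug partition, and an executable named a.out; node, task, and time values are placeholders to adjust):

#!/bin/bash
#SBATCH -N 2
#SBATCH -n 48
#SBATCH -p pdebug
#SBATCH -t 30

# Load the same Open MPI module used to build the executable
module load openmpi/4.1.2

# Launch the 48-process MPI job
srun -n 48 a.out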

Documentation

Open MPI home page: http://www.open-mpi.org/

Intel MPI

Available on LC's Linux clusters and built for the Intel and GNU compilers (see the table above). Use Modules commands to list the available versions and to load the one of interest.

CORAL Early Access and Sierra Clusters

These systems use IBM Spectrum MPI, which is built for the IBM, GNU, PGI, and Clang compilers (see the table above).

MPI Build Scripts

LC-developed MPI compiler wrapper scripts are used to compile MPI programs on all LC systems.

These scripts automatically perform some error checks, add the appropriate MPI #include file paths, link to the necessary MPI libraries, and pass options to the underlying compiler.

The table below lists the primary MPI compiler wrapper scripts for LC's Linux clusters. For CORAL EA / Sierra systems, which use IBM Spectrum MPI, refer to that library's documentation.

MPI Build Scripts - Linux Clusters
Implementation   Language   Script Name              Underlying Compiler
MVAPICH2         C          mpicc                    C compiler for loaded compiler package
                 C++        mpicxx, mpic++           C++ compiler for loaded compiler package
                 Fortran    mpif77                   Fortran77 compiler for loaded compiler package. Points to mpifort.
                            mpif90                   Fortran90 compiler for loaded compiler package. Points to mpifort.
                            mpifort                  Fortran 77/90 compiler for loaded compiler package.
Open MPI         C          mpicc                    C compiler for loaded compiler package
                 C++        mpiCC, mpic++, mpicxx    C++ compiler for loaded compiler package
                 Fortran    mpif77                   Fortran77 compiler for loaded compiler package. Points to mpifort.
                            mpif90                   Fortran90 compiler for loaded compiler package. Points to mpifort.
                            mpifort                  Fortran 77/90 compiler for loaded compiler package.
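To check exactly which underlying compiler command a wrapper will run, the wrappers can echo their command line. As a quick sketch (hello_mpi.c is a placeholder source file; -show is provided by the MPICH-derived MVAPICH2 wrappers and --showme by Open MPI's wrappers):

mpicc -show hello_mpi.c       (MVAPICH2: print the underlying compile command)
mpicc --showme hello_mpi.c    (Open MPI: print the underlying compile command)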

For additional information, consult the man pages and documentation for the MPI library and wrapper scripts you are using.

Level of Thread Support

MPI libraries vary in their level of thread support:

   MPI_THREAD_SINGLE     - only one thread will execute.
   MPI_THREAD_FUNNELED   - the process may be multi-threaded, but only the thread that initialized MPI will make MPI calls.
   MPI_THREAD_SERIALIZED - the process may be multi-threaded, and multiple threads may make MPI calls, but only one at a time.
   MPI_THREAD_MULTIPLE   - multiple threads may call MPI at any time, with no restrictions.

Consult the MPI_Init_thread() man page for details.

A simple C language example for determining thread level support is shown below.

#include "mpi.h"
#include <stdio.h>

int main( int argc, char *argv[] )
{
    int provided, claimed;

/*** Select one of the following
    MPI_Init_thread( 0, 0, MPI_THREAD_SINGLE, &provided );
    MPI_Init_thread( 0, 0, MPI_THREAD_FUNNELED, &provided );
    MPI_Init_thread( 0, 0, MPI_THREAD_SERIALIZED, &provided );
    MPI_Init_thread( 0, 0, MPI_THREAD_MULTIPLE, &provided );
***/

    MPI_Init_thread(0, 0, MPI_THREAD_MULTIPLE, &provided );
    MPI_Query_thread( &claimed );
    printf( "Query thread level= %d  Init_thread level= %d\n", claimed, provided );

    MPI_Finalize();
    return 0;
}
# Sample output:
Query thread level= 3  Init_thread level= 3
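To build and run the example with one of LC's wrapper scripts (thread_test.c is a placeholder file name; adjust the partition and process count as needed):

mpicc -o thread_test thread_test.c
srun -n1 -ppdebug thread_test

With MPICH-derived libraries such as MVAPICH2, a reported value of 3 corresponds to MPI_THREAD_MULTIPLE; the MPI standard only guarantees that the thread-level constants increase monotonically from MPI_THREAD_SINGLE to MPI_THREAD_MULTIPLE.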