You should see quite a few files. The parallel MPI versions have names that begin with or include mpi_; the serial versions have names that begin with or include ser_. Makefiles are also included.
Note: These are example files intended to demonstrate the basics of how to parallelize a code using MPI. Most execute in just a second or two.
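For reference, a minimal MPI "hello world" program in C looks something like the sketch below. This is illustrative only, not the tutorial's actual myhello.c; it simply shows the initialize/query/finalize pattern the example files demonstrate.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, ntasks;

    /* Initialize the MPI execution environment */
    MPI_Init(&argc, &argv);

    /* Query this task's rank and the total number of tasks */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    printf("Hello from task %d of %d\n", rank, ntasks);

    /* Shut down the MPI environment */
    MPI_Finalize();
    return 0;
}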
4. MPI Libraries and Compilers - What’s Available?
Recall from the LLNL MPI Implementations and Compilers section of the MPI tutorial that LC has three different MPI libraries on its Linux clusters: MVAPICH, Open MPI, and Intel MPI. Multiple versions of each are available.
The default MPI library on LC's TOSS3 Linux clusters is MVAPICH2.
To view available MPI libraries, try the following commands:
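For example, using the module system (the exact package names here are assumptions and may differ on your system):

module avail mvapich2
module avail openmpi
module avail impi

Running module avail with no arguments lists everything available, and module list shows what is currently loaded.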
myhello.c and myhello.f represent your source file; substitute your actual source file name.
The -o compiler flag specifies the name for your executable
The -w compiler flag simply suppresses warning messages.
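For example, a compile command might look like the following (assuming the common MPI compiler wrappers mpicc and mpif77; wrapper names can vary by MPI implementation and compiler):

mpicc -w -o hello myhello.c
mpif77 -w -o hello myhello.f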
When you get a clean compile, proceed.
Run: Use the srun command to run your MPI executable. Be sure to use the pReserved partition with 8 tasks on two different nodes. For example:
srun -N2 -n8 -ppReserved hello
You may see messages like the following while the job waits to run:
srun: Job is in held state, pending scheduler release
srun: job 1139098 queued and waiting for resources
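Once resources are allocated, the job runs. With 8 tasks, a hello-world style program like the sketch above would print one line per task, for example (output ordering across tasks is not guaranteed):

Hello from task 3 of 8
Hello from task 0 of 8
...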
Did your job run successfully? Based on the output, did it behave as expected? If not, figure out any problems and fix them before proceeding.
Run your program a few more times, varying the number of nodes and total tasks. Observe the task output statements to confirm that the task count and node placement match what you requested; example variations are shown below.
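For example (both again using the pReserved partition):

srun -N1 -n4 -ppReserved hello
srun -N4 -n16 -ppReserved hello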