Workshops differ in how this is done. The instructor will go over this beforehand.
In your home directory, create a subdirectory for the MPI test codes and cd to it.
mkdir ~/mpi
cd ~/mpi
Copy either the Fortran or the C version of the parallel MPI exercise files to your mpi subdirectory:
cp /usr/global/docs/training/blaise/mpi/C/* ~/mpi
cp /usr/global/docs/training/blaise/mpi/Fortran/* ~/mpi
Some of the example codes have serial versions for comparison. Use the appropriate command below to copy those files to your mpi subdirectory also.
cp /usr/global/docs/training/blaise/mpi/Serial/C/* ~/mpi
cp /usr/global/docs/training/blaise/mpi/Serial/Fortran/* ~/mpi
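After copying, list the directory to confirm that everything arrived:

ls ~/mpi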
You should notice quite a few files. The parallel MPI versions have names which begin with or include mpi_. The serial versions have names which begin with or include ser_. Makefiles are also included.
Note: These are example files, and as such, are intended to demonstrate the basics of how to parallelize a code using MPI. Most execute in just a second or two.
| C Files | Fortran Files | Description |
| :--- | :--- | :--- |
| mpi_helloBsend.c | mpi_helloBsend.f | Hello World modified to include blocking send/receive routines |
| mpi_helloNBsend.c | mpi_helloNBsend.f | Hello World modified to include nonblocking send/receive routines |
| | | pi Calculation - point-to-point communications |
| | | pi Calculation - collective communications |
| | | Concurrent Wave Equation |
| | | 2D Heat Equation |
| mpi_latency.c | mpi_latency.f | Round Trip Latency Timing Test |
| | | Bandwidth Timing Tests |
| | | Prime Number Generation |
| | | From the tutorial... |
| | | Contiguous derived datatype |
| | | Vector derived datatype |
| | | Indexed derived datatype |
| | | Structure derived datatype |
| batchscript.c | batchscript.f | Batch job scripts |
| | | Programs with bugs |
Recall from the LLNL MPI Implementations and Compilers section of the MPI tutorial that LC has three different MPI libraries on its Linux clusters: MVAPICH, Open MPI, and Intel MPI, with multiple versions of each.
The default MPI library on LC's TOSS 3 Linux clusters is MVAPICH2.
To view available MPI libraries, try the following commands:
module avail mvapich
module avail openmpi
module avail impi
Additionally, there are multiple compilers (and versions). Try the following commands to view them:
module avail intel
module avail gcc
module avail pgi
module avail clang
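To see which modules (and therefore which MPI library and compiler) are loaded in your current session, you can also run:

module list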
Moral of the story: if you want to use a specific version of an MPI library and/or compiler other than the default, you will need to load the selected package, as in the example below.
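For instance, to pick up a particular compiler and MPI build (the package names and version numbers here are hypothetical - use whatever module avail actually shows on your cluster):

module load gcc/8.1.0
module load mvapich2/2.3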
Create: Using your favorite text editor (vi/vim, emacs, nedit, gedit, nano...) open a new file - call it whatever you'd like. Write a simple MPI "Hello World" program: it should initialize the MPI environment, have each task print its rank and the total number of tasks, and then finalize MPI. A minimal C sketch follows.
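The sketch below is one possible version, not a required solution - any program whose per-task output lets you check ranks and task counts will do. The hostname call is included so you can see node placement when you vary the node count later:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int ntasks, rank, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* initialize the MPI environment */
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);  /* total number of tasks in the job */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this task's rank (id) */
    MPI_Get_processor_name(name, &len);      /* node this task is running on */

    printf("Hello from task %d of %d on %s\n", rank, ntasks, name);

    MPI_Finalize();                          /* shut down the MPI environment */
    return 0;
}
```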
Compile: Use a C or Fortran MPI compiler command. For example:
mpicc -w -o hello myhello.c
mpif77 -w -o hello myhello.f
mpif90 -w -o hello myhello.f
myhello.c and myhello.f represent your source file - use your actual source file name.
The -o compiler flag specifies the name of your executable.
The -w compiler flag is used to suppress warning messages.
When you get a clean compile, proceed.
Run: Use the srun command to run your MPI executable. Be sure to use the pReserved partition with 8 tasks on two different nodes. For example:
srun -N2 -n8 -ppReserved hello
You may see messages like the ones below while the job waits to be scheduled:
srun: Job is in held state, pending scheduler release
srun: job 1139098 queued and waiting for resources
Did your job run successfully? Based on the output, did it behave as expected? If not, figure out any problems and fix them before proceeding.
Run your program a few more times, varying the number of nodes and total tasks. Observe the task output statements to confirm that the task count and node placement match what you requested. For example:
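Any similar combinations will do, subject to the partition's node and task limits:

srun -N1 -n4 -ppReserved hello
srun -N4 -n16 -ppReserved hello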