Exercise 1

Overview

1. Log in to the workshop machine

Workshops differ in how this is done. The instructor will go over this beforehand.

2. Copy the example files

In your home directory, create a subdirectory for the MPI test codes and cd to it.

mkdir ~/mpi
cd  ~/mpi

Copy either the Fortran or the C version of the parallel MPI exercise files to your mpi subdirectory:

C:

cp  /usr/global/docs/training/blaise/mpi/C/*   ~/mpi

Fortran:

cp  /usr/global/docs/training/blaise/mpi/Fortran/*   ~/mpi

Some of the example codes have serial versions for comparison. Use the appropriate command below to copy those files to your mpi subdirectory also.

C:

cp  /usr/global/docs/training/blaise/mpi/Serial/C/*   ~/mpi

Fortran:

cp  /usr/global/docs/training/blaise/mpi/Serial/Fortran/*   ~/mpi

3. List the contents of your MPI subdirectory
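
For example:

ls ~/mpi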

You should notice quite a few files. The parallel MPI versions have names which begin with or include mpi_. The serial versions have names which begin with or include ser_. Makefiles are also included.

Note: These are example files, and as such, are intended to demonstrate the basics of how to parallelize a code using MPI. Most execute in just a second or two.

C Files                     Fortran Files               Description
-------------------------------------------------------------------
mpi_hello.c                 mpi_hello.f                 Hello World

mpi_helloBsend.c            mpi_helloBsend.f            Hello World modified to include
                                                        blocking send/receive routines

mpi_helloNBsend.c           mpi_helloNBsend.f           Hello World modified to include
                                                        nonblocking send/receive routines

mpi_array.c                 mpi_array.f                 Array Decomposition
ser_array.c                 ser_array.f

mpi_mm.c                    mpi_mm.f                    Matrix Multiply
ser_mm.c                    ser_mm.f

mpi_pi_send.c               mpi_pi_send.f               pi Calculation - point-to-point
ser_pi_calc.c               ser_pi_calc.f               communications

mpi_pi_reduce.c             mpi_pi_reduce.f             pi Calculation - collective
ser_pi_calc.c               ser_pi_calc.f               communications

mpi_wave.c                  mpi_wave.f                  Concurrent Wave Equation
draw_wave.c                 mpi_wave.h
ser_wave.c                  draw_wavef.c
                            ser_wave.f

mpi_heat2D.c                mpi_heat2D.f                2D Heat Equation
draw_heat.c                 mpi_heat2D.h
ser_heat2D.c                draw_heatf.c
                            ser_heat2D.f

mpi_latency.c               mpi_latency.f               Round Trip Latency Timing Test

mpi_bandwidth.c             mpi_bandwidth.f             Bandwidth Timing Tests
mpi_bandwidth_nonblock.c    mpi_bandwidth_nonblock.f

mpi_prime.c                 mpi_prime.f                 Prime Number Generation
ser_prime.c                 ser_prime.f

mpi_ping.c                  mpi_ping.f                  From the tutorial:
mpi_ringtopo.c              mpi_ringtopo.f              Non-blocking send-receive
mpi_scatter.c               mpi_scatter.f               Collective communications
mpi_contig.c                mpi_contig.f                Contiguous derived datatype
mpi_vector.c                mpi_vector.f                Vector derived datatype
mpi_indexed.c               mpi_indexed.f               Indexed derived datatype
mpi_struct.c                mpi_struct.f                Structure derived datatype
mpi_group.c                 mpi_group.f                 Groups/Communicators
mpi_cartesian.c             mpi_cartesian.f             Cartesian Virtual Topology

Makefile.MPI.c              Makefile.MPI.f              Makefiles
Makefile.Ser.c              Makefile.Ser.f

batchscript.c               batchscript.f               Batch job scripts

mpi_bug1.c                  mpi_bug1.f                  Programs with bugs
mpi_bug2.c                  mpi_bug2.f
mpi_bug3.c                  mpi_bug3.f
mpi_bug4.c                  mpi_bug4.f
mpi_bug5.c                  mpi_bug5.f
mpi_bug6.c                  mpi_bug6.f
mpi_bug7.c                  mpi_bug7.f

4. MPI Libraries and Compilers - What’s Available?

Recall from the LLNL MPI Implementations and Compilers section of the MPI tutorial that LC has three different MPI libraries on its Linux clusters: MVAPICH, Open MPI and Intel MPI. Multiple versions of each are available.

The default MPI library on LC’s TOSS3 Linux clusters is MVAPICH2.
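
You can check which modules, including the MPI library, are currently loaded in your environment with:

module list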

To view available MPI libraries, try the following commands:

module avail mvapich
module avail openmpi
module avail impi

Additionally, there are multiple compilers (and versions). Try the following commands to view them:

module avail intel
module avail gcc
module avail pgi
module avail clang

Moral of the Story: if you want to use a specific version of an MPI library and/or compiler other than the default, you will need to load the selected package.
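
For example, to load a particular compiler package (the module name and version below are illustrative; use one reported by module avail):

module load gcc/8.3.1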

5. Create, compile and run an MPI “Hello world” program

Create: Using your favorite text editor (vi/vim, emacs, nedit, gedit, nano…), open a new file; call it whatever you’d like. It should do the following: initialize the MPI environment, determine the total number of MPI tasks and its own task id (rank), print a hello message that includes its rank, and then finalize the MPI environment.

If you need help, see the provided example files mpi_hello.c or mpi_hello.f.
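
As a starting point, here is a minimal C sketch of such a program (a sketch only; the variable names are arbitrary, and the provided mpi_hello.c is more complete):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int numtasks, rank;

    MPI_Init(&argc, &argv);                   /* initialize the MPI environment */
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks); /* get the total number of tasks  */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* get this task's id (rank)      */

    printf("Hello world from task %d of %d\n", rank, numtasks);

    MPI_Finalize();                           /* terminate the MPI environment  */
    return 0;
}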

Compile: Use a C or Fortran MPI compiler command. For example:

mpicc -w -o hello myhello.c
mpif77 -w -o hello myhello.f
mpif90 -w -o hello myhello.f

myhello.c and myhello.f represent your source file; use your actual source file name. The -o flag specifies the name of your executable, and the -w flag simply suppresses compiler warning messages.

When you get a clean compile, proceed.

Run: Use the srun command to run your MPI executable. Be sure to use the pReserved partition with 8 tasks across two different nodes. For example (-N2 requests two nodes, -n8 launches 8 tasks in total, and -ppReserved selects the pReserved partition):

srun -N2 -n8 -ppReserved hello

While the job gets ready to run, you may see messages like the ones below:

srun: Job is in held state, pending scheduler release
srun: job 1139098 queued and waiting for resources
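
If your program follows the sketch above, its output should look something like this (the order of the lines will vary from run to run, since the tasks execute concurrently):

Hello world from task 2 of 8
Hello world from task 0 of 8
...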

Did your job run successfully? Based on the output, did it behave as expected? If not, figure out any problems and fix them before proceeding.

Run your program a few more times, varying the number of nodes and total tasks. Observe the task output statements to confirm that the task count changes as expected.
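
For example, you might try variations like these (the node/task counts are arbitrary):

srun -N1 -n4 -ppReserved hello
srun -N4 -n16 -ppReserved hello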