For the MPI executable, use the special workshop pool and 8 tasks. For example:
srun -n8 -ppReserved mpi_array
Note: The srun command is covered in detail in the "Starting Jobs" section of the Linux Clusters Overview tutorial. There is also a man page (man srun).
5. Compare other serial codes to their parallel versions
If time permits, you can even start with a serial code or two and create your own parallel versions. Feel free to try if you'd like.
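For example, here is a minimal sketch (not one of the provided exercise codes) of how a serial sum loop might be parallelized: each task sums its own block of iterations, and MPI_Reduce combines the partial results on task 0.

#include <mpi.h>
#include <stdio.h>

#define N 1000000   /* hypothetical problem size */

int main(int argc, char *argv[]) {
    int rank, ntasks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    /* Serial version: for (i = 0; i < N; i++) sum += i;
       Parallel version: each task handles one contiguous block. */
    long chunk = N / ntasks;
    long start = (long)rank * chunk;
    long end   = (rank == ntasks - 1) ? N : start + chunk;

    double local = 0.0;
    for (long i = start; i < end; i++)
        local += (double)i;

    /* Combine the partial sums on task 0 */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Total = %.0f\n", total);

    MPI_Finalize();
    return 0;
}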
6. Try any/all of the other MPI example codes
First, review the code(s) so that you understand how MPI is being used.
Then, using the MPI compiler command(s) of your choice, compile the codes of interest.
For convenience, the included Makefiles can be used to compile any or all of the exercise codes. For example:
make -f Makefile.MPI.c
make -f Makefile.MPI.c mpi_mm
make -f Makefile.Ser.c
make -f Makefile.MPI.f
make -f Makefile.MPI.f mpi_mm
make -f Makefile.Ser.f
Note: You can change the compiler being used by editing the Makefile.
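As a hypothetical illustration (the variable names in the provided Makefiles may differ), the compiler is usually selected by a variable near the top of the file:

# Hypothetical Makefile fragment -- edit the compiler variable here
CC     = mpicc     # e.g., swap in a different MPI compiler wrapper
CFLAGS = -O2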
Run the executables interactively in the special workshop pool. Use the srun command for this as shown previously.
Most of the executables need only 4 MPI tasks or fewer. Some exceptions and notes:
mpi_prime: Requires an even number of tasks.
mpi_cartesian: Requires 16 MPI tasks.
mpi_group: Requires 8 MPI tasks.
mpi_heat2D, mpi_wave: These examples attempt to generate an Xwindows display for results. Make sure that your Xwindows environment and software are set up correctly if you want to see the graphic. Ask the instructor if you have any questions.
mpi_latency: Requires only 2 MPI tasks, which should be on DIFFERENT nodes.
Some things to try:
Experiment with compiler flags (see respective man pages).
Vary the number of tasks and nodes used, as in the example below.
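For example, to run the mpi_mm executable built above on two nodes with eight tasks:
srun -N2 -n8 -ppReserved mpi_mm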
7. Compare per-task and aggregate communication bandwidths
Compile the mpi_bandwidth code if you haven’t already.
Run the code interactively with 2 tasks on two different nodes:
srun -N2 -n2 -ppReserved mpi_bandwidth
Note the overall average bandwidth for the largest message size of 1,000,000 bytes.
Now run the code interactively with 4, 8, 16, 32 and 64 tasks on two different nodes:
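Following the same srun pattern as above, for example:
srun -N2 -n4 -ppReserved mpi_bandwidth
srun -N2 -n8 -ppReserved mpi_bandwidth
srun -N2 -n16 -ppReserved mpi_bandwidth
srun -N2 -n32 -ppReserved mpi_bandwidth
srun -N2 -n64 -ppReserved mpi_bandwidth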
Explanation:
As the number of tasks increases, the per-task bandwidth decreases because the tasks must compete for use of the network adapter. Aggregate bandwidth increases until it plateaus.
8. Compare blocking send/receive with non-blocking send/receive
Copy your mpi_bandwidth source file to another file called mpi_bandwidthNB. Modify your new file so that it performs non-blocking sends/receives instead of blocking. An example mpi_bandwidth_nonblock file has been provided in case you need it.
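As a rough sketch of the change (buffer and variable names here are placeholders, not necessarily those used in the actual source), a blocking exchange such as

MPI_Send(sendbuf, count, MPI_CHAR, partner, tag, MPI_COMM_WORLD);
MPI_Recv(recvbuf, count, MPI_CHAR, partner, tag, MPI_COMM_WORLD, &status);

becomes a pair of posted operations followed by a wait for completion:

MPI_Request reqs[2];
MPI_Status  stats[2];
MPI_Irecv(recvbuf, count, MPI_CHAR, partner, tag, MPI_COMM_WORLD, &reqs[0]);
MPI_Isend(sendbuf, count, MPI_CHAR, partner, tag, MPI_COMM_WORLD, &reqs[1]);
MPI_Waitall(2, reqs, stats);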
After you’re satisfied with your new non-blocking version of the bandwidth code, compile both.
Run each code using two tasks on two different nodes in the special workshop pool:
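For example (assuming your non-blocking executable is named mpi_bandwidthNB):
srun -N2 -n2 -ppReserved mpi_bandwidth
srun -N2 -n2 -ppReserved mpi_bandwidthNB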
Explanation:
Non-blocking send/receive operations are often significantly faster than blocking send/receive operations, largely because both transfers can be in flight at the same time instead of each task idling until its partner's operation completes.
9. When things go wrong…
There are many things that can go wrong when developing MPI programs. The mpi_bug series of programs demonstrate just a few. See if you can figure out what the problem is with each case and then fix it.
Compile with the compiler command(s) of your choice and run interactively using 4 tasks in the special workshop pool.
The buggy behavior will differ for each example. Some hints are provided below.
Hints:
mpi_bug1 (hangs): demonstrates how miscoding even a simple parameter like a message tag can lead to a hung program. Verify that the message sent from task 0 is not exactly what task 1 is expecting, and vice versa. Matching the send tags with the receive tags solves the problem.
mpi_bug2 (wrong results or abnormal termination): shows another type of miscoding. The data type of the message sent by task 0 is not what task 1 expects. Nevertheless, the message is received, resulting in wrong results or abnormal termination, depending upon the MPI library and platform. Matching the send data type with the receive data type solves the problem.
mpi_bug3 (error message and/or abnormal termination): shows what happens when the MPI environment is not initialized or terminated properly. Inserting the MPI_Init and MPI_Finalize calls in the right locations solves the problem.
mpi_bug4 (gives the wrong result for "Final sum"; compare to mpi_array; requires a number of MPI tasks divisible by 4): shows what happens when a task does not participate in a collective communication call. In this case, task 0 needs to call MPI_Reduce as the other tasks do.
mpi_bug5 (dies or hangs, depending upon platform and MPI library): demonstrates an unsafe program, because it sometimes executes fine and at other times fails. The failure or hang is due to buffer exhaustion on the receiving task side, a consequence of the way the MPI library implements an eager protocol for messages of a certain size. One possible solution is to include an MPI_Barrier call in both the send and receive loops.
mpi_bug6 (terminates or is ignored, depending on platform/language; requires 4 MPI tasks): has a bug that terminates the program in some cases and is ignored in others. The problem is that task 2 performs a blocking operation but then hits the MPI_Wait call near the end of the program. Only the tasks that make non-blocking calls should hit MPI_Wait. The coding error is easy to fix: simply make sure task 2 does not encounter the MPI_Wait call.
mpi_bug7 (hangs): performs a collective communication broadcast but codes the count argument incorrectly, resulting in a hang.
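To make the first hint concrete, here is a minimal, hypothetical sketch of the mpi_bug1 class of error (not the actual mpi_bug1 source): the receive tag never matches the send tag, so the receive blocks forever.

#include <mpi.h>
#include <stdio.h>

/* Hypothetical tag-mismatch bug: task 1 waits for tag 2, but task 0
   sends with tag 1, so MPI_Recv never matches and the program hangs. */
int main(int argc, char *argv[]) {
    int rank, msg = 42;
    MPI_Status status;
    int tag_send = 1;
    int tag_recv = 2;   /* FIX: set this to 1 so the tags match */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        MPI_Send(&msg, 1, MPI_INT, 1, tag_send, MPI_COMM_WORLD);
    else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, tag_recv, MPI_COMM_WORLD, &status);
        printf("Task 1 received %d\n", msg);
    }

    MPI_Finalize();
    return 0;
}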
If you’re just finishing the tutorial and haven’t filled out our evaluation form yet, please do!