M P I = Message Passing Interface
MPI is a specification for the developers and users of message passing libraries.
MPI addresses the message-passing parallel programming model: data is moved from the address space of one process to that of another process through cooperative operations on each process.
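A minimal sketch of this cooperative, two-sided exchange (assuming the standard C bindings; compile with an MPI wrapper such as mpicc and run with at least two processes):

    /* Sketch: rank 0 sends an integer to rank 1 through matching,
     * cooperative MPI_Send / MPI_Recv calls. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, value;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;                      /* data in rank 0's address space */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Rank 1 received %d from rank 0\n", value);  /* now in rank 1's space */
        }

        MPI_Finalize();
        return 0;
    }

Note that the transfer only happens because both sides participate: the send posted on rank 0 must be matched by a receive posted on rank 1.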
The goal of the Message Passing Interface is to provide a widely used standard for writing message passing programs. The interface attempts to be practical, portable, efficient, and flexible.
The MPI standard has gone through a number of revisions, the most recent being MPI-5.0. (This tutorial focuses on MPI-4.1, the version widely available on LC systems.)
Interface specifications have been defined for the C and Fortran 90 language bindings.
Actual MPI library implementations differ in which version and features of the MPI standard they support. Developers/users will need to be aware of this.
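One portable way to check this at run time is sketched below (MPI_Get_library_version requires an MPI-3.0 or later library; older implementations provide only the MPI_VERSION / MPI_SUBVERSION compile-time macros):

    /* Sketch: report which version of the MPI standard the installed
     * library implements, plus the implementation's own version string. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int major, minor, len;
        char lib[MPI_MAX_LIBRARY_VERSION_STRING];

        MPI_Init(&argc, &argv);

        MPI_Get_version(&major, &minor);      /* standard version, e.g. 4.1 */
        MPI_Get_library_version(lib, &len);   /* implementation description */
        printf("MPI standard %d.%d\n%s\n", major, minor, lib);

        MPI_Finalize();
        return 0;
    }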
Originally, MPI was designed for distributed memory architectures, which were becoming increasingly popular at that time (1980s - early 1990s).
As architecture trends changed, shared memory SMPs were combined over networks creating hybrid distributed memory / shared memory systems.
MPI implementors adapted their libraries to handle both types of underlying memory architectures seamlessly. They also adapted/developed ways of handling different interconnects and protocols.
Today, MPI runs on virtually any hardware platform: distributed memory, shared memory, or hybrid.
The programming model, however, clearly remains a distributed memory model, regardless of the underlying physical architecture of the machine.
All parallelism is explicit: the programmer is responsible for correctly identifying parallelism and implementing parallel algorithms using MPI constructs.
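For example, in the sketch below (an illustrative example, not from the original tutorial) the programmer, not the library, decides how the iterations of a sum are divided among processes:

    /* Sketch of explicit parallelism: each rank sums its own share of
     * 0..N-1 (cyclic decomposition chosen by the programmer), then
     * MPI_Reduce combines the partial sums on rank 0. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        const long long N = 1000000;
        int rank, size;
        long long i, local = 0, total = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Rank r handles iterations r, r+size, r+2*size, ... */
        for (i = rank; i < N; i += size)
            local += i;

        MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Sum of 0..%lld = %lld\n", N - 1, total);

        MPI_Finalize();
        return 0;
    }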
MPI is the result of the efforts of numerous individuals and groups, beginning in 1992.
Documentation for all versions of the MPI standard is available at: http://www.mpi-forum.org/docs/