Ada:Compile:MPI
[[Category:Ada]]
Revision as of 18:44, 27 December 2014
MPI Code
There are currently two MPI stacks installed on ADA: OpenMPI and Intel MPI. The recommended stack for software development is Intel MPI, and most of this section focuses on it.
Compiling MPI Code
To compile MPI code, an MPI compiler wrapper is used. The wrapper calls the appropriate underlying compiler with additional linker flags specific to MPI programs. The Intel MPI software stack has wrappers for the Intel compilers as well as for the GNU compilers. Any argument not recognized by the wrapper is passed on to the underlying compiler, so any valid compiler flag (Intel or GNU) will also work when using the MPI wrappers.
The following table shows the most commonly used MPI wrappers provided by Intel MPI.
MPI wrapper | Compiler | Language | Example |
---|---|---|---|
mpiicc | icc | C | mpiicc <compiler_flags> prog.c |
mpicc | gcc | C | mpicc <compiler_flags> prog.c |
mpiicpc | icpc | C++ | mpiicpc <compiler_flags> prog.cpp |
mpicxx | g++ | C++ | mpicxx <compiler_flags> prog.cpp |
mpiifort | ifort | Fortran | mpiifort <compiler_flags> prog.f90 |
mpif90 | gfortran | Fortran | mpif90 <compiler_flags> prog.f90 |
Example 1: Compile the MPI program named mpi_prog.c, written in C, and name the executable mpi_prog.x. Use the underlying Intel compiler with -O3 optimization.
-bash-4.1$ mpiicc -o mpi_prog.x -O3 mpi_prog.c
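For readers who want something concrete to compile, a minimal MPI program could look like the sketch below. The file name mpi_prog.c matches Example 1, but the contents are illustrative and not part of this page's original examples.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime             */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this task's id (0 .. size-1)      */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of MPI tasks started */

    printf("Hello from task %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down the MPI runtime         */
    return 0;
}
```

Compiled as in Example 1 and launched with, e.g., mpirun -np 4 mpi_prog.x, each of the 4 tasks prints one "Hello" line with its own rank.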
Example 2: Compile the Fortran version of the same program, mpi_prog.f90, this time using the underlying GNU Fortran compiler.
-bash-4.1$ mpif90 -o mpi_prog.x mpi_prog.f90
To see the full compiler command of any of the MPI wrapper scripts, use the -show flag. This flag does not actually call the compiler; it only prints the full compiler command and exits. This can be useful for debugging and/or when experiencing problems with any of the compiler wrappers.
Example: Show the full compiler command for the mpiifort wrapper script
-bash-4.1$ mpiifort -show
ifort -I/software/easybuild/software/impi/4.1.3.049/intel64/include -I/software/easybuild/software/impi/4.1.3.049/intel64/include -L/software/easybuild/software/impi/4.1.3.049/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /software/easybuild/software/impi/4.1.3.049/intel64/lib -Xlinker -rpath -Xlinker /opt/intel/mpi-rt/4.1 -lmpigf -lmpi -lmpigi -ldl -lrt -lpthread
Running MPI Code
Running MPI code requires an MPI launcher, which sets up the environment and starts the requested number of MPI tasks on the needed nodes.
Use the following command to launch an MPI program (we continue to assume use of the Intel MPI stack):
mpirun [mpi_flags] <executable> [executable params]
where [mpi_flags] are options passed to the MPI launcher, <executable> is the name of the MPI program, and [executable params] are optional parameters for the MPI program.
NOTE: <executable> must be on the $PATH; otherwise the launcher will not be able to find it.
For a list of the most common mpi_flags, see the table below. The table shows only a very small subset of all flags; for a full listing, type mpirun --help
Flag | Description |
---|---|
-np <n> | The number of mpi tasks to start. |
-n <n> | The number of mpi tasks to start (same as -np). |
-perhost <n> | Places <n> consecutive (MPI) processes on each host/node. |
-ppn <n> | Stands for Process (i.e., task) Per Node (same as -perhost) |
-hostfile <file> | The name of the file that contains the list of host/node names the launcher will place tasks on. |
-f <file> | Same as -hostfile |
-hosts {host list} | Comma-separated list of specific host/node names. |
-help | Shows list of available flags and options |
Example 1: Run the MPI program on the local host using 4 tasks
-bash-4.1$ mpirun -np 4 mpi_prog.x
Example 2: Run the MPI program on a specific host, using 4 tasks
-bash-4.1$ mpirun -np 4 -hosts login1 mpi_prog.x
Example 3: Run the MPI program on two different hosts, using 4 tasks and a host file; assign tasks in round-robin fashion
-bash-4.1$ mpirun -np 4 -perhost 1 -hostfile mylist mpi_prog.x
where mylist is a file that contains the following lines:
login1
login2
NOTE: If you don't specify -perhost, all the tasks will be started on login1, even though the host file contains multiple entries.
Example 4: Run 4 different programs concurrently using mpirun (MPMD style program)
mpirun prog1.x : prog2.x : prog3.x : prog4.x
Hybrid MPI/OpenMP Code
To compile hybrid MPI/OpenMP programs (i.e., MPI programs that also contain OpenMP directives), invoke the appropriate MPI wrapper and add the -openmp flag to enable processing of the OpenMP directives.
Example 5: Compile the MPI Fortran program named hybrid.f90, which also contains OpenMP directives, using the underlying Intel Fortran compiler
mpiifort -openmp -o hybrid.x hybrid.f90
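For comparison, a minimal hybrid source in C is sketched below. The file name hybrid.c is hypothetical; the C equivalent of Example 5 would be compiled with mpiicc -openmp -o hybrid.x hybrid.c.

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each MPI task opens its own parallel region; the number of
       threads per task is controlled by OMP_NUM_THREADS at run time. */
    #pragma omp parallel
    {
        printf("task %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```

Run as in Example 6, this prints one line per (task, thread) pair: 8 tasks times 2 threads each.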
Running a hybrid program is very similar to running a pure MPI program. To control the number of OpenMP threads used per task, set the OMP_NUM_THREADS environment variable.
Example 6: Run the hybrid program named hybrid.x with 8 tasks, each of which uses 2 threads in its OpenMP regions.
export OMP_NUM_THREADS=2
mpirun -np 8 ./hybrid.x