Open MPI and Hybrid MPI/OpenMP Applications
Open MPI supports hybrid MPI/OpenMP applications, provided that MPI routines are called only by the master OpenMP thread. This model is called the funneled thread model. Instead of MPI_Init/MPI_INIT (for C/C++ and Fortran, respectively), the program can call MPI_Init_thread/MPI_INIT_THREAD to determine the level of thread support, and the value MPI_THREAD_FUNNELED will be returned.
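For example, a minimal sketch (not taken from this guide) of a C program that requests the funneled level and checks what the library actually provides:

/* Sketch only: request funneled thread support and verify it. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int provided;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "MPI library does not provide funneled thread support\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* ... hybrid MPI/OpenMP work ... */

    MPI_Finalize();
    return 0;
}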
To use this feature, the application must be compiled with both OpenMP and MPI code enabled. To do this, use the -openmp or -mp flag (depending on your compiler) on the mpicc compile line.
As mentioned previously, MPI routines can be called only by the master OpenMP thread. The hybrid executable is executed as usual using mpirun, but typically only one MPI process is run per node, and the OpenMP library creates additional threads to utilize all CPUs on that node. If there are sufficient CPUs on a node, you may want to run multiple MPI processes and multiple OpenMP threads per node.
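The following is an illustrative sketch (not taken from this guide) of the funneled pattern in C: the OpenMP runtime's threads share the local computation, and MPI calls are made only outside the parallel region, that is, by the master thread. The file name hybrid.c and the build and run lines shown in the comments are assumptions; substitute the OpenMP flag your compiler uses.

/* Sketch only: funneled hybrid MPI/OpenMP pattern.
 * Assumed build/run lines (adjust the flag for your compiler):
 *   mpicc -openmp hybrid.c -o hybrid
 *   mpirun -np <number-of-nodes> ./hybrid
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int provided, rank, i;
    double local = 0.0, total = 0.0;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* OpenMP threads perform the local work; no MPI calls in here. */
    #pragma omp parallel for reduction(+:local)
    for (i = 0; i < 1000000; i++)
        local += (double)i;

    /* Back on the master thread: MPI communication is permitted. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %f\n", total);

    MPI_Finalize();
    return 0;
}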
The number of OpenMP threads is typically controlled by the OMP_NUM_THREADS environment variable in the .bashrc file. (OMP_NUM_THREADS is used by other compilers' OpenMP products, but is not an Open MPI environment variable.) Use this variable to adjust the split between MPI processes and OpenMP threads. Usually, the number of MPI processes (per node) times the number of OpenMP threads is set to match the number of CPUs per node. An example case would be a node with four CPUs, running one MPI process and four OpenMP threads; in this case, OMP_NUM_THREADS is set to four. OMP_NUM_THREADS applies on a per-node basis.
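As a quick check, a hypothetical snippet (not from this guide) can report the total MPI process count and the OpenMP thread count per process, so the per-node split can be compared with the number of CPUs on each node. With OMP_NUM_THREADS set to 4 and one process per four-CPU node, each process would report 4 threads.

/* Sketch only: report the MPI/OpenMP split actually in effect. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int provided, rank, size;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0)
        printf("%d MPI processes, %d OpenMP threads per process\n",
               size, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}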
See "Environment for Node Programs" on page 4-15 for information on setting environment variables.