
README: minor re-flowing on extra-long lines

No other content changes; just re-flowing of long lines.
Jeff Squyres 2015-08-25 09:53:25 -04:00
parent 6f2e8d2073
commit e2124c61fe

README

@@ -436,8 +436,8 @@ General Run-Time Support Notes

MPI Functionality and Features
------------------------------

- Rank reordering support is available using the TreeMatch library.  It
  is activated for the graph and dist_graph topologies.

- All MPI-3 functionality is supported.
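For example, a quick way to check whether the TreeMatch-based
reordering support is present in a given installation is to search
ompi_info's component listing.  The assumption below is that the
support shows up as a component whose name contains "treematch"; the
exact component name is not stated in this README, so treat it as a
guess:

  shell$ ompi_info | grep -i treematch

Note that reordering still has to be requested by the application
itself, by passing a true "reorder" argument to MPI_Graph_create or
MPI_Dist_graph_create.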
@@ -532,37 +532,39 @@ MPI Collectives
  MPI process onto Mellanox QDR InfiniBand switch CPUs and HCAs.

- The "ML" coll component is an implementation of MPI collective
  operations that takes advantage of communication hierarchies in
  modern systems.  An ML collective operation is implemented by
  combining multiple independently progressing collective primitives
  implemented over different communication hierarchies; hence, an ML
  collective operation is also referred to as a hierarchical
  collective operation.  The number of collective primitives that are
  included in an ML collective operation is a function of the number
  of subgroups (hierarchies).  Typically, MPI processes in a single
  communication hierarchy such as a CPU socket, node, or subnet are
  grouped together into a single subgroup (hierarchy).  The number of
  subgroups is configurable at runtime, and each collective operation
  can be configured to use a different number of subgroups.

  The component frameworks and components used by/required for an
  "ML" collective operation:

  Frameworks:
   * "sbgp" - Provides functionality for grouping processes into
     subgroups
   * "bcol" - Provides collective primitives optimized for a particular
     communication hierarchy

  Components:
   * sbgp components - Provide grouping functionality over a CPU
                       socket ("basesocket"), shared memory
                       ("basesmuma"), Mellanox's ConnectX HCA
                       ("ibnet"), and other interconnects supported by
                       PML ("p2p")
   * bcol components - Provide optimized collective primitives for
                       shared memory ("basesmuma"), Mellanox's ConnectX
                       HCA ("iboffload"), and other interconnects
                       supported by PML ("ptpcoll")

  (See the example below for how to inspect and select these
  components at run time.)

- The "cuda" coll component provides CUDA-aware support for the
  reduction type collectives with GPU buffers.  This component is only
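The ML component and its sbgp/bcol helpers are ordinary MCA
components, so their run-time parameters can be inspected with
ompi_info and tuned on the mpirun command line.  The parameter name
"coll_ml_priority" and the application name "my_mpi_app" below are
illustrative assumptions; check ompi_info's output for the parameters
that your build actually exposes.

  shell$ ompi_info --param coll ml --level 9
  shell$ ompi_info --param sbgp all --level 9
  shell$ ompi_info --param bcol all --level 9

For example, raising ML's priority (assumed parameter name) makes it
preferred over other coll components for a given run:

  shell$ mpirun --mca coll_ml_priority 90 -np 16 ./my_mpi_app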
@@ -1002,10 +1004,11 @@ RUN-TIME SYSTEM SUPPORT
  most cases.  This option is only needed for special configurations.

--with-pmi
  Build PMI support (by default on non-Cray XE/XC systems, it is not
  built).  On Cray XE/XC systems, the location of pmi is detected
  automatically as part of the configure process.  For non-Cray
  systems, if the pmi2.h header is found in addition to pmi.h, then
  support for PMI2 will be built.

--with-slurm
  Force the building of SLURM scheduler support.
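For example, a configure invocation that requests both of these
options might look like the following (the installation prefix is a
placeholder; on Cray XE/XC systems the PMI location is found
automatically, so no extra arguments are needed):

  shell$ ./configure --prefix=/opt/openmpi --with-pmi --with-slurm
  shell$ make all install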
@@ -1635,9 +1638,9 @@ Open MPI API Extensions
-----------------------

Open MPI contains a framework for extending the MPI API that is
available to applications.  Each extension is usually a standalone set
of functionality that is distinct from other extensions (similar to
how Open MPI's plugins are usually unrelated to each other).  These
extensions provide new functions and/or constants that are available
to MPI applications.
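The set of extensions that gets built can typically be influenced at
configure time.  The option name and the extension name below are
assumptions for illustration (the "affinity" extension may or may not
exist in a given release), so verify them against "./configure --help"
for your version:

  shell$ ./configure --enable-mpi-ext=affinity ...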
@@ -1955,9 +1958,9 @@ Here's how the three sub-groups are defined:
   get their MPI/OSHMEM application to run correctly.

2. Application tuner: Generally, these are parameters that can be
   used to tweak MPI application performance.

3. MPI/OSHMEM developer: Parameters that either don't fit in the
   other two, or are specifically intended for debugging /
   development of Open MPI itself.

Each sub-group is broken down into three classifications:
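These sub-groups map onto the "level" that ompi_info uses when
deciding which parameters to display.  Assuming the usual 1-9
numbering (1-3 for end users, 4-6 for application tuners, 7-9 for
MPI/OSHMEM developers), a sketch of listing parameters at increasing
levels of detail:

  shell$ ompi_info --param all all --level 1
  shell$ ompi_info --param all all --level 9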