Merge pull request #6026 from jsquyres/pr/ompi-4.0.0-text-updates
v4.0.0 text updates
Commit e3eb01fd18

README (93 additions)
@@ -479,6 +479,56 @@ MPI Functionality and Features
- All MPI-3 functionality is supported.

- Note that starting with Open MPI v4.0.0, prototypes for several
  legacy MPI-1 symbols that were deleted in the MPI-3.0 specification
  (which was published in 2012) are no longer available by default in
  mpi.h.  Specifically, several MPI-1 symbols were deprecated in the
  1997 publishing of the MPI-2.0 specification.  These deprecated
  symbols were eventually removed from the MPI-3.0 specification in
  2012.

  The symbols that no longer appear by default in Open MPI's mpi.h
  are:

  - MPI_Address                  (replaced by MPI_Get_address)
  - MPI_Errhandler_create        (replaced by MPI_Comm_create_errhandler)
  - MPI_Errhandler_get           (replaced by MPI_Comm_get_errhandler)
  - MPI_Errhandler_set           (replaced by MPI_Comm_set_errhandler)
  - MPI_Type_extent              (replaced by MPI_Type_get_extent)
  - MPI_Type_hindexed            (replaced by MPI_Type_create_hindexed)
  - MPI_Type_hvector             (replaced by MPI_Type_create_hvector)
  - MPI_Type_lb                  (replaced by MPI_Type_get_extent)
  - MPI_Type_struct              (replaced by MPI_Type_create_struct)
  - MPI_Type_ub                  (replaced by MPI_Type_get_extent)
  - MPI_LB                       (replaced by MPI_Type_create_resized)
  - MPI_UB                       (replaced by MPI_Type_create_resized)
  - MPI_COMBINER_HINDEXED_INTEGER
  - MPI_COMBINER_HVECTOR_INTEGER
  - MPI_COMBINER_STRUCT_INTEGER
  - MPI_Handler_function         (replaced by MPI_Comm_errhandler_function)

  Although these symbols are no longer prototyped in mpi.h, they are
  still present in the MPI library in Open MPI v4.0.x.  This enables
  legacy MPI applications to link and run successfully with Open MPI
  v4.0.x, even though they will fail to compile.

  *** Future releases of Open MPI beyond the v4.0.x series may
      remove these symbols altogether.

  *** The Open MPI team STRONGLY encourages all MPI application
      developers to stop using these constructs that were first
      deprecated over 20 years ago, and finally removed from the MPI
      specification in MPI-3.0 (in 2012).

  *** The Open MPI FAQ (https://www.open-mpi.org/faq/) contains
      examples of how to update legacy MPI applications using these
      deleted symbols to use the "new" symbols.
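
  For example, updating a legacy MPI_Address call is typically a
  one-line change (a minimal sketch; "buf" and "disp" are
  illustrative names):

      /* Legacy MPI-1 code, no longer prototyped by default: */
      MPI_Aint disp;
      MPI_Address(buf, &disp);

      /* MPI-2 (and later) replacement: */
      MPI_Aint disp;
      MPI_Get_address(buf, &disp);

  The other deleted functions listed above have similarly direct
  replacements.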

  All that being said, if you are unable to immediately update your
  application to stop using these legacy MPI-1 symbols, you can
  re-enable them in mpi.h by configuring Open MPI with the
  --enable-mpi-compatibility flag.
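
  For example (the installation prefix is illustrative):

      shell$ ./configure --enable-mpi-compatibility --prefix=/opt/openmpi
      shell$ make all install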

- Rank reordering support is available using the TreeMatch library.  It
  is activated for the graph and dist_graph topologies.

@@ -706,6 +756,32 @@ Network Support
  mechanisms for Open MPI to utilize single-copy semantics for shared
  memory.

- In prior versions of Open MPI, InfiniBand and RoCE support was
  provided through the openib BTL and ob1 PML plugins.  Starting with
  Open MPI 4.0.0, InfiniBand support through the openib+ob1 plugins is
  both deprecated and superseded by the UCX PML component.

  UCX is an open-source, optimized communication library that supports
  multiple networks, including RoCE, InfiniBand, uGNI, TCP, shared
  memory, and others.

  While the openib BTL depended on libibverbs, the UCX PML depends on
  the UCX library.  The UCX library can be downloaded from
  http://www.openucx.org/ or from various Linux distribution
  repositories (e.g., Fedora/RedHat yum repositories).  The UCX
  library is also part of the Mellanox OFED and Mellanox HPC-X binary
  distributions.

  Once installed, Open MPI can be built with UCX support by adding
  --with-ucx to the Open MPI configure command.  Once Open MPI is
  configured to use UCX, the run time will automatically select the
  UCX PML if one of the supported networks is detected (e.g.,
  InfiniBand).  You can force the use of UCX on the mpirun or oshrun
  command line by specifying any or all of the following MCA
  parameters: "-mca pml ucx" for MPI point-to-point operations,
  "-mca spml ucx" for OpenSHMEM support, and "-mca osc ucx" for MPI
  RMA (one-sided) operations.
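
  For example, the following sketch builds Open MPI against a UCX
  installation and then forces the UCX PML at run time (paths and the
  application name are illustrative):

      shell$ ./configure --with-ucx=/opt/ucx --prefix=/opt/openmpi
      shell$ make all install
      shell$ mpirun -np 4 -mca pml ucx -mca osc ucx ./my_mpi_app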

Open MPI Extensions
-------------------

@@ -1018,6 +1094,19 @@ NETWORKING SUPPORT / OPTIONS
  covers most cases.  This option is only needed for special
  configurations.

--with-ucx=<directory>
  Specify the directory where the UCX libraries and header files are
  located.  This option is generally only necessary if the UCX headers
  and libraries are not in default compiler/linker search paths.

--with-ucx-libdir=<directory>
  Look in <directory> for the UCX libraries.  By default, Open MPI
  will look in <ucx_directory>/lib and <ucx_directory>/lib64, which
  covers most cases.  This option is only needed for special
  configurations.
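
  For example, for a hypothetical UCX installation that keeps its
  libraries only in a lib64 subdirectory:

      shell$ ./configure --with-ucx=/opt/ucx --with-ucx-libdir=/opt/ucx/lib64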

--with-usnic
  Abort configure if Cisco usNIC support cannot be built.

--with-verbs=<directory>
  Specify the directory where the verbs (also known as OpenFabrics
  verbs, or Linux verbs, and previously known as OpenIB) libraries and
@@ -1063,8 +1152,6 @@ NETWORKING SUPPORT / OPTIONS
  package, configure will safely abort with a helpful message telling
  you that you should not use --with-verbs-usnic.

---with-usnic
-  Abort configure if Cisco usNIC support cannot be built.

RUN-TIME SYSTEM SUPPORT

@@ -2032,7 +2119,7 @@ timer - High-resolution timers
Each framework typically has one or more components that are used at
run-time.  For example, the btl framework is used by the MPI layer to
send bytes across different types of underlying networks.  The tcp btl,
-for example, sends messages across TCP-based networks; the openib btl
+for example, sends messages across TCP-based networks; the UCX PML
sends messages across OpenFabrics-based networks.
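
For example, a run that selects the tcp btl and sets one of its
parameters (the interface name is illustrative; ompi_info lists the
parameters that a given build actually supports):

    shell$ ompi_info --param btl tcp --level 9
    shell$ mpirun -np 2 -mca btl tcp,self -mca btl_tcp_if_include eth0 ./a.out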

Each component typically has some tunable parameters that can be

@@ -84,20 +84,11 @@ MPI_COMBINER_NAMED a named predefined data type
MPI_COMBINER_DUP                  MPI_Type_dup
MPI_COMBINER_CONTIGUOUS           MPI_Type_contiguous
MPI_COMBINER_VECTOR               MPI_Type_vector
-MPI_COMBINER_HVECTOR_INTEGER     MPI_Type_hvector from Fortran
-MPI_COMBINER_HVECTOR             MPI_Type_hvector from C or C++
-                                 and MPI_Type_create_hvector for
-                                 all languages
+MPI_COMBINER_HVECTOR             MPI_Type_hvector
MPI_COMBINER_INDEXED              MPI_Type_indexed
-MPI_COMBINER_HINDEXED_INTEGER    MPI_Type_hindexed from Fortran
-MPI_COMBINER_HINDEXED            MPI_Type_hindexed from C or C++
-                                 and MPI_Type_create_hindexed
-                                 for all languages
+MPI_COMBINER_HINDEXED            MPI_Type_hindexed
MPI_COMBINER_INDEXED_BLOCK        MPI_Type_create_indexed_block
-MPI_COMBINER_STRUCT_INTEGER      MPI_Type_struct from Fortran
-MPI_COMBINER_STRUCT              MPI_Type_struct from C or C++
-                                 and MPI_Type_create_struct
-                                 for all languages
+MPI_COMBINER_STRUCT              MPI_Type_struct
MPI_COMBINER_SUBARRAY             MPI_Type_create_subarray
MPI_COMBINER_DARRAY               MPI_Type_create_darray
MPI_COMBINER_F90_REAL             MPI_Type_create_f90_real
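
For example, a minimal C sketch (illustrative variable names) that
retrieves the combiner described by this table:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Datatype type;
        int num_ints, num_addrs, num_dtypes, combiner;

        MPI_Init(&argc, &argv);

        /* Build a datatype, then ask how it was constructed */
        MPI_Type_contiguous(4, MPI_INT, &type);
        MPI_Type_commit(&type);
        MPI_Type_get_envelope(type, &num_ints, &num_addrs,
                              &num_dtypes, &combiner);
        if (MPI_COMBINER_CONTIGUOUS == combiner) {
            printf("type was created by MPI_Type_contiguous\n");
        }

        MPI_Type_free(&type);
        MPI_Finalize();
        return 0;
    }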