- added implementations of the and/or/xor operations for the post
and fetch-op notations (see the usage sketch below)
- implemented for the basic and UCX transports; the mxm transport
got a NON-IMPLEMENTED wrapper
- updated the C interfaces only (Fortran will be added later)
- the existing API is not yet updated to spec v1.4
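For reference, a minimal usage sketch of the new bitwise AMOs,
assuming the OpenSHMEM 1.4 C bindings (shmem_uint_atomic_and for the
post form, shmem_uint_atomic_fetch_or for the fetch-op form):

    #include <shmem.h>
    #include <stdio.h>

    int main(void)
    {
        static unsigned int flags = 0xF0;  /* symmetric (data segment) */

        shmem_init();
        int peer = (shmem_my_pe() + 1) % shmem_n_pes();

        /* post form: fire-and-forget, no result returned */
        shmem_uint_atomic_and(&flags, 0x3C, peer);

        /* fetch-op form: returns the value seen before the OR */
        unsigned int old = shmem_uint_atomic_fetch_or(&flags, 0x01, peer);
        printf("old flags on PE %d: 0x%x\n", peer, old);

        shmem_barrier_all();
        shmem_finalize();
        return 0;
    }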
Signed-off-by: Sergey Oblomov <sergeyo@mellanox.com>
- added a common logging infrastructure for all UCX modules
- switched all UCX modules to the new infrastructure
Signed-off-by: Sergey Oblomov <sergeyo@mellanox.com>
- removed the atomic-basic-specific operand from the module API
- added dedicated calls for the add and swap operations (see the
sketch below)
- minor optimizations for the UCX atomics
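A rough sketch of what dedicated add and swap entry points can look
like on top of UCX; ucp_atomic_post and ucp_atomic_fetch_nb are the
real UCP calls, while the wrapper names and the no-op callback are
invented for illustration:

    #include <ucp/api/ucp.h>

    static void cb_nop(void *request, ucs_status_t status) { }

    /* hypothetical wrapper: remote 64-bit add, no result needed */
    static ucs_status_t atomic_add64(ucp_ep_h ep, uint64_t value,
                                     uint64_t remote_addr, ucp_rkey_h rkey)
    {
        return ucp_atomic_post(ep, UCP_ATOMIC_POST_OP_ADD, value,
                               sizeof(uint64_t), remote_addr, rkey);
    }

    /* hypothetical wrapper: remote 64-bit swap; the old value
     * lands in *result once the request completes */
    static ucs_status_ptr_t atomic_swap64(ucp_ep_h ep, uint64_t value,
                                          uint64_t *result,
                                          uint64_t remote_addr,
                                          ucp_rkey_h rkey)
    {
        return ucp_atomic_fetch_nb(ep, UCP_ATOMIC_FETCH_OP_SWAP, value,
                                   result, sizeof(uint64_t), remote_addr,
                                   rkey, cb_nop);
    }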
Signed-off-by: Sergey Oblomov <sergeyo@mellanox.com>
- moved some common functionality of the del_procs calls into the
mca_common module
- replaced the blocking ucp_put call with its non-blocking
counterpart (see the sketch below)
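A sketch of the switch, using the real ucp_put / ucp_put_nb pair; the
completion policy shown here (free the request and rely on a later
flush/quiet for remote completion) is one plausible choice, not
necessarily the one used in the module:

    #include <ucp/api/ucp.h>

    static void send_cb(void *request, ucs_status_t status) { }

    /* before: ucp_put(ep, buf, len, remote_addr, rkey) blocked
     * until the data was delivered */

    /* after: returns immediately; completion is deferred */
    static ucs_status_t put_nb(ucp_ep_h ep, const void *buf, size_t len,
                               uint64_t remote_addr, ucp_rkey_h rkey)
    {
        ucs_status_ptr_t req = ucp_put_nb(ep, buf, len, remote_addr,
                                          rkey, send_cb);
        if (UCS_PTR_IS_ERR(req)) {
            return UCS_PTR_STATUS(req);
        }
        if (req != NULL) {
            /* operation is in flight; UCX frees the request on
             * completion, and a later fence/flush (shmem_quiet)
             * guarantees remote completion */
            ucp_request_free(req);
        }
        return UCS_OK;
    }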
Signed-off-by: Sergey Oblomov <sergeyo@mellanox.com>
- moved the mca_spml_ucx_get_mkey_slow method into the .c module and
added a pointer to it to the mca_spml_ucx_t structure (sketch below)
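Schematically (the mkey type and signatures below are placeholders,
not the real oshmem declarations):

    /* spml_ucx.h -- only the relevant member is shown */
    typedef struct my_mkey my_mkey_t;  /* stands in for the mkey type */

    typedef struct mca_spml_ucx {
        /* ... other module state ... */
        /* slow path kept behind a pointer, so inline callers in other
         * modules need no link-time reference to its definition */
        my_mkey_t *(*get_mkey_slow)(int pe, void *va);
    } mca_spml_ucx_t;

    /* spml_ucx.c -- out-of-line definition, registered at module init:
     *   mca_spml_ucx.get_mkey_slow = mca_spml_ucx_get_mkey_slow;      */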
Signed-off-by: Sergey Oblomov <sergeyo@mellanox.com>
- marked some SPML calls as inline to avoid a cross-module
dependency
- updated the get-key call to take a reference to the local module
(see the sketch below)
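The inline fast path can then live entirely in the header; a sketch
continuing the placeholder types above (cache_lookup is hypothetical):

    /* static inline: compiled into each caller, so no out-of-line
     * spml symbol is pulled in across module boundaries */
    static inline my_mkey_t *spml_ucx_get_mkey(mca_spml_ucx_t *module,
                                               int pe, void *va)
    {
        my_mkey_t *mkey = cache_lookup(module, pe, va); /* hypothetical */
        if (NULL == mkey) {
            mkey = module->get_mkey_slow(pe, va);  /* rare slow path */
        }
        return mkey;
    }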
Signed-off-by: Sergey Oblomov <sergeyo@mellanox.com>
There was a race condition in 35438ae9b5: if multiple threads invoked
ompi_mpi_init() simultaneously (which could happen from both MPI and
OSHMEM), the code did not catch this condition -- Bad Things would
happen.
Now use an atomic cmp/set to ensure that only one thread is able to
advance ompi_mpi_init from NOT_INITIALIZED to INIT_STARTED.
Additionally, change the prototype of ompi_mpi_init() so that
oshmem_init() can safely invoke ompi_mpi_init() multiple times (as
long as MPI_FINALIZE has not started) without displaying an error. If
multiple threads invoke oshmem_init() simultaneously, one of them will
actually do the initialization, and the rest will loop waiting for it
to complete.
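The shape of the fix, sketched here with C11 atomics instead of the
actual opal_atomic_* primitives, and with abbreviated state names:

    #include <stdatomic.h>
    #include <sched.h>

    enum { NOT_INITIALIZED, INIT_STARTED, INIT_COMPLETED };
    static _Atomic int mpi_state = NOT_INITIALIZED;

    void mpi_init_once_sketch(void)
    {
        int expected = NOT_INITIALIZED;
        if (atomic_compare_exchange_strong(&mpi_state, &expected,
                                           INIT_STARTED)) {
            /* this thread won the race: perform the initialization */
            /* ... */
            atomic_store(&mpi_state, INIT_COMPLETED);
        } else {
            /* lost the race (or init already done): wait for the
             * winner to finish instead of erroring out */
            while (atomic_load(&mpi_state) < INIT_COMPLETED) {
                sched_yield();
            }
        }
    }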
Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
Per MPI-3.1:8.7.1 p361:11-13, it's valid for MPI_FINALIZED to be
invoked during an attribute destruction callback (e.g., during the
destruction of keyvals on MPI_COMM_SELF during the very beginning of
MPI_FINALIZE). In such cases, MPI_FINALIZED must return "false".
Prior to this commit, we hung in FINALIZED if it were invoked during
a COMM_SELF attribute destruction callback in FINALIZE. See
https://github.com/open-mpi/ompi/issues/5084.
This commit converts the MPI_INITIALIZED / MPI_FINALIZED
infrastructure to use a single enum (ompi_mpi_state, set atomically)
to represent the state of MPI:
- not initialized
- init started
- init completed
- finalize started
- finalize past COMM_SELF destruction
- finalize completed
The "finalize past COMM_SELF destruction" state is what allows us to
return "false" from MPI_FINALIZED before COMM_SELF has been fully
destroyed / all attribute callbacks have been invoked.
Since this state is checked at nearly every MPI API call (to see if
we're outside of the INIT/FINALIZE epoch), care was taken to use
atomics to *set* the ompi_mpi_state value in ompi_mpi_init() and
ompi_mpi_finalize(), but performance-critical code paths can simply
read the variable without needing to use a slow call to an
opal_atomic_*() function.
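A condensed sketch of the mechanism, again in C11 atomics rather than
the opal_atomic_* calls, with abbreviated state names:

    #include <stdatomic.h>

    typedef enum {
        STATE_NOT_INITIALIZED,
        STATE_INIT_STARTED,
        STATE_INIT_COMPLETED,
        STATE_FINALIZE_STARTED,
        STATE_FINALIZE_PAST_COMM_SELF_DESTRUCT,
        STATE_FINALIZE_COMPLETED
    } mpi_state_t;

    static _Atomic int mpi_state = STATE_NOT_INITIALIZED;

    /* init/finalize pay for the atomic store... */
    static void state_set(mpi_state_t s)
    {
        atomic_store(&mpi_state, s);
    }

    /* ...while the per-call hot path gets away with a relaxed load,
     * which compiles down to a plain read */
    int mpi_finalized_sketch(void)
    {
        int s = atomic_load_explicit(&mpi_state, memory_order_relaxed);
        /* "false" until COMM_SELF destruction (and with it every
         * attribute destruction callback) is over, per MPI-3.1 8.7.1 */
        return s >= STATE_FINALIZE_PAST_COMM_SELF_DESTRUCT;
    }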
Thanks to @AndrewGaspar for reporting the issue.
Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
- added an opal_progress call to the wait function to avoid
possible deadlock issues (see the sketch below)
- declared the wait call as inline
- minor fixes
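Schematically (names invented; shmem_wait blocks while the variable
still holds the given value):

    #include "opal/runtime/opal_progress.h"

    /* spin until *addr changes away from value, but keep the progress
     * engine turning so the remote update can actually land; a bare
     * spin here can deadlock */
    static inline void spml_wait_sketch(volatile long *addr, long value)
    {
        while (*addr == value) {
            opal_progress();
        }
    }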
Signed-off-by: Sergey Oblomov <sergeyo@mellanox.com>
+ Add a quiet method to SPML so that its implementation can differ
from fence's.
+ Use ucp_worker_fence for the spml_fence method of the UCX SPML
(see the sketch below)
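A sketch of the split; ucp_worker_fence and ucp_worker_flush are the
real UCP entry points, while the wrapper names and the pairing of
flush with quiet are illustrative (flush is the natural backing,
since quiet must guarantee completion while fence only orders):

    #include <ucp/api/ucp.h>

    /* fence: order later operations after earlier ones, without
     * waiting for remote completion */
    static int spml_ucx_fence_sketch(ucp_worker_h worker)
    {
        return (UCS_OK == ucp_worker_fence(worker)) ? 0 : -1;
    }

    /* quiet: block until every outstanding operation is complete */
    static int spml_ucx_quiet_sketch(ucp_worker_h worker)
    {
        return (UCS_OK == ucp_worker_flush(worker)) ? 0 : -1;
    }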
Signed-off-by: Mikhail Brinskii <mikhailb@mellanox.com>
This patch disables the oshmem layer if there are no SPMLs that
will build. With the limited set of SPMLs available to support
oshmem, many builds end up installing an oshmem library that we
know will not work. There has been a bit of customer confusion
over oshmem; hopefully this change will lead customers in the right
direction.
Signed-off-by: Brian Barrett <bbarrett@amazon.com>