The `nbc_i*` functions don't start communication; they only create a request.
`nbc_*_init` are therefore more appropriate names for them.
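For context, this mirrors the init/start split of MPI's persistent
operations: an `*_init` call only creates a request, and nothing moves
until `MPI_Start`. A minimal illustration using the standard persistent
point-to-point API (not libnbc code):
```c
#include <mpi.h>

/* MPI_Send_init only creates a persistent request; no data is transferred
 * until MPI_Start is called on it. */
static void persistent_ping(const void *buf, int count, int peer, MPI_Comm comm)
{
    MPI_Request req;
    MPI_Send_init(buf, count, MPI_INT, peer, 0 /* tag */, comm, &req);
    MPI_Start(&req);                     /* communication actually begins here */
    MPI_Wait(&req, MPI_STATUS_IGNORE);   /* completes this round               */
    MPI_Request_free(&req);              /* releases the persistent request    */
}
```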
Signed-off-by: KAWASHIMA Takahiro <t-kawashima@jp.fujitsu.com>
Persistent operation is not currently supported for `NBC_A2A_DISS`.
Although this algorithm is currently never selected, an assertion is
added so that it cannot be selected by mistake.
Signed-off-by: KAWASHIMA Takahiro <t-kawashima@jp.fujitsu.com>
`NBC_Copy` should not be called in `MPI_*_INIT`.
`NBC_Sched_copy` should be called instead.
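A conceptual sketch of the difference (the type and helper names below
are made up for illustration, not the actual libnbc API): the `*_INIT`
path may only record the copy in the schedule so that it is executed
when the request is started.
```c
/* `sched_t` and `sched_append_copy()` are illustrative stand-ins for
 * NBC_Schedule and the schedule-building call. */
typedef struct sched sched_t;
int sched_append_copy(sched_t *s, const void *src, void *dst, int size);

static int example_init_path(const void *sbuf, void *rbuf, int size, sched_t *schedule)
{
    /* memcpy(rbuf, sbuf, size);  <- wrong in MPI_*_INIT: copies at init time */
    return sched_append_copy(schedule, sbuf, rbuf, size);  /* deferred copy   */
}
```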
Signed-off-by: KAWASHIMA Takahiro <t-kawashima@jp.fujitsu.com>
Because a persistent request does not free its `schedule` object
when the communication completes, the `NBC_Progress` function cannot
use `schedule` to determine completion.
Without this change, a hang occurs when the `NBC_Progress` function
is called recursively through the `NBC_Start_round` function.
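A minimal sketch of the resulting completion test (field names are
illustrative assumptions, not the exact libnbc structures):
```c
#include <stdbool.h>
#include <stddef.h>

/* A persistent request keeps its schedule alive across restarts, so progress
 * must test an explicit completion flag instead of checking whether the
 * schedule has been freed. */
typedef struct {
    void *schedule;      /* stays allocated for the request's whole lifetime */
    bool  nbc_complete;  /* set when the current round of communication ends */
} example_handle_t;

static bool example_is_complete(const example_handle_t *h)
{
    return h->nbc_complete;      /* NOT: NULL == h->schedule */
}
```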
Signed-off-by: KAWASHIMA Takahiro <t-kawashima@jp.fujitsu.com>
Prepare for the upcoming persistent collectives by pre-factoring some code.
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
fixup 808c3c62cd9475edd91ecde9d2d53b12e28b2c04
This commit fixes a flaw in the progress function for libnbc. The
function was unconditionally taking a lock even when there were no
requests to process. This lock was showing up in VTune traces of
multi-threaded benchmarks.
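A sketch of the early-exit idea (simplified, not the verbatim progress
function; it assumes the component's `active_requests` list and `lock`
mutex mentioned elsewhere in this series):
```c
static int example_libnbc_progress(void)
{
    /* Cheap emptiness check before touching the mutex. */
    if (0 == opal_list_get_size(&mca_coll_libnbc_component.active_requests)) {
        return 0;
    }

    OPAL_THREAD_LOCK(&mca_coll_libnbc_component.lock);
    /* ... walk active_requests and progress each outstanding schedule ... */
    OPAL_THREAD_UNLOCK(&mca_coll_libnbc_component.lock);
    return 0;
}
```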
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
This commit renames the arithmetic atomic operations in opal to
indicate that they return the new value, not the old value. This
naming differentiates these routines from new functions that return
the old value.
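The distinction can be illustrated with the GCC/Clang builtins (shown
here instead of the OPAL wrappers): an "add-fetch" style operation
returns the new value, while a "fetch-add" style operation returns the
old one.
```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t v = 5;
    int32_t newval = __atomic_add_fetch(&v, 1, __ATOMIC_RELAXED); /* returns 6 */
    int32_t oldval = __atomic_fetch_add(&v, 1, __ATOMIC_RELAXED); /* returns 6, v becomes 7 */
    printf("%d %d %d\n", newval, oldval, v);                      /* prints: 6 6 7 */
    return 0;
}
```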
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
* Resolves #3705
* Components should link against the project-level library to better
support `dlopen` with `RTLD_LOCAL`.
* Extend the `mca_FRAMEWORK_COMPONENT_la_LIBADD` in the `Makefile.am`
with the appropriate project-level library:
```
MCA components in ompi/
$(top_builddir)/ompi/lib@OMPI_LIBMPI_NAME@.la
MCA components in orte/
$(top_builddir)/orte/lib@ORTE_LIB_PREFIX@open-rte.la
MCA components in opal/
$(top_builddir)/opal/lib@OPAL_LIB_PREFIX@open-pal.la
MCA components in oshmem/
$(top_builddir)/oshmem/liboshmem.la
```
Note: The changes in this commit were automated by the
`libadd_mca_comp_update.py` script included in an accompanying commit.
Some components were not included in this change because they are
only built statically.
Signed-off-by: Joshua Hursey <jhursey@us.ibm.com>
Make NBC_Handle (almost) an internal structure created by
NBC_Schedule_request(), and use a local variable instead of what was
previously handle->tmpbuf.
Refs open-mpi/ompi#3487
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
Since Open MPI now requires C99, and the ptrdiff_t type is part of C99,
there is no longer any need for the abstract OPAL_PTRDIFF_TYPE type.
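The substitution is mechanical; an illustrative declaration (not a
specific line of the codebase):
```c
#include <stddef.h>   /* ptrdiff_t is provided by C99 */

/* Before: OPAL_PTRDIFF_TYPE gap, span;
 * After:                               */
ptrdiff_t gap, span;
```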
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
* Since we are adding a new function to `mca_coll_base_module_2_1_0_t`,
we need to increase the version of the module structure to `2_2_0`.
* Add a comment just above the PREDEFINED_COMMUNICATOR_PAD describing
its purpose and when it should change, to help future developers
trying to answer the question noted in the comment.
Signed-off-by: Joshua Hursey <jhursey@us.ibm.com>
Fixes a wrong answer from MPI_Ireduce when the red_sched_chain()
path was taken (which only happens for np <= 4 and message size >= 64k).
The way libnbc treats MPI_IN_PLACE is to set sbuf == rbuf, and
whether an algorithm still works cleanly after that depends on the
details. In this case the last steps of the algorithm amounted to
(the right neighbor is sending us the reduction results from ranks 1..n-1):
  recv into rbuf from the right neighbor
  add the contribution from our sbuf into rbuf
This would be fine in general, but if sbuf == rbuf that recv overwrites
the sbuf. I changed it to recv into a tmpbuf if MPI_IN_PLACE was used.
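A sketch of the corrected last step, written with plain blocking MPI
calls for clarity (the real code schedules these operations; names and
signatures here are illustrative):
```c
#include <mpi.h>

static void last_chain_step(void *sbuf, void *rbuf, void *tmpbuf, int count,
                            MPI_Datatype dtype, MPI_Op op, int right, MPI_Comm comm)
{
    int inplace = (sbuf == rbuf);               /* how libnbc encodes MPI_IN_PLACE */
    void *recvbuf = inplace ? tmpbuf : rbuf;    /* never overwrite our own data    */

    MPI_Recv(recvbuf, count, dtype, right, 0, comm, MPI_STATUS_IGNORE);

    if (inplace) {
        /* rbuf still holds our contribution: rbuf = tmpbuf (op) rbuf */
        MPI_Reduce_local(tmpbuf, rbuf, count, dtype, op);
    } else {
        /* rbuf holds the received partial result: rbuf = sbuf (op) rbuf */
        MPI_Reduce_local(sbuf, rbuf, count, dtype, op);
    }
}
```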
Signed-off-by: Geoffrey Paulsen <gpaulsen@us.ibm.com>
Protect the `mca_coll_libnbc_component.active_requests` list with
the new `mca_coll_libnbc_component.lock` mutex (sketched below).
Thanks Jie Hu for the report
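A simplified sketch of the pattern (`add_active_request` and `item` are
illustrative; `item` stands for the request already converted to an
`opal_list_item_t`):
```c
static void add_active_request(opal_list_item_t *item)
{
    OPAL_THREAD_LOCK(&mca_coll_libnbc_component.lock);
    opal_list_append(&mca_coll_libnbc_component.active_requests, item);
    OPAL_THREAD_UNLOCK(&mca_coll_libnbc_component.lock);
}
```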
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
* If (legal) non-uniform datatype signatures are used in ibcast
then the chosen algorithm may fail on the request, and in the worst
case it could produce wrong answers.
* Add an MCA parameter that, by default, protects the user from this
scenario. If the user really wants to use it, then they have to
opt in by setting the following parameter to false:
- `-mca coll_libnbc_ibcast_skip_dt_decision f`
* Once the following issues are resolved, this parameter can
be removed.
- https://github.com/open-mpi/ompi/issues/2256
- https://github.com/open-mpi/ompi/issues/1763
Signed-off-by: Joshua Hursey <jhursey@us.ibm.com>
* If an error is detected internally in libnbc (e.g., a PML truncation
error), this patch makes sure that the request is completed and the
`MPI_ERROR` field is set appropriately (sketched below).
* Make an attempt to clean up outstanding requests before returning.
- This is a "best attempt" since not all PMLs support canceling requests.
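A sketch of the error path (simplified, not the verbatim libnbc code):
record the error in the request status and complete the request so that
a pending MPI_Wait/MPI_Test returns instead of hanging.
```c
static void fail_nbc_request(ompi_request_t *request, int err)
{
    request->req_status.MPI_ERROR = err;     /* surface the error to the user  */
    ompi_request_complete(request, true);    /* wake any waiter on the request */
}
```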
In order to optimize for MPI_IN_PLACE, data is sent from the receive
buffer. Consequently, it should be sent with the receive type and count,
as sketched below.
Thanks Josh Hursey for the report and test case
Refs open-mpi/ompi#2256
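A generic sketch of the rule (parameter and function names are
illustrative): when MPI_IN_PLACE is given, the contribution lives in the
receive buffer and must be described by the receive count and datatype.
```c
#include <mpi.h>

static void pick_send_descriptor(const void *sendbuf, void *recvbuf,
                                 int sendcount, MPI_Datatype sendtype,
                                 int recvcount, MPI_Datatype recvtype,
                                 const void **buf, int *count, MPI_Datatype *type)
{
    if (MPI_IN_PLACE == sendbuf) {
        *buf   = recvbuf;     /* send from the receive buffer ... */
        *count = recvcount;   /* ... with the receive count ...   */
        *type  = recvtype;    /* ... and the receive datatype     */
    } else {
        *buf   = sendbuf;
        *count = sendcount;
        *type  = sendtype;
    }
}
```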
If sendbuf is equal to recvbuf, that should not be interpreted
as equivalent to MPI_IN_PLACE on the non-root rank(s); see the sketch below.
Thanks Valentin Petrov for the report
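An illustrative check (not the actual libnbc code): only an explicit
MPI_IN_PLACE from the root means "in place"; a sendbuf that merely
aliases recvbuf on a non-root rank is still a normal send buffer.
```c
#include <mpi.h>
#include <stdbool.h>

static bool reduce_is_in_place(int rank, int root, const void *sendbuf)
{
    return (rank == root) && (MPI_IN_PLACE == sendbuf);
}
```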
Predefined datatypes such as MPI_LONG_DOUBLE_INT are not really
contiguous, so use the span as returned by opal_datatype_span()
instead of the type extent; otherwise data might be written beyond
the allocated memory (see the allocation sketch below).
Thanks Valentin Petrov for the report
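The allocation pattern this implies, sketched with illustrative names
(it follows the usual span/gap idiom of the coll code, assuming
opal_datatype_span() as named above):
```c
#include <stdlib.h>

static char *alloc_tmpbuf(ompi_datatype_t *dtype, size_t count, char **free_buffer)
{
    ptrdiff_t gap = 0;
    ptrdiff_t span = opal_datatype_span(&dtype->super, count, &gap);

    *free_buffer = malloc(span);                        /* pointer to free() later    */
    return (NULL == *free_buffer) ? NULL
                                  : *free_buffer - gap; /* where element 0 is written */
}
```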
as reported by Coverity with CIDs 1363349-1363362
Offset the temporary buffer when a datatype with a non-zero lower bound is used.
Thanks Hristo Iliev for the report
(cherry picked from commit 0e393195d9f2373ffa9d59a240092f643117cd39)
- correctly handle non-commutative operators
- correctly handle non-zero lower bound ddt
- correctly handle ddt with size > extent
- revamp NBC_Sched_op so it takes two buffers and matches the ompi_op_reduce semantics (illustrated below)
- various fixes for inter-communicators
Thanks Yuki Matsumoto for the report
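The two-buffer accumulate-into-target convention referred to above,
illustrated with the standard MPI_Reduce_local call, which has the same
semantics (target = source op target):
```c
#include <mpi.h>

static void reduce_step(const void *source, void *target, int count,
                        MPI_Datatype dtype, MPI_Op op)
{
    /* target[i] = source[i] (op) target[i], for i in [0, count) */
    MPI_Reduce_local(source, target, count, dtype, op);
}
```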
* Remodel the request.
Add the wait sync primitive and integrate it into the PML and MTL
infrastructure. The multi-threaded requests are now significantly
lighter and less noisy (only the threads associated with completed
requests are signaled); a conceptual sketch follows below.
* Fix the condition to release the request.
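A conceptual sketch of the wait-sync idea in plain pthreads (not the
actual OPAL code): each waiter blocks on its own small sync object, and
a completing request signals only that object, instead of broadcasting
on a global condition variable that wakes every waiting thread.
```c
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int             pending;   /* number of requests this waiter still waits on */
} wait_sync_sketch_t;

static void sync_wait(wait_sync_sketch_t *s)          /* called by the waiting thread */
{
    pthread_mutex_lock(&s->lock);
    while (s->pending > 0) {
        pthread_cond_wait(&s->cond, &s->lock);
    }
    pthread_mutex_unlock(&s->lock);
}

static void sync_update(wait_sync_sketch_t *s)        /* called on request completion */
{
    pthread_mutex_lock(&s->lock);
    if (0 == --s->pending) {
        pthread_cond_signal(&s->cond);                /* wake only this waiter */
    }
    pthread_mutex_unlock(&s->lock);
}
```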