Fixes a wrong answer from MPI_Ireduce when the red_sched_chain()
path was taken (which only happens for np <= 4 and message sizes >= 64k).
The way libnbc treats MPI_IN_PLACE is to set sbuf == rbuf, and
whether an algorithm still works cleanly after that depends on its
details.
In this case the last steps of the algorithm amounted to
(the right neighbor is sending us the reduction results from ranks 1..n-1):
  recv into rbuf from the right neighbor
  add the contribution from our sbuf into rbuf
This would be fine in general, but if sbuf == rbuf, that recv overwrites
sbuf. I changed it to recv into a tmpbuf if MPI_IN_PLACE was used.
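A minimal stand-alone sketch of the failure mode and the fix; memcpy()
stands in for the receive from the right neighbor, and all names here are
illustrative rather than the actual libnbc schedule calls:

    /* memcpy() stands in for "recv from the right neighbor"; the buffer
     * names are illustrative and do not match the real schedule code. */
    #include <stdio.h>
    #include <string.h>

    #define COUNT 4

    static void recv_from_right_neighbor(int *dst, const int *incoming)
    {
        memcpy(dst, incoming, COUNT * sizeof(int));
    }

    int main(void)
    {
        int incoming[COUNT] = {10, 10, 10, 10};  /* partial sums from ranks 1..n-1 */
        int rbuf[COUNT]     = { 1,  1,  1,  1};  /* MPI_IN_PLACE: sbuf == rbuf     */
        int *sbuf           = rbuf;
        int tmpbuf[COUNT];

        /* Broken order: recv into rbuf first, which clobbers sbuf too,
         * so a later "rbuf[i] += sbuf[i]" yields 20 instead of 11.      */

        /* Fixed order: recv into a temporary, then reduce into rbuf.    */
        recv_from_right_neighbor(tmpbuf, incoming);
        for (int i = 0; i < COUNT; ++i) {
            rbuf[i] = sbuf[i] + tmpbuf[i];       /* local contribution survives */
        }

        printf("result: %d %d %d %d (expected 11 11 11 11)\n",
               rbuf[0], rbuf[1], rbuf[2], rbuf[3]);
        return 0;
    }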
Signed-off-by: Geoffrey Paulsen <gpaulsen@us.ibm.com>
If sendbuf is equal to recvbuf, that should not be interpreted
as equivalent to MPI_IN_PLACE on the non-root rank(s).
Thanks Valentin Petrov for the report
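A short sketch of the corrected check (the helper name is made up for
illustration): on non-root ranks the receive buffer is not significant,
so sendbuf happening to equal recvbuf must not select the in-place path.

    #include <mpi.h>
    #include <stdbool.h>

    /* Only the root may legitimately request an in-place reduction,
     * and only by passing MPI_IN_PLACE as the send buffer. */
    static bool ireduce_uses_in_place(const void *sendbuf, int rank, int root)
    {
        return (rank == root) && (MPI_IN_PLACE == sendbuf);
    }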
- correctly handle non-commutative operators
- correctly handle non-zero lower bound ddt
- correctly handle ddt with size > extent
- revamp NBC_Sched_op so it takes two buffers and matches the
ompi_op_reduce semantics (see the sketch below)
- various fixes for intercommunicators
Thanks Yuki Matsumoto for the report
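To make the operand-order point concrete, here is a stand-in for the
two-buffer convention the reworked NBC_Sched_op follows (not the libnbc
API itself), assuming ompi_op_reduce applies the operator as
target = source op target:

    #include <stddef.h>

    typedef int (*reduce_fn_t)(int a, int b);

    /* Reduce "source" into "target" with a fixed operand order, which is
     * what keeps non-commutative operators correct. */
    static void sched_op_two_buffers(const int *source, int *target,
                                     size_t count, reduce_fn_t op)
    {
        for (size_t i = 0; i < count; ++i) {
            target[i] = op(source[i], target[i]);
        }
    }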
This commit rewrites parts of libnbc to fix issues identified by
Coverity and myself. The changes are as follows:
- libnbc functions would return invalid error codes (internal to
libnbc) to the mpi layer. These codes' names are of the form
NBC_*. They do not match up with the error codes expected by the mpi
layer. I purged the use of all these error codes with the exception
of NBC_OK and NBC_CONTINUE in progress. These codes are used to
identify when a request handle is complete.
- Handles and schedules were leaked by all collective routines on
error. A new routine was added to return a collective handle
(NBC_Return_handle); see the error-path sketch after this list.
- Temporary buffers containing in/out neighbors for neighborhood
collectives were always leaked.
- Neighborhood collectives contained code to handle MPI_IN_PLACE, which
is never a valid input for the send or receive buffer. Stripped this
code out.
- Files were inconsistently named. Most are nbc_isomething.c but one
was named coll_libnbc_ireduce_scatter_block.c.
- Made the NBC_Schedule "structure" an object so it can be
retained/released (see the refcounting sketch after this list). This may
enable the use of schedule caching at a later time. More testing will be
needed to ensure the caching code works. If it doesn't, the code should
be stripped out completely.
- Added code to simplify the common case of scheduling send/recv +
barrier.
- Code cleanup for readability.
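A self-contained sketch of the error-path rule; every name below is a
stand-in rather than the libnbc API, but the shape mirrors what
NBC_Return_handle is for: once a handle exists, any failure hands it (and
its schedule) back instead of leaking it, and only proper error codes
reach the caller.

    #include <stdlib.h>

    #define SKETCH_SUCCESS              0
    #define SKETCH_ERR_OUT_OF_RESOURCE (-2)   /* stands in for an OMPI error code */

    typedef struct { void *schedule; } sketch_handle_t;

    /* Single cleanup routine, modeled on the new NBC_Return_handle. */
    static void sketch_return_handle(sketch_handle_t *handle)
    {
        free(handle->schedule);
        free(handle);
    }

    static int sketch_icoll(sketch_handle_t **handle_out)
    {
        sketch_handle_t *handle = calloc(1, sizeof(*handle));
        if (NULL == handle) return SKETCH_ERR_OUT_OF_RESOURCE;

        handle->schedule = malloc(64);        /* stands in for building a schedule */
        if (NULL == handle->schedule) {
            sketch_return_handle(handle);     /* no leak on the error path */
            return SKETCH_ERR_OUT_OF_RESOURCE;
        }

        *handle_out = handle;
        return SKETCH_SUCCESS;                /* never an internal NBC_* value */
    }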
The code now passes the clang static analyzer.
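And a sketch of the retain/release idea behind making the schedule an
object; the hand-rolled counter below only stands in for Open MPI's
OBJ_RETAIN/OBJ_RELEASE machinery and is not the actual opal object code.

    #include <stdlib.h>

    typedef struct {
        int   ref_count;
        char *commands;                       /* the compiled schedule body */
    } sketch_schedule_t;

    static sketch_schedule_t *schedule_create(void)
    {
        sketch_schedule_t *s = calloc(1, sizeof(*s));
        if (NULL != s) s->ref_count = 1;
        return s;
    }

    /* A cache would retain a schedule so it outlives the request that built it. */
    static void schedule_retain(sketch_schedule_t *s)
    {
        s->ref_count++;
    }

    static void schedule_release(sketch_schedule_t *s)
    {
        if (0 == --s->ref_count) {            /* last user frees it */
            free(s->commands);
            free(s);
        }
    }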
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
The algorithms are intended for MPI-3.0 compliance and are not
optimized. We should aim to add better algorithms in the future through
cheetah.
MPI_Iallreduce and MPI_Igatherv on intercommunicators are required for
MPI_Comm_idup support.
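MPI_Comm_idup is the user-visible call that exercises these paths; a
small usage sketch with standard MPI-3 calls only (an intracommunicator
is used for brevity):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Nonblocking communicator duplication, backed by nonblocking
         * collectives inside the library. */
        MPI_Comm dup_comm;
        MPI_Request req;
        MPI_Comm_idup(MPI_COMM_WORLD, &dup_comm, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        int rank, sum;
        MPI_Comm_rank(dup_comm, &rank);
        MPI_Request red_req;
        MPI_Iallreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, dup_comm, &red_req);
        MPI_Wait(&red_req, MPI_STATUS_IGNORE);

        if (0 == rank) printf("sum of ranks: %d\n", sum);

        MPI_Comm_free(&dup_comm);
        MPI_Finalize();
        return 0;
    }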
cmr=v1.7.4:reviewer=brbarret:ticket=trac:2715
This commit was SVN r29333.
The following Trac tickets were found above:
Ticket 2715 --> https://svn.open-mpi.org/trac/ompi/ticket/2715
I know it does not make much sense, but one can play around with the
performance. Numbers are available at http://www.unixer.de/research/nbcoll/perf/.
This is the first step towards collv2. The next step includes the addition
of non-blocking functions to the MPI layer and the collv1 interface.
It implements all MPI-1 collective algorithms in a non-blocking manner.
However, the collv1 interface does not allow non-blocking collectives, so
all collectives are used in a blocking fashion by the ompi glue layer.
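As a sketch of that glue pattern, a blocking collective can sit on top of
a non-blocking one by starting it and waiting right away; shown here with
the standard MPI-3 entry points rather than the internal NBC calls, so
treat it as an illustration only:

    #include <mpi.h>

    static int blocking_bcast_over_nonblocking(void *buf, int count,
                                               MPI_Datatype type, int root,
                                               MPI_Comm comm)
    {
        MPI_Request req;
        int rc = MPI_Ibcast(buf, count, type, root, comm, &req);
        if (MPI_SUCCESS != rc) return rc;
        return MPI_Wait(&req, MPI_STATUS_IGNORE);   /* progress until complete */
    }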
I wanted to add LibNBC as a separate subdirectory, but I could not
convince the buildsystem (and did not have the time), so the component looks
pretty messy. It would be great if somebody could explain to me how to move
all nbc*{c,h} and {hb,dict}*{c,h} to a separate subdirectory.
It's .ompi_ignored because I have not tested it exhaustively yet.
This commit was SVN r11401.