Signed-off-by: William Zhang <wilzhang@amazon.com>
cr https://code.amazon.com/reviews/CR-23837553
(cherry picked from commit 771f9c011d2a4daf78a4b26f88c971b3868fe132)
Signed-off-by: Brian Barrett <bbarrett@amazon.com>
Signed-off-by: William Zhang <wilzhang@amazon.com>
(cherry picked from commit 50640402ab5765a0dfde71628adfbbaa686555bd)
Signed-off-by: Brian Barrett <bbarrett@amazon.com>
Signed-off-by: Alex Anenkov <anenkov.ru@gmail.com>
(cherry picked from commit 77d466edf369c9851476b7ec7392f3dfd4cdc0b1)
Signed-off-by: Brian Barrett <bbarrett@amazon.com>
1. Remove debug output in iallgather (I had forgotten to remove it).
2. Remove an incorrect comment in the description of ibcast.
Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
(cherry picked from commit 64abd0f405be91b927cd8f37d30cdf41aa6685c2)
Signed-off-by: Brian Barrett <bbarrett@amazon.com>
An implementation of R. Rabenseifner's algorithm for MPI_Iallreduce.
This algorithm is a reduce-scatter, implemented with recursive vector halving
and recursive distance doubling, followed by an allgather.
Limitations:
-- count >= 2^{floor(log2 p)}
-- commutative operations only
-- intra-communicators only
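A minimal sketch of how these applicability limits could be checked before selecting this schedule (all names below are illustrative, not the actual libnbc symbols):

    /* Illustrative only: non-zero when the Rabenseifner-style schedule
     * described above is applicable. */
    static int rabenseifner_applicable(size_t count, int comm_size,
                                       int op_is_commutative, int is_intercomm)
    {
        int log2_p = 0;                              /* floor(log2(comm_size)) */
        for (int p = comm_size; p > 1; p >>= 1) {
            log2_p++;
        }
        size_t min_count = (size_t)1 << log2_p;      /* 2^{floor(log2 p)} */

        return count >= min_count && op_is_commutative && !is_intercomm;
    }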
Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
(cherry picked from commit 73e048b62a92325fc3fca80c2ade5f5e9bf3192a)
Signed-off-by: Brian Barrett <bbarrett@amazon.com>
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
(cherry picked from commit 66182a294d5e8cf03a00fba579b05f59e764133c)
Signed-off-by: Brian Barrett <bbarrett@amazon.com>
Implements the recursive doubling algorithm for MPI_Iallgather.
The algorithm can only be used for a power-of-two number of processes.
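For illustration, here is a standalone sketch of the recursive doubling exchange pattern on MPI_INT data, using blocking MPI_Sendrecv for clarity (the libnbc implementation builds a non-blocking schedule instead):

    #include <mpi.h>

    /* Assumes a power-of-two communicator size; each rank contributes
     * 'blockcount' ints and recvbuf holds size * blockcount ints. */
    static void allgather_recursive_doubling(const int *sendbuf, int blockcount,
                                             int *recvbuf, MPI_Comm comm)
    {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        /* Start with our own block in its final position. */
        for (int i = 0; i < blockcount; i++) {
            recvbuf[rank * blockcount + i] = sendbuf[i];
        }

        /* Before the step with distance 'dist' we hold 'dist' consecutive
         * blocks starting at our rank rounded down to a multiple of 'dist';
         * swap them with the blocks held by the partner rank ^ dist. */
        for (int dist = 1; dist < size; dist <<= 1) {
            int peer     = rank ^ dist;
            int my_off   = (rank & ~(dist - 1)) * blockcount;
            int peer_off = (peer & ~(dist - 1)) * blockcount;

            MPI_Sendrecv(recvbuf + my_off,   dist * blockcount, MPI_INT, peer, 0,
                         recvbuf + peer_off, dist * blockcount, MPI_INT, peer, 0,
                         comm, MPI_STATUS_IGNORE);
        }
    }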
Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
(cherry picked from commit a7386c1e09fb274991ca5b50d9d418a0d6b77b6c)
Signed-off-by: Brian Barrett <bbarrett@amazon.com>
An implementation of R. Rabenseifner's algorithm for MPI_Ireduce.
This algorithm is a reduce-scatter, implemented with recursive vector halving
and recursive distance doubling, followed by a gather.
Limitations:
-- count >= 2^{floor(log2 p)}
-- commutative operations only
-- intra-communicators only
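As a concrete instance of the first limitation: with p = 12 processes, floor(log2 12) = 3, so at least 2^3 = 8 elements are needed per reduction; for smaller counts the component presumably falls back to another algorithm.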
Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
(cherry picked from commit 7bd63e79c865080c801a45ed852602bdc4eb4d8f)
Signed-off-by: Brian Barrett <bbarrett@amazon.com>
Remove dead code that was causing warnings about unused static
functions.
Signed-off-by: Brian Barrett <bbarrett@amazon.com>
(cherry picked from commit 2e24e6ec082d29f76dcbc75c6f214d2d0d647701)
Signed-off-by: Brian Barrett <bbarrett@amazon.com>
Implements the recursive doubling algorithm for MPI_Iexscan.
The algorithm preserves the order of operations, so it can be used with
both commutative and non-commutative operations.
The MCA parameter 'coll_libnbc_iexscan_algorithm' was added for dynamic
algorithm selection.
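A rough sketch of the order-preserving recursive doubling exscan pattern, using blocking point-to-point calls and integer addition for clarity (an illustration of the idea only, not the libnbc schedule):

    #include <mpi.h>

    /* Exclusive prefix sum of one int per rank; rank 0's result stays
     * undefined, matching MPI_Exscan semantics. */
    static void exscan_recursive_doubling(int myval, int *result, MPI_Comm comm)
    {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        int partial = myval;    /* combination of a contiguous range ending here */
        int have_result = 0;

        for (int dist = 1; dist < size; dist <<= 1) {
            int dest  = (rank + dist < size) ? rank + dist : MPI_PROC_NULL;
            int src   = (rank - dist >= 0)   ? rank - dist : MPI_PROC_NULL;
            int recvd = 0;

            MPI_Sendrecv(&partial, 1, MPI_INT, dest, 0,
                         &recvd,   1, MPI_INT, src,  0, comm, MPI_STATUS_IGNORE);

            if (MPI_PROC_NULL != src) {
                /* 'recvd' covers earlier ranks only, so it is always applied
                 * on the left; this is what preserves the operation order. */
                *result = have_result ? recvd + *result : recvd;
                have_result = 1;
                partial = recvd + partial;
            }
        }
    }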
Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
(cherry picked from commit dfe203e167f5d8abc3b55226c6f17a468c9567dd)
Signed-off-by: Brian Barrett <bbarrett@amazon.com>
Implements the recursive doubling algorithm for MPI_Iscan. The algorithm preserves the order of operations, so it can be used with both commutative and non-commutative operations.
The MCA parameter coll_libnbc_iscan_algorithm was added for dynamic algorithm selection.
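The inclusive scan follows the same recursive doubling pattern; a minimal illustrative sketch (blocking calls, integer addition, not the actual libnbc schedule):

    #include <mpi.h>

    /* Inclusive prefix sum of one int per rank. */
    static void scan_recursive_doubling(int myval, int *result, MPI_Comm comm)
    {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        *result = myval;    /* the running prefix includes our own value */

        for (int dist = 1; dist < size; dist <<= 1) {
            int dest  = (rank + dist < size) ? rank + dist : MPI_PROC_NULL;
            int src   = (rank - dist >= 0)   ? rank - dist : MPI_PROC_NULL;
            int recvd = 0;

            MPI_Sendrecv(result, 1, MPI_INT, dest, 0,
                         &recvd, 1, MPI_INT, src,  0, comm, MPI_STATUS_IGNORE);

            if (MPI_PROC_NULL != src) {
                *result = recvd + *result;   /* earlier ranks applied on the left */
            }
        }
    }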
Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
(cherry picked from commit 3d43ff0f3209d5bf4713c6696acb0acb8f1756e4)
Signed-off-by: Brian Barrett <bbarrett@amazon.com>
GCC 8 identified hb_tree_csearch() as infinite recursion, and it
turns out that we never call this function anyway. So just remove
it.
Fixes #5670.
Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
(cherry picked from commit 06c1bf73da875f4a6449f38a993530d6fae7817d)
Signed-off-by: Brian Barrett <bbarrett@amazon.com>
Always initialize 'size'.
Only the a2a_sched_diss() alltoall algorithm is impacted,
and this algorithm is currently unused, so there is no need
to backport or update the NEWS file for now.
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
(cherry picked from commit ff48e9286430b37aac3146efe2b355f255db94d5)
Signed-off-by: Brian Barrett <bbarrett@amazon.com>
Calling MPI_Allgatherv with the sendbuf and sendtype parameters equal to MPI_IN_PLACE and NULL, respectively, produces a segmentation fault.
The problem is that sendtype is used even when sendbuf is MPI_IN_PLACE, but according to the standard, the sendtype and sendcount parameters should be ignored in this case.
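A minimal reproducer of the (standard-conforming) call pattern described above, with MPI_DATATYPE_NULL standing in for the ignored sendtype:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *recvcounts = malloc(size * sizeof(int));
        int *displs     = malloc(size * sizeof(int));
        int *recvbuf    = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++) {
            recvcounts[i] = 1;
            displs[i]     = i;
        }
        recvbuf[rank] = rank;   /* our own contribution is already in place */

        /* sendcount and sendtype must be ignored when MPI_IN_PLACE is used. */
        MPI_Allgatherv(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                       recvbuf, recvcounts, displs, MPI_INT, MPI_COMM_WORLD);

        free(recvbuf); free(displs); free(recvcounts);
        MPI_Finalize();
        return 0;
    }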
Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
(cherry picked from commit b45e190e6629b664872f7f872cfffd916180bb9a)
Signed-off-by: Brian Barrett <bbarrett@amazon.com>
This work is rooted in the [MPI Forum issue
153](https://github.com/mpi-forum/mpi-issues/issues/153).
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
(cherry picked from commit 86acdee4606c1ac3b38070d1b7973a00a991f1d6)
This commit updates the coll/basic component to correctly order sends
and receives for cartesian communicators with cyclic boundaries. This
addresses an issue identified by mpi-forum/mpi-issues#153. This issue
occurs when the size in any dimension is 1. This gives the same
neighbor in the positive and negative directions. The old code was
sending and receiving in the same order so the -1 buffer contained
the +1 result and vice versa. The problem is addressed by using
unique tags for each send. This should cover both the case where
overtaking is allowed and the case where it is not. The former will
become possible if an MPI_Cart_create_with_info() call is added to
the standard.
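A small self-contained illustration of the corner case (a single process in a periodic dimension, so both neighbours are the same rank; the expected placement below reflects my reading of the standard's neighbour ordering):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int dims[1]    = {1};   /* one process in this dimension */
        int periods[1] = {1};   /* cyclic: the -1 and +1 neighbour are both self */
        MPI_Comm cart;
        MPI_Cart_create(MPI_COMM_SELF, 1, dims, periods, 0, &cart);

        int sendbuf[2] = {10, 20};   /* block 0 goes towards -1, block 1 towards +1 */
        int recvbuf[2] = {0, 0};
        MPI_Neighbor_alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, cart);

        /* With self as both neighbours, recvbuf[0] (from the -1 neighbour) should
         * hold our own sendbuf[1], and recvbuf[1] should hold sendbuf[0]; the bug
         * described above delivered them the other way around. */
        printf("recvbuf = {%d, %d}\n", recvbuf[0], recvbuf[1]);

        MPI_Comm_free(&cart);
        MPI_Finalize();
        return 0;
    }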
Signed-off-by: Nathan Hjelm <hjelmn@google.com>
(cherry picked from commit 196a91e604885d7aae9ac9dfbd9b2e846b3015b7)
open-mpi/ompi@0fe756d416 introduced
a bug in the coll/hcoll component. The ompi_requests allocated by
libhcoll would be treated as coll_base_nbc_request during the
ompi_coll_base_retain_<> call. Afterwards this would lead to a
segv in the request cleanup.
Fix: since the libhcoll interface does not distinguish between
blocking and non-blocking requests, use coll_base_nbc_request all the
time and initialize it properly in
coll/hcoll/get_coll_handle(). It is still within 2 cache lines.
Signed-off-by: Valentin Petrov <valentinp@mellanox.com>
A non-blocking collective might return ompi_request_null, so we should not
retain anything in that case.
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
(cherry picked from commit open-mpi/ompi@63d3ccde9d)
Since ompi_coll_base_nbc_request_t is to be used in an
opal_free_list_t, it must be returned in a "clean" state.
So clean up some data in the callback completion subroutines.
This fixes a regression introduced in open-mpi/ompi@0fe756d416
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
(cherry picked from commit open-mpi/ompi@0862c409f1)
base ompi_coll_libnbc_request_t on top of ompi_coll_base_nbc_request_t
to correctly support the retention of datatypes/operators
This fixes a regression introduced in open-mpi/ompi@0fe756d416
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
(cherry picked from commit open-mpi/ompi@f8eef0fde9)
Use the linear-with-sync alltoall algorithm for certain message/communicator
size ranges. This does not affect the default fixed decision, unless HPCX (with
its custom parameters) is used or the corresponding MCA parameter is set.
Signed-off-by: Mikhail Brinskii <mikhailb@mellanox.com>
(cherry picked from commit 404c4800688548b021bda68bdf10792424e6b1c5)
The MPI standard states that a user MPI_Op and/or user MPI_Datatype can be
freed after a call to a non-blocking collective and before the non-blocking
collective completes.
Retain the (user-defined only) MPI_Op and MPI_Datatype when the non-blocking
call is invoked, and set a request callback so they are freed when the
MPI_Request completes.
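The usage pattern this supports, sketched with a hypothetical user-defined op and datatype:

    #include <mpi.h>

    /* Each element of the 'pair' datatype below is two ints. */
    static void pair_sum(void *in, void *inout, int *len, MPI_Datatype *dt)
    {
        int *a = in, *b = inout;
        for (int i = 0; i < 2 * (*len); i++) b[i] += a[i];
    }

    void example(MPI_Comm comm)
    {
        MPI_Datatype pair;
        MPI_Type_contiguous(2, MPI_INT, &pair);
        MPI_Type_commit(&pair);

        MPI_Op op;
        MPI_Op_create(pair_sum, 1 /* commutative */, &op);

        int sendbuf[2] = {1, 2}, recvbuf[2];
        MPI_Request req;
        MPI_Iallreduce(sendbuf, recvbuf, 1, pair, op, comm, &req);

        /* Legal per the standard: free the handles before completion; the
         * library must retain them until the request completes. */
        MPI_Type_free(&pair);
        MPI_Op_free(&op);

        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }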
Thanks to Thomas Ponweiser for reporting this.
Fixes open-mpi/ompi#2151. Fixes open-mpi/ompi#1304.
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
(cherry picked from commit open-mpi/ompi@0fe756d416)
Signed-off-by: Aurelien Bouteiller <bouteill@icl.utk.edu>
Correctly bubble up errors in NBC collective operations
Signed-off-by: Aurelien Bouteiller <bouteill@icl.utk.edu>
The error field of requests needs to be rearmed at start, not at create
Signed-off-by: Aurelien Bouteiller <bouteill@icl.utk.edu>
(cherry picked from commit open-mpi/ompi@65660e5999)
If the user sets HCOLL_EXTERNAL_UCM_EVENTS=1, then we try to initialize the opal
memory framework and register a memory release callback. Otherwise, rely on UCX.
Signed-off-by: Valentin Petrov <valentinp@mellanox.com>
PR #5450 addresses MPI_IN_PLACE processing for the basic collective algorithms.
But in conjunction with that, we need to check for MPI_IN_PLACE in the tuned paths
as well before calling ompi_datatype_type_size(), as otherwise we segfault.
The MPI spec also stipulates that sendcount and sendtype are ignored for in-place
Alltoall and Allgatherv operations, so the check is extended to these algorithms as well.
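A hedged sketch of the kind of guard this implies (illustrative only, not the actual tuned-component code):

    /* Obtain the datatype size for the decision logic without touching
     * sendtype when the caller passed MPI_IN_PLACE. */
    static size_t decision_dtype_size(const void *sendbuf,
                                      ompi_datatype_t *sendtype,
                                      ompi_datatype_t *recvtype)
    {
        size_t dsize = 0;
        if (MPI_IN_PLACE != sendbuf) {
            ompi_datatype_type_size(sendtype, &dsize);
        } else {
            ompi_datatype_type_size(recvtype, &dsize);   /* in place: use receive side */
        }
        return dsize;
    }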
Signed-off-by: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@intel.com>
(cherry picked from commit 88d781056f43934a93e16db556b340e72cdd3742)
The parameter passed to NBC_Return_handle() was incorrectly cast
and not dereferenced.
Thanks to Yossi for the bug report.
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
(cherry picked from commit open-mpi/ompi@8b51862fb2)
In the cleanup phase, it is possible for PtlMEUnlink() to return
PTL_IN_USE if the NIC is not done with the ME. This should not
be considered an error. This commit adds a retry loop around
PtlMEUnlink().
In some cases, the return values of PtlMEUnlink() and PtlCTFree()
were not checked at all. Check them with the same retry loop as
above.
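A hedged sketch of the retry pattern described above (not the actual coll/portals4 code):

    #include <portals4.h>

    /* PTL_IN_USE means the NIC still holds the ME, so try again; anything
     * other than PTL_OK afterwards is a real error. */
    static int unlink_me_retry(ptl_handle_me_t me_handle)
    {
        int ret;
        do {
            ret = PtlMEUnlink(me_handle);
        } while (PTL_IN_USE == ret);
        return (PTL_OK == ret) ? 0 : -1;
    }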
Signed-off-by: Todd Kordenbrock <thkgcode@gmail.com>
(cherry picked from commit f3f2a826b40cc0d4a45a63614835162ec6eef78e)
Calling MPI_Allgather with the sendbuf and sendtype parameters equal to MPI_IN_PLACE and NULL, respectively, produces a segmentation fault.
The problem is that sendtype is used even when sendbuf is MPI_IN_PLACE, but according to the standard, the sendtype and sendcount parameters should be ignored in this case.
Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
(cherry picked from commit 540c2d1)
Instead of invoking ompi_request_test_all(), which would end up
calling opal_progress() recursively, manually check the status
of the requests.
The same method is used in ompi_comm_request_progress().
Refs open-mpi/ompi#3901
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
Use internal pack/unpack subroutines that operate on MPI_Aint instead of int
and hence avoid some integer overflows.
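For example, packing 2^21 elements of a 2048-byte contiguous datatype spans 2^32 bytes, which overflows a signed 32-bit int but fits comfortably in an MPI_Aint.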
Thanks to Clyde Stanfield for reporting this issue.
Refs open-mpi/ompi#5383
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
The current implementation of `coll/base/MPI_Scatter` is based on an in-order binomial tree. This tree is right-skewed and provides good performance for an MPI_Gather operation, but for an MPI_Scatter operation a left-skewed binomial tree is more effective.
Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
Calling MPI_Gather with the sendbuf and sendtype parameters equal to MPI_IN_PLACE and NULL, respectively, produces a segmentation fault in the root process.
The problem is that sendtype is used even when sendbuf is MPI_IN_PLACE, but according to the standard (page 150, line 37), the sendtype and sendcount parameters should be ignored in this case.
Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>