Always use size_t (instead of converting to a uint32_t) in order to
correctly support large datatypes.
Thanks to Ben Menadue for the initial bug report.
Refs open-mpi/ompi#6016
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
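For illustration (this is not the patched code itself), narrowing a
large datatype size to a uint32_t silently truncates anything past
4 GiB, which is exactly the failure mode reported:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void) {
        size_t size = (size_t)5 << 30;       /* a 5 GiB datatype size; assumes 64-bit size_t */
        uint32_t narrowed = (uint32_t)size;  /* silently wraps around to 1 GiB */
        printf("size_t: %zu  uint32_t: %" PRIu32 "\n", size, narrowed);
        return 0;
    }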
Though not a recommended configuration, it is possible to run Open MPI
over UCX over uGNI. This configuration had some issues related to
connection management and transport layer (tl) selection. This commit
fixes those issues.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
The intention of lowering the priority when all processes are local
was to favor the Vader BTL. However, in builds that include the OFI
MTL, the OFI MTL gets selected instead.
Reviewed-by: Spruit, Neil R <neil.r.spruit@intel.com>
Reviewed-by: Gopalakrishnan, Aravind <aravind.gopalakrishnan@intel.com>
Signed-off-by: Matias Cabral <matias.a.cabral@intel.com>
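For illustration only, with hypothetical priority values rather than
anything taken from the OMPI source: selection picks the candidate
reporting the highest priority, so lowering one side's priority only
favors the other if the numbers actually end up ordered that way in a
given build:

    #include <stddef.h>
    #include <stdio.h>

    typedef struct { const char *name; int priority; } candidate_t;

    int main(void) {
        /* Hypothetical numbers: pml/cm (driving the OFI MTL) still
         * outbids pml/ob1 (driving the Vader BTL). */
        candidate_t c[] = { { "pml/cm + mtl/ofi",    30 },
                            { "pml/ob1 + btl/vader", 20 } };
        candidate_t *best = &c[0];
        for (size_t i = 1; i < sizeof(c) / sizeof(*c); i++) {
            if (c[i].priority > best->priority) best = &c[i];
        }
        printf("selected: %s\n", best->name);
        return 0;
    }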
The OFI MTL supports the OFI Scalable Endpoints feature as a means to
improve multi-threaded application throughput and message rate.
Currently the feature is designed to utilize multiple TX/RX contexts
exposed by the OFI provider in conjunction with a multi-communicator
MPI application model. For more information, refer to the README under
mtl/ofi.
Reviewed-by: Matias Cabral <matias.a.cabral@intel.com>
Reviewed-by: Neil Spruit <neil.r.spruit@intel.com>
Signed-off-by: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@intel.com>
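A minimal sketch of the multi-communicator application model this
targets, assuming the MTL can steer each communicator's traffic onto
its own TX/RX context (the actual tuning knobs are documented in the
mtl/ofi README):

    #include <mpi.h>

    #define NUM_THREADS 4

    int main(int argc, char **argv) {
        int provided;
        MPI_Comm comms[NUM_THREADS];

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE) {
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        /* One duplicated communicator per thread; each thread then
         * posts its sends/receives only on its own communicator, so
         * traffic can be spread across the provider's contexts. */
        for (int i = 0; i < NUM_THREADS; i++) {
            MPI_Comm_dup(MPI_COMM_WORLD, &comms[i]);
        }

        /* ... spawn threads; thread i communicates on comms[i] ... */

        for (int i = 0; i < NUM_THREADS; i++) {
            MPI_Comm_free(&comms[i]);
        }
        MPI_Finalize();
        return 0;
    }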
If UCX is available, then pml/ucx will be used instead of
pml/ob1 + btl/openib, so there is no need to warn about
btl/openib not supporting InfiniBand.
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
Add the --pset option for app_contexts so the user can provide a string
name for each app_context. Use the new PMIx pset attribute to store the
names in the PMIx local storage for retrieval.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
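A hedged sketch of how a process might read its pset name back through
PMIx; the attribute key is assumed here to be PMIX_PSET_NAME, which
may vary across PMIx versions:

    #include <pmix.h>
    #include <stdio.h>

    int main(void) {
        pmix_proc_t myproc;
        pmix_value_t *val = NULL;

        if (PMIX_SUCCESS != PMIx_Init(&myproc, NULL, 0)) {
            return 1;
        }
        /* Ask the local PMIx storage for the pset attribute
         * (PMIX_PSET_NAME is an assumption, not verified here). */
        if (PMIX_SUCCESS == PMIx_Get(&myproc, PMIX_PSET_NAME, NULL, 0, &val)) {
            printf("pset: %s\n", val->data.string);
            PMIX_VALUE_RELEASE(val);
        }
        PMIx_Finalize(NULL, 0);
        return 0;
    }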
Move openmpi/ompi/mpiext/FOO/c/mpiext_FOO_c.h to
openmpi/ompi/mpiext/FOO_c.h in order to use paths consistent
with the mpif.h extensions.
Refs. open-mpi/ompi#6019
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
In order to cope with the 72-character-per-line limit, move
openmpi/ompi/mpiext/FOO/mpif-h/mpiext_FOO_mpifh.h to
openmpi/ompi/mpiext/FOO_mpifh.h.
Refs. open-mpi/ompi#6019
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
The persistent collectives feature was approved at the Sept. 2018
MPI Forum meeting, and the 2018 Draft Specification of the MPI
standard was published during SC18.
Signed-off-by: KAWASHIMA Takahiro <t-kawashima@jp.fujitsu.com>
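The usage pattern being standardized, sketched here with the draft's
MPI_Bcast_init name (Open MPI shipped these at the time as
MPIX_-prefixed functions via its mpiext mechanism):

    #include <mpi.h>

    int main(int argc, char **argv) {
        int buf = 0;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        /* Initialize the collective once ... */
        MPI_Bcast_init(&buf, 1, MPI_INT, 0, MPI_COMM_WORLD,
                       MPI_INFO_NULL, &req);
        /* ... then start and complete the same request repeatedly. */
        for (int iter = 0; iter < 10; iter++) {
            MPI_Start(&req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        }
        MPI_Request_free(&req);
        MPI_Finalize();
        return 0;
    }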
Prefer #include over include in order to correctly handle long
Fortran lines: we use the full path, which can be very long, and
such a long line cannot be passed to the Fortran compiler via a
plain include statement.
Thanks to Igor Andriyash and Axel Huebl for reporting this issue.
Refs open-mpi/ompi#6106
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
Without this fix, an error handler invoked on a pml_ucx request would
segfault while trying to dereference requests[i]->req_mpi_object.comm.
Signed-off-by: Yossi Itigin <yosefe@mellanox.com>
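A self-contained reconstruction of the failure mode with simplified
stand-in types (these are not the actual OMPI structures):
dereferencing req_mpi_object.comm on a request that is no longer
there crashes, so the error path needs a guard along these lines:

    #include <stddef.h>
    #include <stdio.h>

    typedef struct { const char *name; } comm_t;
    typedef struct { struct { comm_t *comm; } req_mpi_object; } request_t;

    static comm_t world = { "MPI_COMM_WORLD" };

    /* Fall back to the world communicator instead of dereferencing
     * a missing request or communicator. */
    static comm_t *comm_for_errhandler(const request_t *req) {
        if (NULL == req || NULL == req->req_mpi_object.comm) {
            return &world;
        }
        return req->req_mpi_object.comm;
    }

    int main(void) {
        request_t *requests[1] = { NULL };  /* e.g. an already-freed request */
        printf("errhandler comm: %s\n", comm_for_errhandler(requests[0])->name);
        return 0;
    }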
- According to the OpenSHMEM spec 1.4, Annex C, shmem collectives
  should process calls where the number of elements is zero
  independently of the pointer values.
- Added zero-count processing: it just calls a barrier to
  sync the ranks.
Signed-off-by: Sergey Oblomov <sergeyo@mellanox.com>
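A sketch of the zero-count path with a simplified signature (not the
actual OSHMEM internals): when the element count is zero, the
collective degenerates to a barrier and never touches the data
pointers:

    #include <stddef.h>
    #include <shmem.h>

    static void example_collective(void *target, const void *source,
                                   size_t nelems, int PE_start,
                                   int logPE_stride, int PE_size,
                                   long *pSync) {
        if (0 == nelems) {
            /* Per spec 1.4, Annex C, the pointers may be anything
             * here; the ranks only need to synchronize. */
            shmem_barrier(PE_start, logPE_stride, PE_size, pSync);
            return;
        }
        /* ... normal data-movement path using target/source ... */
        (void)target;
        (void)source;
    }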