3.1.0 has now shipped, so update the NEWS file in master with
the items from 3.1.0 and prune the Master list as needed.
Signed-off-by: Brian Barrett <bbarrett@amazon.com>
That PR accidentally changed the Java toolchain detection logic in Open
MPI's build configuration infrastructure so that, as reported by @bosilca
in https://github.com/open-mpi/ompi/pull/5001#issuecomment-387803012 and tracked down by me in https://github.com/open-mpi/ompi/pull/5001#issuecomment-387851005, it would abort your entire
in-progress Open MPI build when it failed to find an OS X/macOS JDK instead of
simply falling back to checking for a JDK in the locations where it would be
found on other platforms. _Oops…!_
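For context, the intended detection order looks roughly like the following
(an illustrative sh sketch of the configure-time logic, not the actual Open
MPI configury; the fallback search paths shown are hypothetical examples):

    # Look for the macOS framework JDK first...
    if test -d /System/Library/Frameworks/JavaVM.framework/Headers; then
        ompi_java_home=/System/Library/Frameworks/JavaVM.framework
    else
        # ...and on failure, fall back to the generic per-platform
        # search instead of aborting the whole configure run.
        for dir in /usr/java /usr/lib/jvm/default-java /usr/local/jdk; do
            if test -d "$dir/include"; then
                ompi_java_home=$dir
                break
            fi
        done
    fi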
Signed-off-by: Bryce Glover <RandomDSdevel@gmail.com>
- rename ompi_coll_base_reduce_scatter_block_basic to the more
  self-descriptive ompi_coll_base_reduce_scatter_block_basic_linear
- fix the description of the coll_tuned_reduce_scatter_block_algorithm
  MCA param
This fixes and documents open-mpi/ompi@0e8b35b615.
MPI_Reduce_scatter_block used to be implemented by the coll/basic module only.
A new algorithm (recursive doubling) was recently introduced and can be used
via the coll/tuned module, but we never intended to make it the default.
In order to restore the previous default, the initial algorithm was moved
from coll/basic to coll/base, and it is now used by default by coll/tuned.
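For reference, a specific reduce_scatter_block algorithm can still be forced
at run time through the usual coll/tuned MCA mechanism; a sketch along these
lines (the numeric algorithm ids can be listed with "ompi_info --param coll
tuned --level 9", and ./my_app is a placeholder):

    mpirun --mca coll_tuned_use_dynamic_rules 1 \
           --mca coll_tuned_reduce_scatter_block_algorithm <N> \
           -np 4 ./my_app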
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
The MPI spec defines that the "mpi" and "mpi_f08" module Fortran
bindings support passing parameters by name. Hence, we need to use
the MPI-spec-defined parameter names ("dummy variables", in Fortran
parlance) for the "mpi" and "mpi_f08" modules.
Specifically, Fortran allows calls to procedures to be written with
keyword arguments, e.g., "call mpi_sizeof(x=x, size=rsize, ierror=ier)".
An "explicit interface" for the procedure must be in scope for this to
be allowed in a Fortran program unit. Therefore, the explicit
interface blocks we provide in the "mpi" and "mpi_f08" modules must
match the MPI published standard, including the names of the dummy
variables (i.e., parameter names), as that is how Fortran programs may
call them.
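As a concrete illustration, a minimal program exercising name-based
argument passing against the "mpi" module might look like this (a
sketch; the dummy names x, size, and ierror are the ones the MPI spec
defines for MPI_SIZEOF):

    program sizeof_demo
      use mpi
      implicit none
      real :: x
      integer :: rsize, ier
      call MPI_Init(ier)
      ! Keyword arguments are legal here only because the "mpi" module
      ! provides an explicit interface for MPI_SIZEOF, and they resolve
      ! correctly only if our dummy names match the MPI spec's.
      call MPI_Sizeof(x=x, size=rsize, ierror=ier)
      print *, 'MPI_SIZEOF(real) =', rsize
      call MPI_Finalize(ier)
    end program sizeof_demo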
Note that we didn't find this issue previously because even though the
MPI spec *allows* for name-based parameter passing, not many people
actually use it. I suspect that we might have some more incorrect
parameter names -- we should probably do a full "mpi" / "mpi_f08"
module parameter name audit someday.
Thanks to Themos Tsikas for reporting the issue and supplying the
initial fix.
Signed-off-by: Themos Tsikas <themos.tsikas@nag.co.uk>
Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
The files array was also storing $phase.prof itself, which led to
$phase.prof's output getting dumped into itself again and again. Updated
the code to initialise the files array with files other than $phase.prof.
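In outline, the fix follows this pattern (an illustrative bash sketch;
the names besides $phase are hypothetical, not the script's actual ones):

    # Before: the glob also picked up the aggregate file itself, so its
    # contents were re-appended into it on every pass:
    #   files=( "$phase"* )
    # After: initialise the array with everything except $phase.prof:
    files=()
    for f in "$phase"*; do
        [ "$f" = "$phase.prof" ] || files+=( "$f" )
    done
    cat "${files[@]}" >> "$phase.prof"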
Signed-off-by: Ninad Prabhukhanolkar <ninadchess96@gmail.com>
This fixes a problem reported by @bgoglin where rank-by was incorrectly
generating values when ranking by a type of object (e.g., socket). It also
corrects the handling of the pernode, npernode, and npersocket options:
these should only set the number of procs and the default mapping pattern.
They specifically should not prohibit the user from requesting a different
mapping.
Thus, the following should be valid:

    mpirun -npernode 2 --map-by socket ...

which should put 2 procs on each node, mapping them by-socket on each node.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>