This commit adds a new MCA variable to set the location of the backing
store: osc_sm_backing_directory. The default on Linux has been
changed to use /dev/shm to improve performance in cases where /tmp is
not a tmpfs.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
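For example, with this change the location can be overridden like any
other MCA parameter (the program name here is illustrative):

    mpirun --mca osc_sm_backing_directory /tmp -np 4 ./my_rma_app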
This commit adds a new MCA variable to set the location of the backing
store: osc_rdma_backing_directory. The default on Linux has been
changed to use /dev/shm to improve performance in cases where /tmp is
not a tmpfs.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
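The same can be done through the environment, using Open MPI's
standard OMPI_MCA_ prefix:

    export OMPI_MCA_osc_rdma_backing_directory=/dev/shm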
Per discussion at
https://github.com/open-mpi/ompi/issues/2614#issuecomment-392815654,
do not allow selection of the OSC PT2PT component when creating an MPI
RMA window while THREAD_MULTIPLE is active. Print a helpful message
and return a not-supported error.
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
(cherry picked from commit d0ffd660841623c02d1dfa3151e7f7afd3327698)
Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
Implements butterfly algorithm for MPI_Reduce_scatter_block.
The algorithm can be used with both commutative and non-commutative
operations, and for both power-of-two and non-power-of-two numbers of
processes.
Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
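For context, the collective being implemented (a minimal caller
sketch; buffer contents are illustrative):

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        const int recvcount = 4;   /* block size per process */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* each rank contributes size * recvcount elements and gets
         * back its own recvcount-element block of the element-wise sum */
        int *sendbuf = malloc(size * recvcount * sizeof(int));
        int *recvbuf = malloc(recvcount * sizeof(int));
        for (int i = 0; i < size * recvcount; i++) sendbuf[i] = rank + i;

        MPI_Reduce_scatter_block(sendbuf, recvbuf, recvcount, MPI_INT,
                                 MPI_SUM, MPI_COMM_WORLD);

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }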
Remove the MXM MTL, which has been deprecated in favor of the Yalla
PML. This was discussed at the last developers meeting
and somehow I ended up with the action item to do the removal.
Signed-off-by: Brian Barrett <bbarrett@amazon.com>
The Java configury is split into two parts:
1. Determine if we want MPI Java bindings.
2. Find the Java compiler (and related).
This commit does a few things:
- Move the "Find the Java compiler" step from OPAL to OMPI (because
there is no Java in OPAL, and there doesn't appear to be any
imminent danger that there will be).
- As a direct consequence, remove the --enable-java CLI option
(--enable-mpi-java still remains). Enabling the MPI Java bindings
and enabling Java are now considered the same thing (since there
is no Java elsewhere in the code base, the distinction was
meaningless).
- Only invoke the "Find the Java compiler" step if we actually want
the MPI Java bindings.
- A few miscellaneous Java-related cleanups in configury (e.g., change
testing "$foo" == "1" to $foo -eq 1, etc.).
This commit is mostly s/opal/ompi/gi in many places in configury and
shifting code around. But it looks bigger than it actually is, for two
reasons:
1. Some files were renamed:
* ompi_setup_java.m4 -> ompi_setup_mpi_java.m4 (setup MPI Java bindings)
* opal_setup_java.m4 -> ompi_setup_java.m4 (setup Java compiler)
2. Indenting level changed in (the new) ompi_setup_java.m4.
Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
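After this commit, requesting the bindings (and, only then, triggering
the Java compiler checks) looks like this; other configure flags
elided:

    ./configure --enable-mpi-java ...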
- rename ompi_coll_base_reduce_scatter_block_basic to the more
self-descriptive ompi_coll_base_reduce_scatter_block_basic_linear
- fix the description of the coll_tuned_reduce_scatter_block_algorithm
MCA param
This fixes and documents the earlier open-mpi/ompi@0e8b35b615.
MPI_Reduce_scatter_block used to be implemented only by the coll/basic
module. A new algorithm (recursive doubling) was recently introduced
and can be used via the coll/tuned module, but it was never intended
to become the default. In order to restore the previous default, the
initial algorithm was moved from coll/basic to coll/base, and it is
now used by default by coll/tuned.
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
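To pick a specific algorithm explicitly (the algorithm-number-to-name
mapping can be checked with ompi_info; the number here is
illustrative):

    mpirun --mca coll_tuned_use_dynamic_rules 1 \
           --mca coll_tuned_reduce_scatter_block_algorithm 2 \
           -np 8 ./my_app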
The MPI spec defines that the "mpi" and "mpi_f08" module Fortran
bindings support passing parameters by name. Hence, we need to use
the MPI-spec-defined parameter names ("dummy variables", in Fortran
parlance) for the "mpi" and "mpi_f08" modules.
Specifically, Fortran allows calls to procedures to be written with
keyword arguments, e.g., "call mpi_sizeof(x=x,size=rsize,ierror=ier)".
An "explicit interface" for the procedure must be in scope for this to
be allowed in a Fortran program unit. Therefore, the explicit
interface blocks we provide in the "mpi" and "mpi_f08" modules must
match the MPI published standard, including the names of the dummy
variables (i.e., parameter names), as that is how Fortran programs may
call them.
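For example, after this fix a program unit can legally call MPI_SIZEOF
with keyword arguments through the "mpi" module (a minimal sketch):

    program sizeof_kwargs
       use mpi
       implicit none
       double precision :: x
       integer :: rsize, ier

       call mpi_init(ier)
       ! name-based parameter passing; this only works if the dummy
       ! variable names in the explicit interface match the MPI standard
       call mpi_sizeof(x=x, size=rsize, ierror=ier)
       call mpi_finalize(ier)
    end program sizeof_kwargs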
Note that we didn't find this issue previously because even though the
MPI spec *allows* for name-based parameter passing, not many people
actually use it. I suspect that we might have some more incorrect
parameter names -- we should probably do a full "mpi" / "mpi_f08"
module parameter name audit someday.
Thanks to Themos Tsikas for reporting the issue and supplying the
initial fix.
Signed-off-by: Themos Tsikas <themos.tsikas@nag.co.uk>
Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
The files array was also storing $phase.prof, which caused
$phase.prof's output to be dumped into itself again and again. Updated
the code to initialise the files array with all files other than
$phase.prof.
Signed-off-by: Ninad Prabhukhanolkar <ninadchess96@gmail.com>
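A hedged sketch of the fix (the actual script's variables may differ;
the point is simply to exclude the aggregate file when building the
list):

    files=()
    for f in *.prof; do
        # skip the output file itself so it is not fed back into itself
        [ "$f" = "$phase.prof" ] || files+=("$f")
    done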
Set mpi.lo dependencies outside of AM conditionals. Per
https://github.com/open-mpi/ompi/issues/5085, make mpi.lo depend on
whatever we decide the sizeof source files are (which may be empty, if
this compiler does not support the Right Stuff for MPI_SIZEOF, or it
may be mpi-tkr-sizeof.[h|f90]).
Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
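The resulting pattern, sketched in Automake terms (the conditional and
variable names here are illustrative, not the exact ones in the tree):

    if FORTRAN_BUILD_SIZEOF
    SIZEOF_SOURCES = mpi-tkr-sizeof.h mpi-tkr-sizeof.f90
    else
    SIZEOF_SOURCES =
    endif

    # the dependency itself stays outside the conditional; it is
    # simply empty when the compiler cannot support MPI_SIZEOF
    mpi.lo: $(SIZEOF_SOURCES)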
Use the OMPI_FORTRAN_BUILD_SIZEOF macro whenever possible: add the
missing macro to the Fortran configury and add a missing dependency.
Thanks to Themos Tsikas for reporting this issue.
Refs open-mpi/ompi#5085
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
Keep track of the sizes of blocklen_per_process and displs_per_process
on the aggregator data structure, to minimize the number of realloc
calls required in the shuffle_init operation.
Signed-off-by: raafatfeki <fekiraafat@gmail.com>
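The idea, as a minimal hedged sketch (structure and function names are
illustrative, not the actual ompio code):

    #include <stdlib.h>

    typedef struct {
        int    *blocklen_per_process;
        size_t  blocklen_alloc;   /* tracked allocation, in elements */
    } aggregator_t;

    /* grow geometrically, and only when the tracked size is exceeded,
     * instead of calling realloc on every shuffle_init iteration */
    static int ensure_capacity(aggregator_t *agg, size_t needed)
    {
        size_t newsize;
        int *tmp;

        if (needed <= agg->blocklen_alloc) return 0;
        newsize = agg->blocklen_alloc ? agg->blocklen_alloc : 64;
        while (newsize < needed) newsize *= 2;
        tmp = realloc(agg->blocklen_per_process, newsize * sizeof(int));
        if (NULL == tmp) return -1;
        agg->blocklen_per_process = tmp;
        agg->blocklen_alloc = newsize;
        return 0;
    }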
This commit attempts to update the romio io component to not use
functions removed in MPI-3.0 (2012). This is a first cut and will
probably need to be reviewed for correctness.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
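The commit does not enumerate its call sites, but the kind of
substitution involved is the standard MPI-3.0 one, e.g.:

    #include <mpi.h>

    static void mpi3_style(void *buf, MPI_Datatype dt)
    {
        MPI_Aint addr, lb, extent;

        /* MPI_Address() was removed in MPI-3.0; use: */
        MPI_Get_address(buf, &addr);

        /* MPI_Type_extent() was removed in MPI-3.0; use: */
        MPI_Type_get_extent(dt, &lb, &extent);
    }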
romio assumes that all predefined datatypes are contiguous. Because of
the (terribly named) composed datatypes MPI_SHORT_INT, MPI_DOUBLE_INT,
MPI_LONG_INT, etc., this is an incorrect assumption. The simplest way to
fix this is to override the MPI_Type_get_envelope and
MPI_Type_get_contents calls with calls that will work on these
datatypes. Note that not all calls to these MPI functions are
replaced, only the ones used when flattening a non-contiguous
datatype.
References #5009
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
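A quick way to see why, for instance, MPI_SHORT_INT is not contiguous
(values are platform-dependent; on typical ABIs the size is 6 but the
extent is 8 because of alignment padding):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int size;
        MPI_Aint lb, extent;

        MPI_Init(&argc, &argv);
        MPI_Type_size(MPI_SHORT_INT, &size);
        MPI_Type_get_extent(MPI_SHORT_INT, &lb, &extent);
        /* size < extent means there is a gap: not contiguous */
        printf("MPI_SHORT_INT size=%d extent=%ld\n", size, (long) extent);
        MPI_Finalize();
        return 0;
    }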