Currently, MPI fails with MPI_ERR_ARG. This is counterintuitive, since
MPI_IN_PLACE is a legitimate parameter. However, MPI_IN_PLACE might not be
correctly implemented by all the non-blocking modules (libnbc, ...), so fail
with MPI_ERR_INTERN for the time being.
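For illustration only (not part of the commit), a small program that surfaces
the error class an application sees when passing MPI_IN_PLACE to a
non-blocking collective; with this change the class becomes MPI_ERR_INTERN
instead of MPI_ERR_ARG:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value, err, errclass;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        value = rank;

        /* Errors would normally abort the job; switch to return
         * codes so we can inspect the error class. */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        err = MPI_Iallreduce(MPI_IN_PLACE, &value, 1, MPI_INT,
                             MPI_SUM, MPI_COMM_WORLD, &req);
        if (MPI_SUCCESS != err) {
            MPI_Error_class(err, &errclass);
            /* Previously MPI_ERR_ARG; now MPI_ERR_INTERN. */
            printf("Iallreduce(MPI_IN_PLACE) failed, class %d\n",
                   errclass);
        } else {
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }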
Proposed extensions for Open MPI:
- If MPI_INITIALIZED is invoked and MPI is only partially initialized,
wait until MPI is fully initialized before returning.
- If MPI_FINALIZED is invoked and MPI is only partially finalized,
wait until MPI is fully finalized before returning.
- If the ompi_mpix_allow_multi_init MCA param is true, allow MPI_INIT
and MPI_INIT_THREAD to be invoked multiple times without error (MPI
will be safely initialized only the first time it is invoked); see the
sketch after this list.
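A minimal sketch of how the proposed multi-init extension could be
exercised; the MCA param is the proposal above, set e.g. via
mpirun --mca ompi_mpix_allow_multi_init 1 ./a.out:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        /* With the proposed param set to true, this second call is a
         * harmless no-op instead of an error; MPI is actually
         * initialized only by the first call. */
        MPI_Init(&argc, &argv);
        MPI_Finalize();
        return 0;
    }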
Add an ompi_mpi_dynamics_disable() function to disable MPI dynamic
process functionality (i.e., such that if MPI_COMM_SPAWN/etc. are
invoked, you'll get a show_help error explaining that MPI dynamic
process functionality is disabled in this environment, instead of a
potentially cryptic network or hardware error).
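A minimal sketch of the mechanism; apart from ompi_mpi_dynamics_disable()
itself, the flag, help-file, and topic names below are assumptions, while
opal_show_help() is the existing show_help entry point:

    #include <stdbool.h>

    static bool        ompi_mpi_dynamics_enabled = true;
    static const char *ompi_mpi_dynamics_msg = NULL;

    /* Called by environments that cannot support dynamics: */
    void ompi_mpi_dynamics_disable(const char *msg)
    {
        ompi_mpi_dynamics_enabled = false;
        ompi_mpi_dynamics_msg = msg;
    }

    /* ...and inside MPI_Comm_spawn and friends (illustrative help
     * file and topic names): */
    if (!ompi_mpi_dynamics_enabled) {
        opal_show_help("help-mpi-api.txt", "dynamics-disabled", true,
                       "MPI_Comm_spawn", ompi_mpi_dynamics_msg);
        return OMPI_ERRHANDLER_INVOKE(comm, MPI_ERR_SPAWN,
                                      "MPI_Comm_spawn");
    }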
Fixes #984
When profiling is built, this prevents oshmem subroutines from being
wrapped twice by third-party tools (e.g., once in oshmem and once in
MPI).
See the discussion starting at http://www.open-mpi.org/community/lists/devel/2015/08/17842.php
Thanks to Bert Wesarg for bringing this to our attention.
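A hedged C sketch of the double-wrap problem (symbol and hook names are
illustrative): a tool interposes on the public symbol and forwards to the
p-symbol, so oshmem-internal callers must invoke the p-symbol directly:

    /* Tool's wrapper around the public OpenSHMEM symbol: */
    void shmem_barrier_all(void)
    {
        tool_enter("shmem_barrier_all");   /* hypothetical tool hooks */
        pshmem_barrier_all();              /* forward to the real code */
        tool_leave("shmem_barrier_all");
    }

    /* oshmem-internal caller (e.g. a Fortran stub): calling
     * pshmem_barrier_all() instead of shmem_barrier_all() keeps the
     * tool from counting the same user call twice. */
    void shmem_barrier_all_f(void)
    {
        pshmem_barrier_all();
    }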
This commit adds man pages for the MPI_Win_allocate and MPI_Win_allocate_shared
MPI-3 functions. The man page for MPI_Win_create has also been updated to
indicate support for the same_size and same_disp_unit info keys.
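A usage example assembled for illustration (not taken from the man pages
themselves); the two info keys assert that every process passes identical
size and disp_unit values:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm shm;
        MPI_Info info;
        MPI_Win  win;
        int     *base;

        MPI_Init(&argc, &argv);
        /* Shared-memory windows need an intra-node communicator. */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &shm);

        MPI_Info_create(&info);
        MPI_Info_set(info, "same_size", "true");
        MPI_Info_set(info, "same_disp_unit", "true");

        MPI_Win_allocate_shared(1024 * sizeof(int), sizeof(int), info,
                                shm, &base, &win);

        MPI_Win_free(&win);
        MPI_Info_free(&info);
        MPI_Comm_free(&shm);
        MPI_Finalize();
        return 0;
    }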
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
- MPI_Compare_and_swap
- MPI_Fetch_and_op
- MPI_Raccumulate
- MPI_Win_detach
Thanks to Michael Knobloch and Takahiro Kawashima for bringing this
to our attention.
Bring Slurm PMI-1 component online
Bring the s2 component online
Little cleanup: let the various PMIx modules set the process name during init, and then just raise it up to the ORTE level. This is required because the different PMI environments all pass the jobid in different ways.
Bring the OMPI pubsub/pmi component online
Get comm_spawn working again
Ensure we always provide a cpuset, even if it is NULL
pmix/cray: adjust cray pmix component for pmix
Make changes so cray pmix can work within the integrated
ompi/pmix framework.
Bring singletons back online. Implement the comm_spawn operation using pmix - not tested yet
Cleanup comm_spawn - procs now starting, error in connect_accept
Complete integration
Ensure that the ompi/pompi versions are defined for platforms that don't
have weak symbols. Also make fortran/mpif-h/profile build a separate
sizeof library, just like fortran/mpif-h does.
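For context, a hedged sketch of the weak-vs-no-weak mechanism the separate
profile builds exist to support (the guard macro name is illustrative, not
Open MPI's actual configure output):

    #include <mpi.h>

    /* With weak-symbol support, one definition plus a weak alias is
     * enough:
     *
     *     #pragma weak MPI_Send = PMPI_Send
     *
     * Without weak symbols, the same source is compiled a second
     * time with the public name redefined to the profiling name: */
    #ifdef BUILDING_PROFILING_LAYER   /* illustrative guard */
    #define MPI_Send PMPI_Send
    #endif

    int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm)
    {
        /* ... one body, emitted as MPI_Send in the normal build and
         * as PMPI_Send in the profiling build ... */
        return MPI_SUCCESS;
    }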
Per http://www.open-mpi.org/community/lists/devel/2015/08/17775.php,
some compilers don't like it when there's a .f90 file that only
contains comments (and no actual Fortran code). So if OMPI determines
that the Fortran compiler does not support enough Fortran mojo to
support MPI_SIZEOF, generate at least one dummy Fortran subroutine
that can be compiled in an otherwise barren Fortran landscape that is
devoid of life and hope.
This pull request adds an ArrayList of type Buffer to
the Request class. Whenever a request object is created
that has associated buffers, the buffers should be added
to this list so the Java garbage collector does
not dispose of the buffers prematurely.
This is a more robust expansion on the idea first proposed by
@ggouaillardet
Fixes #369
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>
Fortran uses objects (ompi_f08_mpi_comm_world, mpi_fortran_bottom, ...) that are defined in C.
Some compilers have different requirements on how these objects should be aligned.
A smaller alignment in C can lead to several confusing warnings from the linker, so try to
find the alignment expected by the Fortran compiler and inform the C compiler.
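A minimal sketch of the outcome, assuming a GCC-style aligned attribute
and an illustrative configure-probed macro:

    /* Value probed by configure from the Fortran compiler
     * (macro name and value here are assumptions): */
    #define OMPI_FORTRAN_ALIGNMENT 16

    /* C definition of an object referenced from Fortran, e.g.
     * mpi_fortran_bottom; aligning it as Fortran expects silences
     * the linker's alignment-mismatch warnings. */
    int mpi_fortran_bottom
        __attribute__((aligned(OMPI_FORTRAN_ALIGNMENT)));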
The libmpi_*.la Fortran libraries make some direct calls to
libopen-pal.la. In many (most?) cases, having libmpi_* link
against libmpi is sufficient (because libmpi pulls in libopen-pal).
But when building RPMs on SLES, some compiler/linker flags are used
that seem to make this implicit linking not sufficient -- we get
missing opal symbols when creating libmpi_mpifh.la. So link in
open-pal directly (vs. indirectly), which solves the problem.
The prior code was checking string constants (which are #defines from
configure) against NULL. They can never be NULL, so the checks were
overly defensive. If the preprocessor macros do not exist, we'll get
a different compiler error. So remove the dead code.
This fixes CID 72349.
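The removed pattern, in miniature (the macro name here is illustrative):

    /* The macro is a string literal injected by configure, so the
     * NULL test can never fire; if the macro were missing, the code
     * would fail to compile rather than reach the check. */
    #define OMPI_CONFIGURE_FLAGS "-g -O2"   /* from configure */

    const char *flags = OMPI_CONFIGURE_FLAGS;
    if (NULL != flags) {   /* always true: dead code, now removed */
        /* ... */
    }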
A helper method in Request.java could cause a crash
if the request array that was passed contained nulls.
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>
Added Cloneable to the list of implemented interfaces, as per a
Coverity suggestion. The required methods were already
implemented, but this was not explicitly stated. This is
an intent-revealing change.
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>
Includes Java bindings for MPI_GET_ELEMENTS_X and
MPI_STATUS_SET_ELEMENTS_X. This PR also adds the
Count object, which represents MPI_Count.
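For reference, a small C example of the underlying MPI-3 calls these
bindings wrap (the new Count object plays the role of MPI_Count):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Status status;
        MPI_Count  count;

        MPI_Init(&argc, &argv);

        /* Record that a status represents 42 MPI_INT elements, then
         * read the count back through the _X (MPI_Count) interface. */
        MPI_Status_set_elements_x(&status, MPI_INT, 42);
        MPI_Get_elements_x(&status, MPI_INT, &count);
        printf("elements: %lld\n", (long long)count);

        MPI_Finalize();
        return 0;
    }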
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>
Only define the unique Fortran symbol, depending on the name-mangling scheme:
- CAPS
- PLAIN
- SINGLE_UNDERSCORE
- DOUBLE_UNDERSCORE
and bind the f08 symbol to the uniquely defined C symbol (a sketch
follows below).
Use real data structures to make the code simpler.
(perl script written by Jeff)
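A hedged C sketch of the scheme (macro names are illustrative; the real
ones come from configure):

    /* Exactly one C symbol is emitted per Fortran routine, and the
     * f08 binding is bound to that one symbol: */
    #if   defined(OMPI_FORTRAN_CAPS)
    #define MPI_SEND_UNIQUE  MPI_SEND
    #elif defined(OMPI_FORTRAN_PLAIN)
    #define MPI_SEND_UNIQUE  mpi_send
    #elif defined(OMPI_FORTRAN_SINGLE_UNDERSCORE)
    #define MPI_SEND_UNIQUE  mpi_send_
    #else /* OMPI_FORTRAN_DOUBLE_UNDERSCORE */
    #define MPI_SEND_UNIQUE  mpi_send__
    #endif

    /* The single definition the f08 bindings are bound against: */
    void MPI_SEND_UNIQUE(/* mangled Fortran argument list */);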