Since OS X 10.11 (aka El Capitan), DYLD_LIBRARY_PATH is no longer
propagated to child processes, so try to dlopen libmpi with its full
path, using the directory of libmpi_java.
Fixes open-mpi/ompi#1220
Thanks to Alexander Daryin for reporting this
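For reference, a minimal sketch of the approach (the function name, buffer sizes, and the hard-coded "libmpi.dylib" filename are illustrative, not the exact Open MPI JNI code): dladdr() reports the path of the shared object containing a given symbol, so the directory libmpi_java was loaded from can be recovered and libmpi dlopen'ed from there without relying on DYLD_LIBRARY_PATH.

    #include <dlfcn.h>
    #include <libgen.h>
    #include <stdio.h>
    #include <string.h>

    static void *open_libmpi_next_to_us(void)
    {
        Dl_info info;
        char dir[4096];
        char full[4352];

        /* Any symbol defined in libmpi_java works as an anchor for dladdr(). */
        if (0 == dladdr((void *)&open_libmpi_next_to_us, &info)) {
            return NULL;
        }

        /* dirname() may modify its argument, so work on a copy. */
        strncpy(dir, info.dli_fname, sizeof(dir) - 1);
        dir[sizeof(dir) - 1] = '\0';

        snprintf(full, sizeof(full), "%s/libmpi.dylib", dirname(dir));
        return dlopen(full, RTLD_NOW | RTLD_GLOBAL);
    }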
Remove an excessive parameter check to avoid a premature exit from the collective.
The MPI standard says:
The type signature associated with sendcount, sendtype, at a process must be equal to
the type signature associated with recvcount, recvtype at any other process. This implies
that the amount of data sent must be equal to the amount of data received, pairwise between
every pair of processes.
In the case of an inter-communicator there are two groups of processes, and the "left" group may call
MPI_Alltoall(NULL, 0, MPI_INT, buf, 10, MPI_INT, comm, ...);
while the "right" group calls
MPI_Alltoall(buf, 10, MPI_INT, NULL, 0, MPI_INT, comm, ...);
This is legal even though one of the groups receives 0 bytes from the others.
This was triggered by the MPICH/coll test icalltoall.
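A minimal self-contained sketch of that pattern (the intercomm argument and the in_left_group flag are placeholders, not names from the Open MPI sources):

    #include <mpi.h>
    #include <stdlib.h>

    void intercomm_alltoall(MPI_Comm intercomm, int in_left_group)
    {
        int remote_size;
        MPI_Comm_remote_size(intercomm, &remote_size);

        if (in_left_group) {
            /* sendcount = 0: this group contributes no data at all,
             * but still receives 10 ints from every remote process. */
            int *recvbuf = malloc(10 * remote_size * sizeof(int));
            MPI_Alltoall(NULL, 0, MPI_INT, recvbuf, 10, MPI_INT, intercomm);
            free(recvbuf);
        } else {
            /* recvcount = 0: this group sends 10 ints to every remote
             * process and receives nothing back. */
            int *sendbuf = calloc(10 * remote_size, sizeof(int));
            MPI_Alltoall(sendbuf, 10, MPI_INT, NULL, 0, MPI_INT, intercomm);
            free(sendbuf);
        }
    }

Pairwise, the type signatures still match: the 0 ints sent by the left group correspond to the 0 ints received by the right group, and vice versa for the 10-int transfers.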
Instead of solely relying on the out value definitions in
MPI_Waitsome.3, explicitly copy this text here.
Note that the original text in this man page was copied verbatim from
the MPI spec; we've now added a bit more text (copied from
MPI_Waitsome.3in) that explains the out values so that users don't
have to cross-reference to another man page.
Thanks to Eric Schnetter for the suggestion.
Fixes open-mpi/ompi#1153
This commit changes the way ompi_proc_t's are retained/released by
ompi_group_t's. Before this change, ompi_proc_t's were retained once
for the group and then once more for each retain of the group. That scheme
adds unnecessary overhead (the group's proc list must be traversed each time
the group is retained) and causes problems when using an async
add_procs.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
Without this modification, gfortran throws the following error
if these variables are used for `MPI_DIST_GRAPH_CREATE_ADJACENT` or
`MPI_DIST_GRAPH_CREATE`.
Error: There is no specific subroutine for the generic
'mpi_dist_graph_create_adjacent' at (1)
When using dynamic memory windows, the displacement becomes a
pointer. Since the high bit may be set on valid pointers on some
platforms, the check for disp > 0 is invalid. This commit adds the
window flavor to ompi_win_t and disables the displacement check when
operating on dynamic memory windows.
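For context, a minimal sketch of the usage the old check could reject (names and counts are illustrative): with MPI_Win_create_dynamic, the target displacement passed to an RMA call is an absolute address obtained from MPI_Get_address, and on some platforms that address may have its high bit set, i.e. look negative when treated as a signed MPI_Aint.

    #include <mpi.h>
    #include <stdlib.h>

    void dynamic_window_put(MPI_Comm comm, int target_rank)
    {
        MPI_Win win;
        MPI_Win_create_dynamic(MPI_INFO_NULL, comm, &win);

        /* Every process attaches a local buffer and publishes its address. */
        int *buf = malloc(16 * sizeof(int));
        MPI_Win_attach(win, buf, 16 * sizeof(int));

        MPI_Aint local_addr, target_addr;
        MPI_Get_address(buf, &local_addr);
        /* Exchange addresses; with dynamic windows the "displacement" used
         * in RMA calls is this absolute address, which may look negative if
         * the high bit is set on the target platform. */
        MPI_Sendrecv(&local_addr, 1, MPI_AINT, target_rank, 0,
                     &target_addr, 1, MPI_AINT, target_rank, 0,
                     comm, MPI_STATUS_IGNORE);

        int value = 42;
        MPI_Win_lock(MPI_LOCK_SHARED, target_rank, 0, win);
        MPI_Put(&value, 1, MPI_INT, target_rank, target_addr, 1, MPI_INT, win);
        MPI_Win_unlock(target_rank, win);

        MPI_Win_detach(win, buf);
        MPI_Win_free(&win);
        free(buf);
    }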
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
Currently, MPI fails with MPI_ERR_ARG when MPI_IN_PLACE is passed to a
non-blocking collective. This is counterintuitive since MPI_IN_PLACE is a
legitimate parameter. However, MPI_IN_PLACE might not be correctly
implemented by all the non-blocking modules (libnbc, ...), so fail with
MPI_ERR_INTERN for the time being.
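The kind of call in question, sketched minimally (buffer contents and count are illustrative):

    #include <mpi.h>

    void inplace_iallreduce(MPI_Comm comm)
    {
        int data[4] = {1, 2, 3, 4};
        MPI_Request req;

        /* MPI_IN_PLACE is a legitimate sendbuf here: the reduction reads
         * and writes the receive buffer directly. */
        MPI_Iallreduce(MPI_IN_PLACE, data, 4, MPI_INT, MPI_SUM, comm, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }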
Proposed extensions for Open MPI:
- If MPI_INITIALIZED is invoked and MPI is only partially initialized,
wait until MPI is fully initialized before returning.
- If MPI_FINALIZED is invoked and MPI is only partially finalized,
wait until MPI is fully finalized before returning.
- If the ompi_mpix_allow_multi_init MCA param is true, allow MPI_INIT
and MPI_INIT_THREAD to be invoked multiple times without error (MPI
will be safely initialized only the first time it is invoked).
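For illustration, the kind of third-party library code these semantics are meant to make reliable (the helper name is made up):

    #include <mpi.h>
    #include <stdio.h>

    /* Only touch MPI when it is fully initialized and not finalized.  With
     * the proposed semantics, MPI_Initialized/MPI_Finalized block while
     * another thread is still inside MPI_INIT or MPI_FINALIZE, so this
     * check cannot race with a partially initialized/finalized library. */
    void maybe_report_rank(void)
    {
        int initialized = 0, finalized = 0;

        MPI_Initialized(&initialized);
        MPI_Finalized(&finalized);

        if (initialized && !finalized) {
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            printf("library hook called on rank %d\n", rank);
        }
    }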
Add ompi_mpi_dynamics_disable() function to disable MPI dynamic
process functionality (i.e., such that if MPI_COMM_SPAWN/etc. are
invoked, you'll get a show_help error explaining that MPI dynamic
process functionality is disabled in this environment -- instead of a
potentially-cryptic network or hardware error).
Fixes #984
when profiling is built.
This prevents oshmem subroutines from being wrapped twice by third-party
tools (e.g. once in oshmem and once in MPI).
See the discussion starting at http://www.open-mpi.org/community/lists/devel/2015/08/17842.php
Thanks to Bert Wesarg for bringing this to our attention
This commit adds man pages for the MPI_Win_allocate and MPI_Win_allocate_shared
MPI-3 functions. The man page for MPI_Win_create has also been updated to
indicate support for the same_size and same_disp_unit info keys.
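As a usage illustration (not text from the man pages; the helper name is made up), the info keys can be passed like this when every process allocates the same amount of memory with the same displacement unit:

    #include <mpi.h>

    void allocate_uniform_window(MPI_Comm comm, MPI_Aint bytes, MPI_Win *win)
    {
        void *base;
        MPI_Info info;

        MPI_Info_create(&info);
        /* Promise that all processes pass identical size and disp_unit,
         * allowing the implementation to optimize address translation. */
        MPI_Info_set(info, "same_size", "true");
        MPI_Info_set(info, "same_disp_unit", "true");

        MPI_Win_allocate(bytes, sizeof(int), info, comm, &base, win);
        MPI_Info_free(&info);
    }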
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
- MPI_Compare_and_swap
- MPI_Fetch_and_op
- MPI_Raccumulate
- MPI_Win_detach
Thanks to Michael Knobloch and Takahiro Kawashima for bringing this
to our attention
Bring Slurm PMI-1 component online
Bring the s2 component online
Little cleanup - let the various PMIx modules set the process name during init, and then just raise it up to the ORTE level. Required as the different PMI environments all pass the jobid in different ways.
Bring the OMPI pubsub/pmi component online
Get comm_spawn working again
Ensure we always provide a cpuset, even if it is NULL
pmix/cray: adjust cray pmix component for pmix
Make changes so cray pmix can work within the integrated
ompi/pmix framework.
Bring singletons back online. Implement the comm_spawn operation using pmix - not tested yet
Cleanup comm_spawn - procs now starting, error in connect_accept
Complete integration
Ensure we define ompi/pompi versions for platforms that don't have
weak symbols. Also make fortran/mpif-h/profile build a separate
sizeof library, just like fortran/mpif-h does.
Per http://www.open-mpi.org/community/lists/devel/2015/08/17775.php,
some compilers don't like it when there's a .f90 file that only
contains comments (and no actual Fortran code). So if OMPI determines
that the Fortran compiler does not support enough Fortran mojo to
support MPI_SIZEOF, generate at least one dummy Fortran subroutine
that can be compiled in an otherwise barren Fortran landscape that is
devoid of life and hope.
This pull request adds an ArrayList of type Buffer to
the Request class. Whenever a request object is created
that has associated buffers, the buffers should be added
to this list so the Java garbage collector does
not dispose of the buffers prematurely.
This is a more robust expansion on the idea first proposed by
@ggouaillardet
Fixes #369
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>
Fortran uses objects (ompi_f08_mpi_comm_world, mpi_fortran_bottom, ...) that are defined in C.
Some compilers have different requirements on how these objects should be aligned.
A smaller alignment in C can lead to several confusing warnings from the linker, so try to
find the alignment expected by the Fortran compiler, and inform the C compiler of it.
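A minimal sketch of the idea (the macro name, the 16-byte value, and the object's type are illustrative; the real value is whatever configure measures for the Fortran compiler):

    /* Declare the C-side definition with at least the alignment the Fortran
     * compiler assumes for the corresponding object, so the linker does not
     * warn that the C symbol is less aligned than the Fortran side expects. */
    #define OMPI_FORTRAN_COMMON_ALIGNMENT 16   /* hypothetical, measured by configure */

    int mpi_fortran_bottom __attribute__((aligned(OMPI_FORTRAN_COMMON_ALIGNMENT)));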
The libmpi_*.la fortran libraries make some direct calls to
libopen-pal.la. In many (most?) cases, having libmpi_* link
against libmpi is sufficient (because libmpi pulls in libopen-pal).
But when building RPMs on SLES, some compiler/linker flags are used
that seem to make this implicit linking not sufficient -- we get
missing opal symbols when creating libmpi_mpifh.la. So link in
open-pal directly (vs. indirectly), which solves the problem.
The prior code was checking string constants (which are #defines from
configure) against NULL. They can never be NULL, so the checks were
overly defensive. If the preprocessor macros do not exist, we'll get
a different compiler error. So remove the dead code.
This fixes CID 72349.
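For illustration, the removed pattern was roughly of this shape (the macro name is made up; the real ones are generated by configure):

    #include <stdio.h>

    /* Expands to a string literal at preprocessing time. */
    #define CONFIGURE_HOST "buildhost.example.com"

    void print_configure_host(void)
    {
        /* Dead check: the address of a string literal can never be NULL,
         * so this condition is always true. */
        if (NULL != CONFIGURE_HOST) {
            printf("configured on %s\n", CONFIGURE_HOST);
        }
    }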
A helper method in Request.java could cause a crash
if the request array that was passed contained nulls.
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>
Added Cloneable to the implemented interface list as per
Coverity suggestion. The required methods were already
implemented, but the interface was not explicitly declared. This is
an intent-revealing change.
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>
Includes Java bindings for MPI_GET_ELEMENTS_X and
MPI_STATUS_SET_ELEMENTS_X. This PR also adds the
Count object, which represents MPI_Count.
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>
Only define the unique Fortran symbol depending on
- CAPS
- PLAIN
- SINGLE_UNDERSCORE
- DOUBLE_UNDERSCORE
and bind the f08 symbol to the uniquely defined C symbol.
Use real data structures to make the code simpler.
(perl script written by Jeff)
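A hand-written sketch of what the generated code boils down to (the mangling macros and the wrapper shown are illustrative stand-ins, not the actual generated sources): exactly one external symbol is emitted, matching the scheme configure detected, and the f08 binding is pointed at it.

    #include <mpi.h>

    /* Pick exactly one Fortran-callable symbol name for MPI_BARRIER,
     * depending on the compiler's name-mangling scheme (macro names here
     * are hypothetical stand-ins for the ones configure defines). */
    #if   defined(FORTRAN_MANGLING_CAPS)
    #  define MPI_BARRIER_F_SYM MPI_BARRIER
    #elif defined(FORTRAN_MANGLING_PLAIN)
    #  define MPI_BARRIER_F_SYM mpi_barrier
    #elif defined(FORTRAN_MANGLING_SINGLE_UNDERSCORE)
    #  define MPI_BARRIER_F_SYM mpi_barrier_
    #else  /* DOUBLE_UNDERSCORE */
    #  define MPI_BARRIER_F_SYM mpi_barrier__
    #endif

    /* The single Fortran entry point; the f08 module binds to this symbol. */
    void MPI_BARRIER_F_SYM(MPI_Fint *comm, MPI_Fint *ierr)
    {
        *ierr = (MPI_Fint) MPI_Barrier(MPI_Comm_f2c(*comm));
    }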
There are a few places where adding the @param tag that javadoc
wants for a variable does not make sense, so I added suppression statements
in those areas.
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>
Bindings for MPI_WIN_FLUSH_LOCAL, MPI_WIN_FLUSH_LOCAL_ALL, MPI_WIN_ALLOCATE, MPI_WIN_ALLOCATE_SHARED, and MPI_COMM_SPLIT_TYPE. Also added several necessary constants.
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>