After a long debugging session last week, I found the reason this optimization
originally broke some HDF5 tests. We now pass the HDF5 test suite with the
optimization actively in use.
Specifically:
- reduce the number of realloc() and malloc() calls by moving
some arrays out of the cycle loop when we know that their
size is not changing
- store the rank of the aggregator in a separate variable to avoid
repeated dereferencing
- change the wait_all logic in write_all to use a fixed number of requests
(even if some are MPI_REQUEST_NULL); see the sketch after this list
- fix the timing to consider the two initial allgather and the one
allgatherv operation to be part of it
- add more comments.
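
A minimal sketch of the fixed-request-count wait_all change, assuming a
hypothetical per-cycle upper bound MAX_REQS: MPI_Waitall treats
MPI_REQUEST_NULL entries as already complete, so the same fixed-size
array can be passed every cycle without recounting or reallocating.

    #include <mpi.h>

    #define MAX_REQS 64  /* hypothetical upper bound on requests per cycle */

    /* Slots with no outstanding operation stay MPI_REQUEST_NULL;
     * MPI_Waitall skips them, so no per-cycle recount is needed. */
    static void wait_cycle(MPI_Request reqs[MAX_REQS])
    {
        MPI_Waitall(MAX_REQS, reqs, MPI_STATUSES_IGNORE);
    }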
- MPI_Compare_and_swap
- MPI_Fetch_and_op
- MPI_Raccumulate
- MPI_Win_detach
Thanks to Michael Knobloch and Takahiro Kawashima for bringing this
to our attention.
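
For reference, a minimal sketch (not taken from the change itself) of the
kind of one-sided atomic these routines provide, here a fetch-and-add via
MPI_Fetch_and_op against a counter exposed on rank 0:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        long counter = 0, one = 1, prev = 0;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Win_create(&counter, sizeof(long), sizeof(long),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        /* Atomically add 1 to rank 0's counter, fetching the old value. */
        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
        MPI_Fetch_and_op(&one, &prev, MPI_LONG, 0, 0, MPI_SUM, win);
        MPI_Win_unlock(0, win);

        printf("rank %d saw %ld\n", rank, prev);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }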
Bring Slurm PMI-1 component online
Bring the s2 component online
Little cleanup - let the various PMIx modules set the process name during init, and then just raise it up to the ORTE level. Required as the different PMI environments all pass the jobid in different ways.
Bring the OMPI pubsub/pmi component online
Get comm_spawn working again
Ensure we always provide a cpuset, even if it is NULL
pmix/cray: adjust cray pmix component for pmix
Make changes so cray pmix can work within the integrated
ompi/pmix framework.
Bring singletons back online. Implement the comm_spawn operation using pmix - not tested yet
Cleanup comm_spawn - procs now starting, error in connect_accept
Complete integration
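
A minimal sketch of the user-facing operation being brought back online,
with "./worker" as a hypothetical child executable:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm children;
        int errcodes[2];

        MPI_Init(&argc, &argv);

        /* Spawn two children and get back an intercommunicator; the
         * underlying connect/accept path is what the cleanup above chased. */
        MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &children, errcodes);

        MPI_Comm_disconnect(&children);
        MPI_Finalize();
        return 0;
    }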
In an abort situation, just bail out immediately -- don't try to
invoke any atexit()/on_exit()-registered functions.
This is similar rationale to
open-mpi/ompi@17846411c3.
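
A minimal sketch of the distinction, with hypothetical names: _exit()
bypasses atexit()/on_exit()-registered handlers, while exit() runs them.

    #include <unistd.h>

    /* On the abort path, terminate without invoking any registered
     * cleanup functions, which may misbehave in a broken state. */
    static void abort_now(int status)
    {
        _exit(status);  /* exit(status) would run the handlers */
    }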
Ensure we define ompi/pompi versions for platforms that don't have
weak symbols. Also make fortran/mpif-h/profile build a separate
sizeof library, just like fortran/mpif-h does.
Per http://www.open-mpi.org/community/lists/devel/2015/08/17775.php,
some compilers don't like it when there's a .f90 file that only
contains comments (and no actual Fortran code). So if OMPI determines
that the Fortran compiler does not support enough Fortran mojo to
support MPI_SIZEOF, generate at least one dummy Fortran subroutine
that can be compiled in an otherwise barren Fortran landscape that is
devoid of life and hope.
This pull request adds an ArrayList of type Buffer to
the Request class. Whenever a request object is created
that has associated buffers, the buffers should be added
to this ArrayList so the Java garbage collector does
not dispose of the buffers prematurely.
This is a more robust expansion on the idea first proposed by
@ggouaillardet
Fixes #369
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>
Portals4 supports atomic ops on datatypes less than or equal to
max_fetch_atomic_size bytes. This commit fixes a bug that required
the datatype to be strictly less than max_fetch_atomic_size bytes.
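
A minimal sketch of the boundary condition, with hypothetical names;
datatypes of exactly max_fetch_atomic_size bytes must be accepted:

    #include <stdbool.h>
    #include <stddef.h>

    static bool fits_fetch_atomic(size_t dt_size, size_t max_fetch_atomic_size)
    {
        /* The buggy check used '<', wrongly rejecting the boundary case. */
        return dt_size <= max_fetch_atomic_size;
    }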
Fortran uses objects (ompi_f08_mpi_comm_world, mpi_fortran_bottom, ...) that are defined in C.
Some compilers have different requirements on how these objects should be aligned.
A smaller alignment in C can lead to several confusing warnings from the linker, so try to
find the alignment expected by the Fortran compiler and inform the C compiler.
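
A minimal sketch of the approach, with hypothetical names: configure
probes the alignment the Fortran compiler expects, and the C definition
is declared with at least that alignment (C11 _Alignas shown here).

    /* Hypothetical alignment value discovered at configure time. */
    #define OMPI_FORTRAN_ALIGNMENT 16

    typedef struct { int dummy; } predefined_comm_t;  /* stand-in type */

    /* Matching the Fortran-expected alignment on the C side avoids the
     * linker's mismatched-alignment warnings. */
    _Alignas(OMPI_FORTRAN_ALIGNMENT) predefined_comm_t comm_world_stub;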