As of v15.7, the PGI Fortran compiler does not properly support how
Open MPI uses the "USE ... ONLY" Fortran syntax to include modules
with conflicting symbol definitions (interestingly, pgfortran only has
a problem with this when compiling with -g).
In short, OMPI uses "USE :: module_aaa, ONLY: foo" and "USE ::
module_bbb, ONLY: bar" to use modules aaa and bbb, even though they
contain conflicting definitions for some symbols. However, the use of
the ONLY clause should preclude the inclusion of the conflicting
symbols -- as the word implies, it should direct the compiler to
*only* use the symbols identified by the clause (i.e., foo and bar, in
this example).
This commit adds a configure test for this capability. If the
compiler fails to build a simple test that mimics this behavior, then
disable the mpi_f08 bindings.
Fixes open-mpi/ompi#857
After long debugging, last week I found the reason this optimization originally broke
some hdf5 tests. We now pass the hdf5 test suite with the optimization actively in use.
Specifically:
- reduce the number of realloc's and malloc's by moving
  some arrays out of the cycle loop, if we know that their
  size is not changing
- store the rank of the aggregator in a separate variable to avoid
continuous dereferencing
- change the wait_all logic in write_all to use a fixed number of requests
  (even if some of them are MPI_REQUEST_NULL); see the sketch after this list
- fix the timing to consider the two initial allgather and the one
  allgatherv operation as part of it
- add more comments.
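For context, here is a minimal sketch of the fixed-request-count pattern mentioned above; the array size, function name, and variable names are illustrative, not taken from the actual fcoll code:

```c
#include <mpi.h>

#define MAX_REQS 8  /* illustrative upper bound, not the real fcoll limit */

/* Post up to 'nactive' sends to 'peer', then wait on a fixed-size request
 * array.  MPI_Waitall simply skips entries that are MPI_REQUEST_NULL, so
 * the array can be sized once for the worst case instead of being
 * reallocated every cycle.  Matching receives are assumed on the peer. */
static void wait_fixed_number(const int *payload, int nactive, int peer,
                              MPI_Comm comm)
{
    MPI_Request reqs[MAX_REQS];

    for (int i = 0; i < MAX_REQS; i++) {
        reqs[i] = MPI_REQUEST_NULL;   /* unused slots stay null */
    }
    for (int i = 0; i < nactive && i < MAX_REQS; i++) {
        MPI_Isend(&payload[i], 1, MPI_INT, peer, i, comm, &reqs[i]);
    }

    /* Always wait on the full, fixed-size array. */
    MPI_Waitall(MAX_REQS, reqs, MPI_STATUSES_IGNORE);
}
```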
- MPI_Compare_and_swap
- MPI_Fetch_and_op
- MPI_Raccumulate
- MPI_Win_detach
Thanks to Michael Knobloch and Takahiro Kawashima for bringing this
to our attention
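For reference only, a hedged sketch of how two of the listed one-sided operations are typically called; the window setup, target rank, displacement, and helper names are assumptions, not code from this change:

```c
#include <mpi.h>
#include <stdint.h>

/* Atomically add 1 to a 64-bit counter exposed by rank 0 at displacement 0
 * in 'win', returning the counter's previous value. */
static int64_t fetch_and_add_one(MPI_Win win)
{
    int64_t one = 1, previous = 0;

    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    MPI_Fetch_and_op(&one, &previous, MPI_INT64_T,
                     0 /* target rank */, 0 /* target disp */, MPI_SUM, win);
    MPI_Win_unlock(0, win);
    return previous;
}

/* Atomically replace the counter with 'desired' only if it still equals
 * 'expected'; returns the value found at the target. */
static int64_t compare_and_swap_counter(MPI_Win win, int64_t expected,
                                        int64_t desired)
{
    int64_t old_value = 0;

    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    MPI_Compare_and_swap(&desired, &expected, &old_value, MPI_INT64_T,
                         0 /* target rank */, 0 /* target disp */, win);
    MPI_Win_unlock(0, win);
    return old_value;
}
```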
Bring Slurm PMI-1 component online
Bring the s2 component online
Little cleanup - let the various PMIx modules set the process name during init, and then just raise it up to the ORTE level. Required as the different PMI environments all pass the jobid in different ways.
Bring the OMPI pubsub/pmi component online
Get comm_spawn working again
Ensure we always provide a cpuset, even if it is NULL
pmix/cray: adjust cray pmix component for pmix
Make changes so cray pmix can work within the integrated
ompi/pmix framework.
Bring singletons back online. Implement the comm_spawn operation using pmix - not tested yet
Cleanup comm_spawn - procs now starting, error in connect_accept
Complete integration
In an abort situation, just bail out immediately -- don't try to
invoke any atexit()/on_exit()-registered functions.
This is similar rationale to
open-mpi/ompi@17846411c3.
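A minimal sketch of the idea in plain C, assuming POSIX _exit(); the function names are illustrative and not the actual OMPI/ORTE abort path:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void cleanup(void)
{
    /* Registered via atexit(); may touch state that is unsafe mid-abort. */
    fprintf(stderr, "running atexit cleanup\n");
}

int main(void)
{
    atexit(cleanup);

    int aborting = 1;
    if (aborting) {
        /* exit(1) would run cleanup(); _exit() bails out immediately,
         * skipping all atexit()/on_exit()-registered functions. */
        _exit(1);
    }

    exit(0);  /* normal path: atexit handlers do run */
}
```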
Ensure that the ompi/pompi versions are defined for platforms that don't
have weak symbols. Also make fortran/mpif-h/profile build a separate
sizeof library, just like fortran/mpif-h does.
Per http://www.open-mpi.org/community/lists/devel/2015/08/17775.php,
some compilers don't like it when there's a .f90 file that only
contains comments (and no actual Fortran code). So if OMPI determines
that the Fortran compiler does not support enough Fortran mojo to
support MPI_SIZEOF, generate at least one dummy Fortran subroutine
that can be compiled in an otherwise barren Fortran landscape that is
devoid of life and hope.
This pull request adds an arraylist of type Buffer to
the Request class. Whenever a request object is created
that has associated buffers, the buffers should be added
to this arraylist so the Java garbage collector does
not dispose of the buffers prematurely.
This is a more robust expansion on the idea first proposed by
@ggouaillardet
Fixes #369
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>
Portals4 supports atomic ops on datatypes less than or equal to
max_fetch_atomic_size bytes. This commit fixes a bug that required
the datatype to be less than max_fetch_atomic_size bytes.
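Put differently, the check becomes "less than or equal" instead of strictly "less than". A hedged sketch with made-up names (the real component keeps this limit in its own module structure):

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative only: in the real component the limit comes from the
 * Portals4 network interface, not a file-scope variable. */
static size_t max_fetch_atomic_size = 8;

static bool datatype_fits_atomic(size_t datatype_size)
{
    /* Before the fix, 'datatype_size < max_fetch_atomic_size' rejected
     * datatypes that were exactly max_fetch_atomic_size bytes long.
     * Portals4 supports sizes up to and including the limit: */
    return datatype_size <= max_fetch_atomic_size;
}
```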
Fortran uses objects (ompi_f08_mpi_comm_world, mpi_fortran_bottom, ...) that are defined in C.
Some compilers have different requirements on how these objects should be aligned.
Smaller alignment in C can lead to several confusing warnings from the linker, so try to
find the alignment expected by the Fortran compiler, and inform the C compiler.
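On the C side, the result looks roughly like the sketch below; the alignment value, macro name, and symbol names are hypothetical stand-ins for whatever configure actually detects:

```c
/* Hypothetical result of the configure probe: the Fortran compiler expects
 * these objects to be aligned to (say) 16 bytes. */
#define EXAMPLE_FORTRAN_EXPECTED_ALIGNMENT 16

/* Placeholder type standing in for the real predefined-handle structure. */
struct example_predefined_comm_t {
    char padding[64];
};

/* Without the attribute, the C compiler may pick a smaller alignment than
 * the Fortran compiler expects, and the linker then warns about the
 * mismatched alignment of the shared symbol. */
struct example_predefined_comm_t example_f08_mpi_comm_world
    __attribute__((aligned(EXAMPLE_FORTRAN_EXPECTED_ALIGNMENT)));
```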
The libmpi_*.la fortran libraries make some direct calls to
libopen-pal.la. In many (most?) cases, having libmpi_* link
against libmpi is sufficient (because libmpi pulls in libopen-pal).
But when building RPMs on SLES, some compiler/linker flags are used
that seem to make this implicit linking not sufficient -- we get
missing opal symbols when creating libmpi_mpifh.la. So link in
open-pal directly (vs. indirectly), which solves the problem.
On some OSs (e.g., Ubuntu 14.04.2 LTS), the linker is configured such
that the symbols of library dependencies are not available to the
application. Hence, we need to explicitly list such dependencies when
creating the executable.
The tests in this commit use OPAL function calls, so we must
explicitly link in libopen-pal.so.
- make the internal structure follow the Open MPI naming convention
- provide a single flag/macro that controls the compilation/use of this
  feature, so that nobody using it has to modify every single
  fcoll component. A configure option could be added later if desired.
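A minimal sketch of the single-macro idea; the macro and function names are hypothetical, not the actual fcoll identifiers:

```c
/* Hypothetical central switch for the feature; a configure option could
 * later define it instead of this hard-coded default. */
#ifndef EXAMPLE_FCOLL_ENABLE_FEATURE
#define EXAMPLE_FCOLL_ENABLE_FEATURE 1
#endif

void example_fcoll_entry_point(void)
{
#if EXAMPLE_FCOLL_ENABLE_FEATURE
    /* feature code path, compiled only when the single macro is enabled */
#else
    /* fallback path; individual fcoll components need no changes either way */
#endif
}
```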
configury: fix hcoll, fca and mxm detection and revamp yalla Makefile.am
Thanks to David Shrader and Ake Sandgren for bringing this issue to our attention
* do not add -I/.../include/fca -I /.../include/fca_core to CPPFLAGS
* allow configure --with-fca
* search fca libs in both DIR/lib and DIR/lib64
* fix the description of the --with-fca option
* do not add -I/.../include/hcoll -I /.../include/hcoll/api to CPPFLAGS
* allow configure --with-hcoll
* search hcoll libs in both DIR/lib and DIR/lib64
* fix the description of the --with-hcoll option