As we changed the ABI (forcing a major release), we can limit
the size of the predefined communicators by moving the collective
structure outside the communicator. This might have a minimal, and
likely unnoticeable, impact on performance. This approach was
discussed at the January 2017 devel meeting.
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
Signed-off-by: Joshua Hursey <jhursey@us.ibm.com>
This commit adds the following Fortran named constants, which are
defined in the MPI standard but were missing from Open MPI:
- `MPI_LONG_LONG` (defined as a synonym of `MPI_LONG_LONG_INT`)
- `MPI_CXX_FLOAT_COMPLEX`
- `MPI_C_BOOL`
It also changes the value of the following Fortran named constant
for consistency:
- `MPI_C_COMPLEX` (`MPI_C_FLOAT_COMPLEX` is defined as a synonym of this)
Each needs a different solution, described below.
For `MPI_LONG_LONG`:
The value of `MPI_LONG_LONG` is defined to be the same as that of
`MPI_LONG_LONG_INT`, for the following reasons:
1. It is defined as a synonym of `MPI_LONG_LONG_INT` in
the MPI standard.
2. `MPI_LONG_LONG_INT` and `MPI_LONG_LONG` have the same value
for C in `mpi.h`.
3. `ompi_mpi_long_long` is not defined in
`ompi/datatype/ompi_datatype_module.c`.
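For reference, this mirrors how the two names already relate in C;
a rough sketch of the `mpi.h` pattern (illustrative, not the verbatim
header):

    /* Sketch: both C names resolve to the same predefined datatype object. */
    #define MPI_LONG_LONG_INT OMPI_PREDEFINED_GLOBAL(MPI_Datatype, ompi_mpi_long_long_int)
    #define MPI_LONG_LONG     MPI_LONG_LONG_INT  /* synonym, same value */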
For `MPI_CXX_FLOAT_COMPLEX`:
The existing `MPI_CXX_COMPLEX` is replaced with `MPI_CXX_FLOAT_COMPLEX`
because `MPI_CXX_FLOAT_COMPLEX` is the correct name defined in MPI-3.1,
and `MPI_CXX_COMPLEX` is not defined in MPI-3.1 (or any earlier version).
For compatibility, however, `MPI_CXX_COMPLEX` is treated as a synonym
of `MPI_CXX_FLOAT_COMPLEX` in Open MPI.
For `MPI_C_BOOL`:
`MPI_C_BOOL` is newly added. It is assigned the value that
`MPI_C_COMPLEX` previously used (68), because that value is no longer
in use (as described below) and it is a suitable position for a
datatype added in MPI-2.2.
For `MPI_C_COMPLEX`:
The existing `MPI_C_FLOAT_COMPLEX` is replaced with `MPI_C_COMPLEX`,
and `MPI_C_FLOAT_COMPLEX` is changed to have the same value.
In other words, `MPI_C_COMPLEX` becomes the canonical name and
`MPI_C_FLOAT_COMPLEX` an alias of it.
This is because the relationship between these datatypes is the same
as that between `MPI_LONG_LONG_INT` and `MPI_LONG_LONG`, which are
implemented in exactly that way.
However, in the datatype engine we use `ompi_mpi_c_float_complex`
rather than `ompi_mpi_c_complex` as the variable name, to keep
consistency with other similar types such as
`ompi_mpi_c_double_complex` (see George's comment in open-mpi/ompi#1927).
We don't delete `ompi_mpi_c_complex` for now because it is still used
elsewhere in the Open MPI code; it may be cleaned up in the future.
In addition, `MPI_CXX_COMPLEX`, which was defined only in the Open MPI
Fortran binding, is added to `mpi.h` for the C binding.
This commit breaks binary compatibility of Fortran `MPI_C_COMPLEX`.
When this commit is merged into the v2.x branch, the change to
`MPI_C_COMPLEX` should be excluded.
Add PMIx 2.0
Remove PMIx 1.1.4
Cleanup copying of component
Add missing file
Touch up a typo in the Makefile.am
Update the pmix ext114 component
Minor cleanups and resync to master
Update to latest PMIx 2.x
Update to the PMIx event notification branch latest changes
Without this modification, gfortran throws the following error
if these variables are used with `MPI_DIST_GRAPH_CREATE_ADJACENT` or
`MPI_DIST_GRAPH_CREATE`:
Error: There is no specific subroutine for the generic
'mpi_dist_graph_create_adjacent' at (1)
`MPI_ARGVS_NULL` should be a two-dimensional array.
Without this modification, gfortran throws the following error
if `MPI_ARGVS_NULL` is used with `MPI_COMM_SPAWN_MULTIPLE`:
Error: There is no specific subroutine for the generic
'mpi_comm_spawn_multiple' at (1)
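For comparison, in the C binding the corresponding argument is an
array of argv vectors (`char **array_of_argv[]`), i.e. conceptually
two-dimensional, and `MPI_ARGVS_NULL` stands in for it. A minimal
usage sketch (the command names here are made up):

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Comm intercomm;
        /* Hypothetical example commands, spawned with no arguments. */
        char *cmds[2] = {"worker_a", "worker_b"};
        int maxprocs[2] = {2, 2};
        MPI_Info infos[2] = {MPI_INFO_NULL, MPI_INFO_NULL};

        MPI_Init(&argc, &argv);
        /* MPI_ARGVS_NULL replaces the two-dimensional argv array. */
        MPI_Comm_spawn_multiple(2, cmds, MPI_ARGVS_NULL, maxprocs, infos,
                                0, MPI_COMM_WORLD, &intercomm,
                                MPI_ERRCODES_IGNORE);
        MPI_Finalize();
        return 0;
    }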
- `MPI_Compare_and_swap`
- `MPI_Fetch_and_op`
- `MPI_Raccumulate`
- `MPI_Win_detach`
Thanks to Michael Knobloch and Takahiro Kawashima for bringing this
to our attention.
Only define the unique Fortran symbol, depending on:
- CAPS
- PLAIN
- SINGLE_UNDERSCORE
- DOUBLE_UNDERSCORE
and bind the f08 symbol to the uniquely defined C symbol.
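A minimal sketch of the idea in C (the `OMPI_FORTRAN_*` macro names
below are hypothetical, for illustration only):

    #include <mpi.h>

    /* Pick the one symbol spelling that matches the Fortran compiler's
     * name-mangling convention detected at configure time. */
    #if defined(OMPI_FORTRAN_CAPS)
    #  define mpi_barrier_f MPI_BARRIER
    #elif defined(OMPI_FORTRAN_PLAIN)
    #  define mpi_barrier_f mpi_barrier
    #elif defined(OMPI_FORTRAN_SINGLE_UNDERSCORE)
    #  define mpi_barrier_f mpi_barrier_
    #else /* OMPI_FORTRAN_DOUBLE_UNDERSCORE */
    #  define mpi_barrier_f mpi_barrier__
    #endif

    /* The uniquely defined C symbol; the f08 binding then binds to it. */
    void mpi_barrier_f(MPI_Fint *comm, MPI_Fint *ierr)
    {
        *ierr = (MPI_Fint) MPI_Barrier(MPI_Comm_f2c(*comm));
    }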
Use real data structures to make the code simpler.
(perl script written by Jeff)
The definition of MPI_T_pvar_get_index was incorrect. This commit
fixes the definition and adds a missing return code.
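For reference, the MPI-3.1 prototype (note the class argument, since
pvar names are only unique within a class):

    int MPI_T_pvar_get_index(const char *name, int var_class, int *pvar_index);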
Signed-off-by: Nathan Hjelm <hjelmn@me.com>
We've seen this a few times (e.g.,
http://www.open-mpi.org/community/lists/users/2015/06/27057.php
reported via @siegmargross). I'm not entirely sure why it happens --
the best I can come up with is a poorly-synchronized network
filesystem and/or a bug in "make". For example: this code hasn't
changed in forever, and it only happens to users *sometimes*.
Regardless, avoid the error altogether by removing the file before
making the sym link (it should be a sym link anyway -- if there's
something there, it should be safe to remove it before we re-create
the sym link that should be there in the first place).
(cherry picked from commit 0edd265ea045e649c9489e3cb8fdb657800d95c3)
This commit adds support for MPI_Aint_add and MPI_Aint_diff. These
functions are implemented as macros in C (explicitly allowed by
MPI-3.1). The Fortran implementations are a similar mess to the
MPI_Wtime implementations.
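A sketch of the macro approach (illustrative, not the exact Open MPI
definitions):

    /* MPI-3.1 defines the semantics in terms of char * arithmetic. */
    #define MPI_Aint_add(base, disp) \
        ((MPI_Aint) ((char *) (base) + (disp)))
    #define MPI_Aint_diff(addr1, addr2) \
        ((MPI_Aint) ((char *) (addr1) - (char *) (addr2)))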
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
This is the master version of @ggouaillardet's patch from
open-mpi/ompi-release#148 (there was a minor conflict to fix and
several fuzzings of line numbers).
Splitting communicators based on locality, using the underlying
hardware identification, has been enabled through the
MPI_Comm_split_type function.
Currently implemented splits are:
HWTHREAD
CORE
L1CACHE
L2CACHE
L3CACHE
SOCKET
NUMA
NODE
BOARD
HOST
CU
CLUSTER
However, only NODE is defined in the standard, which is why the
remaining splits are referred to using the OMPI_ prefix instead
of the standard MPI_ prefix.
I have tested this using --without-hwloc and --with-hwloc=<path>
which both give the same output.
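For example (a sketch; MPI_COMM_TYPE_SHARED is the standard
NODE-level split, while the OMPI_COMM_TYPE_* values are the Open MPI
extensions listed above):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        MPI_Comm node_comm, socket_comm;
        int node_rank;

        MPI_Init(&argc, &argv);

        /* Standard: one communicator per shared-memory node. */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);

        /* Open MPI extension: one communicator per socket. */
        MPI_Comm_split_type(MPI_COMM_WORLD, OMPI_COMM_TYPE_SOCKET, 0,
                            MPI_INFO_NULL, &socket_comm);

        MPI_Comm_rank(node_comm, &node_rank);
        printf("rank within node: %d\n", node_rank);

        MPI_Comm_free(&socket_comm);
        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }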
NOTE: I think something fishy is going on in the locality operators.
In my test program I couldn't get the correct split for these requests:
NUMA, SOCKET, L3CACHE
where I expected a full communicator but only got one.
What started as a simple ticket ended up reaching all the way up to
the MPI Forum.
It turns out that we are supposed to have MPI_SIZEOF for all Fortran
interfaces: mpif.h, the mpi module, and the mpi_f08 module.
It further turns out that to properly support MPI_SIZEOF, your Fortran
compiler *has* to support the INTERFACE keyword and ISO_FORTRAN_ENV. We
can't use "ignore TKR" functionality, because the whole point of
MPI_SIZEOF is that the implementation knows what type was passed to it
("ignore TKR" functionality, by definition, throws that information
away). Hence, we have to have an MPI_SIZEOF interface+implementation
for all intrinsic types, kinds, and ranks.
This commit therefore adds a perl script that generates both the
interfaces and implementations for MPI_SIZEOF in each of mpif.h, the
mpi module, and mpi_f08 module (yay consolidation!).
The perl script uses the results of some new configure tests:
* check if the Fortran compiler supports the INTERFACE keyword
* check if the Fortran compiler supports ISO_FORTRAN_ENV
* find the max array rank (i.e., dimension) that the compiler supports
If the Fortran compiler supports both INTERFACE and ISO_FORTRAN_ENV,
then we'll build the MPI_SIZEOF interfaces. If not, we'll skip
MPI_SIZEOF in mpif.h and the mpi module. Note that we won't build the
mpi_f08 module -- which would include the MPI_SIZEOF interfaces -- if
the Fortran compiler doesn't support INTERFACE, ISO_FORTRAN_ENV, and a
whole bunch of other modern Fortran stuff.
Since MPI_SIZEOF interfaces are now generated by the perl script, this
commit also removes all the old MPI_SIZEOF implementations (which were
laden with a zillion #if blocks).
cmr=v1.8.3
This commit was SVN r32764.
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down into OPAL.

All the components required for inter-process communication are currently deeply integrated into the OMPI layer. Several groups/institutions have expressed interest in a more generic communication infrastructure, without all the OMPI-layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTLs directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purposes.

UTK, with support from Sandia, developed a version of Open MPI where the entire communication infrastructure has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with a few exceptions (mainly BTLs that I have no way of compiling/testing). Thus, the completion of this RFC is tied to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.