* Don't overflow the internal datatype count.
Change the type of the count to be a size_t (this does not alter the total
size of the internal structures, so it has no impact on the ABI).
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
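As a hedged illustration of why the size does not change (the structs below are hypothetical, not the actual opal_datatype_t layout): the old 32-bit count is typically followed by alignment padding before the next pointer-sized member, so widening it to size_t simply absorbs that padding.

```c
#include <stddef.h>
#include <stdint.h>

struct count_before { uint32_t count; /* + padding up to pointer alignment */ void *desc; };
struct count_after  { size_t   count;                                        void *desc; };

/* C99 compile-time check that the total size is unchanged
 * (holds on both ILP32 and LP64 platforms). */
typedef char size_is_unchanged[(sizeof(struct count_before) ==
                                sizeof(struct count_after)) ? 1 : -1];
```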
* Optimize the datatype creation.
The internal array of counts of predefined types is now only created
when needed, which is either in a heterogeneous environment or when
one calls get_elements. This saves space and makes the convertor creation a
little faster in some cases.
Rearrange the fields in the datatype description structs.
The macro OPAL_DATATYPE_INIT_PTYPES_ARRAY had a bug, and the
static array was only partially created. All predefined types should
have the ptypes array created and initialized.
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
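A minimal sketch of the lazy-allocation idea described above, with illustrative names (the example_* identifiers are not the actual opal_datatype code):

```c
#include <stdlib.h>

#define EXAMPLE_PREDEFINED_MAX 32        /* assumed number of predefined types */

struct example_datatype {
    size_t  nbElems;                     /* total count of predefined elements  */
    size_t *ptypes;                      /* per-predefined-type counts, or NULL */
};

/* Build the per-type count array the first time it is needed, e.g. in a
 * heterogeneous run or from a get_elements-style query. */
static size_t *example_get_ptypes(struct example_datatype *dt)
{
    if (NULL == dt->ptypes) {
        dt->ptypes = calloc(EXAMPLE_PREDEFINED_MAX, sizeof(size_t));
        /* ... walk the datatype description and fill in the counts ... */
    }
    return dt->ptypes;
}
```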
* Fix the boundary computation.
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
* test/datatype: add test for short unpack on heterogeneous cluster
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
* Trying to reduce the cost of creating a convertor.
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
* Respect the unpack boundaries.
As Gilles suggested on #2535, the opal_unpack_general_function was
unpacking based on the requested count and not on the amount of packed
data provided.
Fixes #2535.
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
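A minimal sketch of the boundary rule (illustrative names, not the actual opal_unpack_general_function code): the number of elements unpacked is clamped to what the packed buffer actually provides, rather than trusting the requested count alone.

```c
#include <stddef.h>

static size_t elements_to_unpack(size_t requested_count,
                                 size_t packed_bytes_available,
                                 size_t bytes_per_element)
{
    size_t max_from_buffer = packed_bytes_available / bytes_per_element;
    return (requested_count < max_from_buffer) ? requested_count : max_from_buffer;
}
```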
MPI_AINT_ADD and MPI_AINT_DIFF are functions and must be declared as
externals with the proper return type. This is already done properly
in the mpi and mpi_f08 modules; these declarations for these functions
were only missing from mpif.h (i.e., mpif-externals.h).
Thanks to Aboorva Devarajan (@AboorvaDevarajan) for the bug report.
Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
Since Open MPI now requires C99, and the ptrdiff_t type is part of C99,
there is no longer any need for the abstract OPAL_PTRDIFF_TYPE type.
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
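Trivial usage sketch: with C99 guaranteed, plain ptrdiff_t from <stddef.h> is used directly where OPAL_PTRDIFF_TYPE appeared before.

```c
#include <stddef.h>

/* Byte distance between two addresses, expressed as a ptrdiff_t. */
static ptrdiff_t byte_offset(const void *base, const void *addr)
{
    return (const char *) addr - (const char *) base;
}
```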
As we changed the ABI (forcing a major release), we can limit
the size of the predefined communicators by moving the collective
structure outside the communicator. This might have a minimal,
but unnoticeable, impact on performance. This approach has been
discussed during the January 2017 devel meeting.
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
Signed-off-by: Joshua Hursey <jhursey@us.ibm.com>
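An illustrative sketch of the layout change (field and type names are hypothetical, not the actual ompi_communicator_t definition): the collectives structure is referenced through a pointer instead of being embedded, so the predefined communicators stay small at the cost of one extra indirection per collective call.

```c
struct example_coll_data;                  /* stands in for the collective selection data */

struct example_communicator {
    /* ... other communicator fields ... */
    struct example_coll_data *c_coll;      /* was: struct example_coll_data c_coll; */
};
```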
This commit adds the following Fortran named constants, which are
defined in the MPI standard but were missing in Open MPI.
- `MPI_LONG_LONG` (defined as a synonym of `MPI_LONG_LONG_INT`)
- `MPI_CXX_FLOAT_COMPLEX`
- `MPI_C_BOOL`
This commit also changes the value of the following Fortran
named constant for consistency.
- `MPI_C_COMPLEX` (`MPI_C_FLOAT_COMPLEX` is defined as a synonym of this)
Each needs a different solution described below.
For `MPI_LONG_LONG`:
The value of `MPI_LONG_LONG` is defined to be the same as that of
`MPI_LONG_LONG_INT` for the following reasons.
1. It is defined as a synonym of `MPI_LONG_LONG_INT` in
the MPI standard.
2. `MPI_LONG_LONG_INT` and `MPI_LONG_LONG` have the same value
for C in `mpi.h`.
3. `ompi_mpi_long_long` is not defined in
`ompi/datatype/ompi_datatype_module.c`.
For `MPI_CXX_FLOAT_COMPLEX`:
The existing `MPI_CXX_COMPLEX` is replaced with `MPI_CXX_FLOAT_COMPLEX`
because `MPI_CXX_FLOAT_COMPLEX` is the correct name defined in MPI-3.1
and `MPI_CXX_COMPLEX` is not defined in MPI-3.1 (nor in any older version).
For compatibility, however, `MPI_CXX_COMPLEX` is treated as a synonym
of `MPI_CXX_FLOAT_COMPLEX` in Open MPI.
For `MPI_C_BOOL`:
`MPI_C_BOOL` is newly added. The value which `MPI_C_COMPLEX` had
used (68) is assigned to it because that value is no longer
in use (described below) and it is a suitable position for a datatype
added in MPI-2.2.
For `MPI_C_COMPLEX`:
The existing `MPI_C_FLOAT_COMPLEX` is replaced with `MPI_C_COMPLEX`,
and `MPI_C_FLOAT_COMPLEX` is changed to have the same value.
In other words, `MPI_C_COMPLEX` becomes the canonical name and
`MPI_C_FLOAT_COMPLEX` an alias of it.
This is because the relation between these datatypes is the same as
the relation between `MPI_LONG_LONG_INT` and `MPI_LONG_LONG`, and
`MPI_LONG_LONG_INT` and `MPI_LONG_LONG` are implemented that way.
But in the datatype engine, we use `ompi_mpi_c_float_complex`
instead of `ompi_mpi_c_complex` as the variable name to keep
consistency with the other similar types such as
`ompi_mpi_c_double_complex` (see George's comment in open-mpi/ompi#1927).
We don't delete `ompi_mpi_c_complex` for now because it is used in
some other places in the Open MPI code. It may be cleaned up in the future.
In addition, `MPI_CXX_COMPLEX`, which was defined only in the Open MPI
Fortran binding, is added to `mpi.h` for the C binding.
This commit breaks binary compatibility of Fortran `MPI_C_COMPLEX`.
When this commit is merged into the v2.x branch, the change to
`MPI_C_COMPLEX` should be excluded.
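A hedged sanity check of the synonym pairs described above, assuming the usual Open MPI convention that predefined datatype handles are pointers and that synonyms resolve to the same object (so == comparison is meaningful):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    /* Both lines should print 1 if each synonym pair shares one predefined datatype. */
    printf("MPI_LONG_LONG == MPI_LONG_LONG_INT: %d\n",
           MPI_LONG_LONG == MPI_LONG_LONG_INT);
    printf("MPI_C_FLOAT_COMPLEX == MPI_C_COMPLEX: %d\n",
           MPI_C_FLOAT_COMPLEX == MPI_C_COMPLEX);
    MPI_Finalize();
    return 0;
}
```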
Add PMIx 2.0
Remove PMIx 1.1.4
Cleanup copying of component
Add missing file
Touchup a typo in the Makefile.am
Update the pmix ext114 component
Minor cleanups and resync to master
Update to latest PMIx 2.x
Update to the PMIx event notification branch latest changes
Without this modification, gfortran throws the following error
if these variables are used with `MPI_DIST_GRAPH_CREATE_ADJACENT`.
Error: There is no specific subroutine for the generic
'mpi_dist_graph_create_adjacent' at (1)
`MPI_ARGVS_NULL` should be a two-dimensional array.
Without this modification, gfortran throws the following error
if `MPI_ARGVS_NULL` is used for `MPI_COMM_SPAWN_MULTIPLE`.
Error: There is no specific subroutine for the generic
'mpi_comm_spawn_multiple' at (1)
- MPI_Compare_and_swap
- MPI_Fetch_and_op
- MPI_Raccumulate
- MPI_Win_detach
Thanks to Michael Knobloch and Takahiro Kawashima for bringing this
to our attention.
Only define the unique Fortran symbol depending on
- CAPS
- PLAIN
- SINGLE_UNDERSCORE
- DOUBLE_UNDERSCORE
and bind the f08 symbol to the uniquely defined C symbol.
Use real data structures to make the code simpler.
(perl script written by Jeff)
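A sketch of the single-definition idea in C, with hypothetical EXAMPLE_* macros and symbol names standing in for the real configure-generated ones: the implementation is compiled once, and only the wrapper matching the Fortran compiler's mangling convention is emitted; the mpi_f08 bindings then call that same C symbol.

```c
/* The one real C implementation. */
static void example_mpi_foo_f(int *ierr) { *ierr = 0; }

/* Emit only the mangled name that matches the Fortran compiler. */
#if   defined(EXAMPLE_FORTRAN_CAPS)
void EXAMPLE_MPI_FOO(int *ierr)   { example_mpi_foo_f(ierr); }
#elif defined(EXAMPLE_FORTRAN_PLAIN)
void example_mpi_foo(int *ierr)   { example_mpi_foo_f(ierr); }
#elif defined(EXAMPLE_FORTRAN_SINGLE_UNDERSCORE)
void example_mpi_foo_(int *ierr)  { example_mpi_foo_f(ierr); }
#else /* double underscore */
void example_mpi_foo__(int *ierr) { example_mpi_foo_f(ierr); }
#endif
```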
The definition of MPI_T_pvar_get_index was incorrect. This commit
fixes the definition and adds a missing return code.
Signed-off-by: Nathan Hjelm <hjelmn@me.com>
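For reference, the MPI-3.1 prototype takes the variable class in addition to the name; a minimal usage sketch follows (looking up a name that is not registered is expected to return MPI_T_ERR_INVALID_NAME):

```c
#include <mpi.h>
#include <stdio.h>

int main(void)
{
    int provided, idx, rc;

    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    /* int MPI_T_pvar_get_index(const char *name, int var_class, int *pvar_index) */
    rc = MPI_T_pvar_get_index("no_such_pvar", MPI_T_PVAR_CLASS_COUNTER, &idx);
    if (MPI_T_ERR_INVALID_NAME == rc) {
        printf("pvar not found, as expected\n");
    }
    MPI_T_finalize();
    return 0;
}
```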
We've seen this a few times (e.g.,
http://www.open-mpi.org/community/lists/users/2015/06/27057.php
reported via @siegmargross). I'm not entirely sure why it happens --
the best I can come up with is a poorly-synchronized network
filesystem and/or a bug in "make". For example: this code hasn't
changed in forever, and it only happens to users *sometimes*.
Regardless, avoid the error altogether by removing the file before
making the sym link (it should be a sym link anyway -- if there's
something there, it should be safe to remove it before we re-create
the sym link that should be there in the first place).
(cherry picked from commit 0edd265ea045e649c9489e3cb8fdb657800d95c3)
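The same remove-then-recreate pattern, sketched with POSIX calls purely for illustration (the actual fix lives in the make rule, not in C code):

```c
#include <unistd.h>

/* Remove whatever is at linkpath, then (re)create the sym link. */
static int replace_symlink(const char *target, const char *linkpath)
{
    (void) unlink(linkpath);           /* ignore failure if nothing is there */
    return symlink(target, linkpath);
}
```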
This commit adds support for MPI_Aint_add and MPI_Aint_diff. These
functions are implemented as macros in C (explicitly allowed by
MPI-3.1). The Fortran implementations are a similar mess to the
MPI_Wtime implementations.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
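A minimal sketch of how such macros can be written with MPI-3.1 semantics (illustrative only; the authoritative definitions are the ones in Open MPI's mpi.h):

```c
#include <mpi.h>

/* Sum and difference of absolute addresses, computed through char *
 * so the pointer arithmetic is well defined. */
#define EXAMPLE_MPI_Aint_add(base, disp) \
    ((MPI_Aint) ((char *) (base) + (disp)))
#define EXAMPLE_MPI_Aint_diff(addr1, addr2) \
    ((MPI_Aint) ((char *) (addr1) - (char *) (addr2)))
```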
This is the master version of @ggouaillardet's patch from
open-mpi/ompi-release#148 (there was a minor conflict to fix and
several fuzzings of line numbers).
Splitting communicators based on locality, using the underlying
hardware identification, has been enabled through the
MPI_Comm_split_type function.
The currently implemented splits are:
HWTHREAD
CORE
L1CACHE
L2CACHE
L3CACHE
SOCKET
NUMA
NODE
BOARD
HOST
CU
CLUSTER
However, only NODE is defined in the standard, which is why the
remaining splits are referred to using the OMPI_ prefix instead
of the standard MPI_ prefix.
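A minimal usage sketch: MPI_COMM_TYPE_SHARED is the standard split type (the shared-memory node level), while the other levels listed above are exposed through OMPI_-prefixed split-type values as an Open MPI extension.

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm node_comm;

    MPI_Init(&argc, &argv);
    /* Split MPI_COMM_WORLD into one communicator per shared-memory node. */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}
```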
I have tested this using --without-hwloc and --with-hwloc=<path>
which both give the same output.
NOTE: I think something fishy is going on in the locality operators.
In my test program I couldn't get the correct split for these requests:
NUMA, SOCKET, L3CACHE
where I expected a full communicator but only got one.
What started as a simple ticket ended up reaching the way up to the
MPI Forum.
It turns out that we are supposed to have MPI_SIZEOF for all Fortran
interfaces: mpif.h, the mpi module, and the mpi_f08 module.
It further turns out that to properly support MPI_SIZEOF, your Fortran
compiler *has* to support the INTERFACE keyword and ISO_FORTRAN_ENV. We
can't use "ignore TKR" functionality, because the whole point of
MPI_SIZEOF is that the implementation knows what type was passed to it
("ignore TKR" functionality, by definition, throws that information
away). Hence, we have to have an MPI_SIZEOF interface+implementation
for all intrinsic types, kinds, and ranks.
This commit therefore adds a perl script that generates both the
interfaces and implementations for MPI_SIZEOF in each of mpif.h, the
mpi module, and mpi_f08 module (yay consolidation!).
The perl script uses the results of some new configure tests:
* check if the Fortran compiler supports the INTERFACE keyword
* check if the Fortran compiler supports ISO_FORTRAN_ENV
* find the max array rank (i.e., dimension) that the compiler supports
If the Fortran compiler supports both INTERFACE and ISO_FORTRAN_ENV,
then we'll build the MPI_SIZEOF interfaces. If not, we'll skip
MPI_SIZEOF in mpif.h and the mpi module. Note that we won't build the
mpi_f08 module -- to include the MPI_SIZEOF interfaces -- if the
Fortran compiler doesn't support INTERFACE, ISO_FORTRAN_ENV, and a
whole bunch of other modern Fortran stuff.
Since MPI_SIZEOF interfaces are now generated by the perl script, this
commit also removes all the old MPI_SIZEOF implementations (which were
laden with a zillion #if blocks).
cmr=v1.8.3
This commit was SVN r32764.