This commit adds a new configure option: --enable-mpi1-compat. Without
this option we will no longer provide APIs, typedefs, and defines that
were removed from the standard in MPI-3.0. This option will exist for
one major release (Open MPI v4.x.x) and then the option and associated
code will be removed in Open MPI v5.x.x. Open MPI has already
internally prepared for this change. Please prepare your codes
accordingly.
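For illustration, a typical migration away from the removed MPI-1
constructs looks like this (a minimal sketch, not an exhaustive list of
the removed symbols):
int buf, lens[1] = {1};
MPI_Aint addr, lb, extent, disps[1] = {0};
MPI_Datatype types[1] = {MPI_INT}, dt;
/* Removed in MPI-3.0; only available again with --enable-mpi1-compat: */
MPI_Address(&buf, &addr);
MPI_Type_struct(1, lens, disps, types, &dt);
MPI_Type_extent(dt, &extent);
/* MPI-2 replacements that work with or without the option: */
MPI_Get_address(&buf, &addr);
MPI_Type_create_struct(1, lens, disps, types, &dt);
MPI_Type_get_extent(dt, &lb, &extent);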
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
Fixes issue #5069, which reports a BigMPI bug with the use of
MPI_Type_vector to construct very large datatypes (>2GB).
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
Per MPI-3.1 chapter 13.3:
"Derived etypes can be constructed by using any of the MPI
datatype constructor routines, provided all resulting typemap
displacements are non-negative and monotonically nondecreasing."
The same restriction applies to ftypes.
Add the OMPI_DATATYPE_CHECK_FOR_VIEW() macro, which checks that the
underlying opal_datatype_t is monotonic, on top of all the checks
performed in OMPI_DATATYPE_CHECK_FOR_RECV().
Since checking monotonicity is expensive, the check is only performed
when needed, and the result is cached by ompi_datatype_is_monotonic().
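As a rough sketch of the property being checked (over a flattened list
of typemap displacements, not the actual opal_datatype_t internals):
/* Sketch only: returns 1 if the displacements never decrease. */
static int typemap_is_monotonic(const MPI_Aint *disps, int count)
{
    int i;
    for (i = 1; i < count; i++) {
        if (disps[i] < disps[i - 1]) {
            return 0;
        }
    }
    return 1;
}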
Thanks Wei-keng Liao for the valuable feedback.
Thanks George for the guidance.
Refs. open-mpi/ompi#4682
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
This commit renames the arithmetic atomic operations in opal to
indicate that they return the new value, not the old value. This naming
differentiates these routines from new functions that return the old
value.
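To illustrate the distinction the new names encode (using the GCC
__atomic builtins here for illustration, not the opal API itself):
long v = 0;
/* "fetch then add": returns the OLD value (0 here) */
long old_val = __atomic_fetch_add(&v, 1, __ATOMIC_RELAXED);
/* "add then fetch": returns the NEW value (2 here); this is the
   behavior the renamed opal arithmetic atomics advertise */
long new_val = __atomic_add_fetch(&v, 1, __ATOMIC_RELAXED);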
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
This commit eliminates the old opal_atomic_bool_cmpset functions. They
have been replaced by the opal_atomic_compare_exchange_strong
functions.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
This commit renames the atomic compare-and-swap functions to indicate
the return value. This is in preparation for adding support for a
compare-and-swap that returns the old value. At the same time the
return type has been changed to bool.
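The new naming follows the C11 compare-exchange convention; as a hedged
illustration with <stdatomic.h> (not the opal signatures themselves):
/* requires <stdatomic.h> and <stdbool.h> */
atomic_int val = 5;
int expected = 5;
/* Returns bool: true if the exchange happened.  On failure, "expected"
   is overwritten with the value actually found, so the caller still
   sees the old value. */
bool swapped = atomic_compare_exchange_strong(&val, &expected, 7);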
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
The ompi_datatype_get_single_predefined_type_from_args() function
recurses down into a constructed type to identify which base datatype
it is built from, if it is built from a single type. But if the type
contains MPI_LB/MPI_UB, for example
lens[0] = 1;
lens[1] = 1;
disps[0] = 0;
disps[1] = 0;
types[0] = MPI_LB;
types[1] = MPI_INT;
MPI_Type_create_struct(2, lens, disps, types, &mydt);
then this function will see the base type MPI_LB as differing from MPI_INT
and will identify mydt as not being constructed from a single base type, so
the type will be rejected for calls like MPI_Accumulate.
I think those "metadata" types shouldn't result in rejection like that, and
the above mydt should still be identified as having a single base type
of MPI_INT.
Addition: bosilca wanted another change discussed here
https://github.com/open-mpi/ompi/pull/3609
relating to the calculation for "count" after identifying the
predefined_type that was being used.
Signed-off-by: Mark Allen <markalle@us.ibm.com>
Example (using MPI_ORDER_C so the below has 6 rows of 4 ints to parcel out)
size = 4;
rank = 0;
ndims=2;
gsizes[0] = 6;
gsizes[1] = 4;
distribs[0] = MPI_DISTRIBUTE_CYCLIC;
distribs[1] = MPI_DISTRIBUTE_BLOCK;
dargs[0] = 2;
dargs[1] = 2;
psizes[0] = 2;
psizes[1] = 2;
MPI_Type_create_darray(size, rank, ndims,
gsizes, distribs, dargs, psizes,
MPI_ORDER_C, MPI_INT, &mydt);
Expectation for the layout:
inner dimension (1) is
4 items (ints) distributed block over 2 ranks with 2 items each
eg for rank 0: [ x x . . ]
outer dimension (0) is:
6 items (the above [ x x . .]) cyclic over 2 ranks with 2 items each
eg for rank 0:
[ x x . . ] : offset=0 bytes=8
[ x x . . ] : offset=16 bytes=8
[ . . . . ]
[ . . . . ]
[ x x . . ] : offset=64 bytes=8
[ x x . . ] : offset=80 bytes=8
Or more specifically a stream of ints 0,1,2,3,4,5,6,7 sent into that
type should be
[ 0 1 . . ]
[ 2 3 . . ]
[ . . . . ]
[ . . . . ]
[ 4 5 . . ]
[ 6 7 . . ]
The data was being laid out, though, as
[ 0 1 2 3 ]
[ . . . . ]
[ . . . . ]
[ . . . . ]
[ 4 5 6 7 ]
[ . . . . ]
because the recursive construction inside the block() function (which
creates the smaller row datatype [ x x . . ]) wasn't setting the extent
of that type.
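The user-level analogue of the missing step is resizing the row type so
that its extent spans the full row; a minimal sketch (hypothetical
variable names) of what that looks like for the example above:
MPI_Datatype row, row_resized;
/* rank 0's share of one row: 2 ints out of the 4-int row */
MPI_Type_contiguous(2, MPI_INT, &row);
/* stretch the extent to the full row so that repeating the type in the
   outer (cyclic) dimension advances to the next row, not by 2 ints */
MPI_Type_create_resized(row, 0, 4 * (MPI_Aint)sizeof(int), &row_resized);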
Signed-off-by: Mark Allen <markalle@us.ibm.com>
Convert the predefined MPI object padding to a fixed number of bytes
(vs. a multiple of sizeof(void*)) so that the padding is the same size
between 32 and 64 bit builds. I.e., we won't have a situation where
we've run out of padding in 32 bit builds but still have more space
available in 64 bit builds.
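Schematically (hypothetical struct and sizes, not the actual ompi
definitions), the change is from pointer-size-relative padding to an
absolute byte count:
struct base_obj { int dummy; };   /* stands in for the real object */
/* before: 64 bytes of padding on 32-bit builds, 128 on 64-bit builds */
struct predefined_before {
    struct base_obj super;
    char padding[16 * sizeof(void *)];
};
/* after: the same absolute amount of headroom on every build */
struct predefined_after {
    struct base_obj super;
    char padding[128];
};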
Fixes #3610
Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
* Don't overflow the internal datatype count.
Change the type of the count to a size_t (this does not alter the total
size of the internal structures, so it has no impact on the ABI).
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
* Optimize the datatype creation.
The internal array of counts of predefined types is now only created
when needed, which is either in a heterogeneous environment, or when
one calls get_elements. It saves space and makes the convertor creation a
little faster in some cases.
Rearrange the fields in the datatype description structs.
The macro OPAL_DATATYPE_INIT_PTYPES_ARRAY had a bug, and the
static array was only partially created. All predefined types should
have the ptypes array created and initialized.
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
* Fix the boundary computation.
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
* test/datatype: add test for short unpack on heterogeneous cluster
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
* Trying to reduce the cost of creating a convertor.
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
* Respect the unpack boundaries.
As Gilles suggested on #2535, the opal_unpack_general_function was
unpacking based on the requested count and not on the amount of packed
data provided.
Fixes #2535.
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
Since Open MPI now requires a C99 compiler, and the ptrdiff_t type is
part of C99, there is no longer any need for the abstract
OPAL_PTRDIFF_TYPE type.
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
* Complete rewrite of opal_pointer_array
Instead of a cache-oblivious linear search, use a bit array
to speed up the management of the free space. As a result we
slightly increase the memory used by the structure, but we get a
significant boost in performance.
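A rough sketch of the bookkeeping idea (not the actual
opal_pointer_array code): one word of the bit array covers 64 slots, a
set bit marks a free slot, and finding free space becomes a word scan
plus a count-trailing-zeros instead of a slot-by-slot walk.
/* Sketch only; needs <stdint.h>; assumes GCC/clang __builtin_ctzll. */
static int find_free_slot(const uint64_t *free_bits, int num_words)
{
    int w;
    for (w = 0; w < num_words; w++) {
        if (free_bits[w] != 0) {
            return w * 64 + __builtin_ctzll(free_bits[w]);
        }
    }
    return -1;   /* no free slot */
}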
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
* Do not register datatypes in the f2c translation table.
The registration is now done up in the Fortran layer, by
forcing a call to MPI_Type_c2f.
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
Array sizes of `array_of_gsizes`, `array_of_distribs`, `array_of_dargs`,
and `array_of_psizes` parameters of the `ompi_datatype_create_darray`
function (and `MPI_TYPE_CREATE_DARRAY`) are all `ndims`.
`ndims` is `i[2]`, not `i[0]`. See MPI-3.1 p.122.
Because the `__ompi_datatype_create_from_args` function is used by
pt2pt OSC, using a datatype created by `MPI_TYPE_CREATE_DARRAY` for
`MPI_(R)(GET_)ACCUMULATE` caused a segmentation fault or other failure
on a target process.
Signed-off-by: KAWASHIMA Takahiro <t-kawashima@jp.fujitsu.com>
This commit fixes errors in the lb and extent of darray datatypes. For
these datatypes the lb should be the start offset of the rank's data
in the array and the extent should be the size of the entire
datatype. In master the lb was always 0 and the extent was always too
small. This commit updates the call to opal_datatype_resize to set the
correct lb and fixes the extent calculation.
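With the fix, the values can be checked at the MPI level (hypothetical
handle name):
MPI_Aint lb, extent;
MPI_Type_get_extent(darray_type, &lb, &extent);
/* lb     : offset of this rank's first element in the global array
   extent : size of the entire global array
            (e.g. gsizes[0] * gsizes[1] * sizeof(int) for a 2D int array) */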
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
The name of `MPI_INTEGER16` obtained using `MPI_TYPE_GET_NAME`
from a Fortran program was incorrect (`MPI_INTEGER8` was returned)
when `INTEGER*16` is not supported by the compiler.
This bug affects only the Fortran binding because `MPI_INTEGER16`
is not defined in `mpi.h` if a compiler does not support it.
This commit adds the following Fortran named constants which are
defined in the MPI standard but are missing in Open MPI.
- `MPI_LONG_LONG` (defined as a synonym of `MPI_LONG_LONG_INT`)
- `MPI_CXX_FLOAT_COMPLEX`
- `MPI_C_BOOL`
And this commit also changes the value of the following Fortran
named constant for consistency.
- `MPI_C_COMPLEX`
(`MPI_C_FLOAT_COMPLEX` is defined as a synonym of this)
Each needs a different solution described below.
For `MPI_LONG_LONG`:
`MPI_LONG_LONG` is defined to have the same value as
`MPI_LONG_LONG_INT` for the following reasons.
1. It is defined as a synonym of `MPI_LONG_LONG_INT` in
the MPI standard.
2. `MPI_LONG_LONG_INT` and `MPI_LONG_LONG` have the same value
for C in `mpi.h`.
3. `ompi_mpi_long_long` is not defined in
`ompi/datatype/ompi_datatype_module.c`.
For `MPI_CXX_FLOAT_COMPLEX`:
The existing `MPI_CXX_COMPLEX` is replaced with `MPI_CXX_FLOAT_COMPLEX`
because `MPI_CXX_FLOAT_COMPLEX` is the correct name defined in MPI-3.1
and `MPI_CXX_COMPLEX` is not defined in MPI-3.1 (or any older version).
But for compatibility, `MPI_CXX_COMPLEX` is treated as a synonym
of `MPI_CXX_FLOAT_COMPLEX` on Open MPI.
For `MPI_C_BOOL`:
`MPI_C_BOOL` is newly added. The value which `MPI_C_COMPLEX` had
used (68) is assigned to it because that value is no longer in use
(described later) and it is a suitable position for a datatype
added in MPI-2.2.
For `MPI_C_COMPLEX`:
Existing `MPI_C_FLOAT_COMPLEX` is replaced with `MPI_C_COMPLEX`
and `MPI_C_FLOAT_COMPLEX` is changed to have the same value.
In other words, make `MPI_C_COMPLEX` the canonical name and
make `MPI_C_FLOAT_COMPLEX` an alias of it.
This is because the relation between these datatypes is the same as
the relation between `MPI_LONG_LONG_INT` and `MPI_LONG_LONG`, and
`MPI_LONG_LONG_INT` and `MPI_LONG_LONG` are implemented that way.
But in the datatype engine, we use `ompi_mpi_c_float_complex`
instead of `ompi_mpi_c_complex` as a variable name to keep
the consistency with the other similar types such as
`ompi_mpi_c_double_complex` (see George's comment in open-mpi/ompi#1927).
We don't delete `ompi_mpi_c_complex` now because it is used in
some other places in the Open MPI code. It may be cleaned up in the future.
In addition, `MPI_CXX_COMPLEX`, which was defined only in the Open MPI
Fortran binding, is added to `mpi.h` for the C binding.
This commit breaks binary compatibility of Fortran `MPI_C_COMPLEX`.
When this commit is merged into v2.x branch, the change of
`MPI_C_COMPLEX` should be excluded.
According to MPI-3.1 P.122, `ni` for `MPI_COMBINER_DARRAY`
should be `4*ndims+4`, not `4*size+4`.
This bug may cause SEGV if `size` is smaller than `ndims`
when the darray is used for one-sided communication (pt2pt OSC).
This bug was introduced in open-mpi/ompi@79b13f36 (when darray
became a first class citizen and the `a_i` index of darray was
shifted by 2). The corresponding `MPI_Type_create_darray()`
function sets the right value, so we don't need to update that function.
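For reference, the count the standard mandates can be observed with
MPI_Type_get_envelope (hypothetical handle name):
int ni, na, nd, combiner;
MPI_Type_get_envelope(darray_type, &ni, &na, &nd, &combiner);
/* combiner == MPI_COMBINER_DARRAY and ni == 4*ndims+4
   (12 for ndims == 2), independent of "size" */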
According to MPI-3.1 P.121, `ni` for `MPI_COMBINER_HINDEXED_BLOCK`
should be `2`, not `2 + count`.
This bug was introduced in 113b45b4 (when `MPI_Type_create_hindexed_block`
support was added to Open MPI) and partially fixed in 7f5314ee and 8de93982.
This commit fixes the remaining part.
This bug probably has no user impact; it only consumes a bit more memory.
Add checks to bail out if our precomputed value is less
than needed (we are already at fault).
bot:milestone:v1.10.3
bot:milestone:v2.0
bot:label:bug
bot:assign: @ggouaillardet
When building an empty datatype (i.e., size = 0) because the count of
included datatypes is 0, be less strict about what the arguments are
(allow NULL pointers).
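For example, the following zero-count construction, where the array
arguments are never dereferenced, is now accepted:
MPI_Datatype empty;
/* count == 0: the array arguments are never read, so NULL is allowed */
MPI_Type_create_struct(0, NULL, NULL, NULL, &empty);
MPI_Type_commit(&empty);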
MPI_LONG_LONG_INT is a named predefined datatype, so its name is now MPI_LONG_LONG_INT
MPI_LONG_LONG is a synonym of MPI_LONG_LONG_INT, and its name is also MPI_LONG_LONG_INT
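For instance, the reported name can be checked from C:
char name[MPI_MAX_OBJECT_NAME];
int len;
MPI_Type_get_name(MPI_LONG_LONG, name, &len);
/* name is "MPI_LONG_LONG_INT": the synonym reports the canonical name */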
* datatype: Fix an incorrect datatype name of `MPI_UNSIGNED`
The name of the predefined datatype for C `unsigned int` obtained by
`MPI_TYPE_GET_NAME` should be `MPI_UNSIGNED`, not `MPI_UNSIGNED_INT`.
* datatype: Fix incorrect datatype names of `MPI_C_BOOL` and `MPI_CXX_*`
The names of predefined datatypes returned by `MPI_TYPE_GET_NAME` are:
after this commit (correct)   | before this commit (incorrect)
------------------------------+-------------------------------
MPI_C_BOOL                    | MPI_BOOL
MPI_CXX_BOOL                  | MPI_BOOL
MPI_CXX_FLOAT_COMPLEX         | MPI_C_FLOAT_COMPLEX
MPI_CXX_DOUBLE_COMPLEX        | MPI_C_DOUBLE_COMPLEX
MPI_CXX_LONG_DOUBLE_COMPLEX   | MPI_C_LONG_DOUBLE_COMPLEX
* datatype: Fix an incorrect datatype name of `MPI_2DOUBLE_PRECISION`
The name of the predefined datatype for two Fortran `double precision`
values obtained by `MPI_TYPE_GET_NAME` should be `MPI_2DOUBLE_PRECISION`,
not `MPI_2DBLPREC`.
This bug was caused by setting the name to `opal_datatype_t::name`
instead of `ompi_datatype_t::name`.
* datatype: Fix `MPI_UNSIGNED_CHAR` internal flag
`MPI_UNSIGNED_CHAR` is an integer type.
* ompi/cxx: Fix C++ `MPI::LONG_DOUBLE_INT` definition
Just a typo fix. Without this fix, `MPI::MAX_LOC` and `MPI::MIN_LOC`
cannot be used with `MPI::LONG_DOUBLE_INT` in C++ programs.
I know the C++ binding is obsolete, but fixing this is harmless.
* Add FUJITSU copyright
This commit makes ompi_datatype_get_pack_description thread safe. The
call is used by osc/pt2pt to send the packed description to remote
peers. Before this commit, if MPI_THREAD_MULTIPLE is enabled and the
user uses MPI_Put, MPI_Get, etc., we could hit a race where multiple
threads attempt to store the packed description on the datatype. Since
the code in question is not performance-critical, the threading fix
uses opal_atomic_* calls instead of bothering with OPAL_THREAD_*.
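The fix amounts to a publish-once pattern; a minimal sketch of the idea
(GCC __atomic builtin and hypothetical helper names, not the actual
opal calls):
static void *cached_desc = NULL;   /* hypothetical per-datatype cache slot */

void *mine = pack_description(datatype);   /* hypothetical helper */
void *expected = NULL;
if (!__atomic_compare_exchange_n(&cached_desc, &expected, mine,
                                 0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)) {
    /* another thread published first: drop our copy and reuse theirs */
    free(mine);   /* needs <stdlib.h> */
    mine = expected;
}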
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
This commit does two things. It removes checks for C99-required
headers (stdlib.h, string.h, signal.h, etc.). Additionally, it removes
definitions for required C99 types (intptr_t, int64_t, int32_t, etc.).
Signed-off-by: Nathan Hjelm <hjelmn@me.com>
data that must be aligned (aka the displacement). All other
cases do not require special alignments, and are treated
normally.
Fix the comment regarding the alignment requirements.