- MPI_Compare_and_swap
- MPI_Fetch_and_op
- MPI_Raccumulate
- MPI_Win_detach
Thanks to Michael Knobloch and Takahiro Kawashima for bringing this to our attention.
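For context, here is a minimal standalone sketch of two of the operations listed above, written against the standard MPI-3 API (illustrative only, not code from this commit):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Expose one int per process through an RMA window. */
    int counter = 0;
    MPI_Win win;
    MPI_Win_create(&counter, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);

    /* MPI_Fetch_and_op: atomically add 1 to rank 0's counter and
     * fetch its previous value. */
    int one = 1, prev;
    MPI_Fetch_and_op(&one, &prev, MPI_INT, 0, 0, MPI_SUM, win);

    /* MPI_Compare_and_swap: if rank 0's counter equals 'compare',
     * replace it with 'swap'; the old value lands in 'old'. */
    int compare = 1, swap = 42, old;
    MPI_Compare_and_swap(&swap, &compare, &old, MPI_INT, 0, 0, win);

    MPI_Win_unlock(0, win);

    printf("rank %d: fetched %d, cas saw %d\n", rank, prev, old);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```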
Commit open-mpi/ompi@1a3597aam changed the type of the `convertor`
variable from `ompi_osc_base_convertor_t` (which contained an
`opal_convertor_t`) to a plain `opal_convertor_t`. Hence, the
memchecker logic that marked the inner convertor of the
`ompi_osc_base_convertor_t` as initialized is now unnecessary.
With certain datatypes, the opal_datatype_unpack method does not work
for performing the accumulate operation. This commit modifies the
accumulate code in the osc base to use opal_convertor_raw instead.
Fixes #385
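A rough sketch of the opal_convertor_raw pattern, with a hypothetical apply_op() standing in for the actual reduction; the real osc base code differs in detail:

```c
#include <sys/uio.h>
#include "opal/datatype/opal_convertor.h"

/* Hypothetical helper: reduce one contiguous segment of raw target
 * memory with the corresponding piece of the packed source buffer. */
void apply_op(void *target, const void *source, size_t bytes);

/* Walk the (possibly non-contiguous) target datatype with
 * opal_convertor_raw and apply the operation segment by segment,
 * instead of unpacking with opal_datatype_unpack. */
static void accumulate_raw(opal_convertor_t *convertor, const char *packed)
{
    struct iovec iov[32];
    uint32_t iov_count;
    size_t length;
    size_t offset = 0;
    int done;

    do {
        iov_count = 32;
        /* Fills iov[] with pointers into the target buffer; returns 1
         * once the whole datatype has been traversed. */
        done = opal_convertor_raw(convertor, iov, &iov_count, &length);
        for (uint32_t i = 0; i < iov_count; ++i) {
            apply_op(iov[i].iov_base, packed + offset, iov[i].iov_len);
            offset += iov[i].iov_len;
        }
    } while (!done);
}
```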
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down into OPAL.

All the components required for inter-process communication are currently deeply integrated in the OMPI layer. Several groups/institutions have expressed interest in a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purposes.

UTK, with support from Sandia, developed a version of Open MPI where the entire communication infrastructure has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
It might be possible (I don't know) for a datatype to be made of a
contiguous block of a primitive datatype and have an lb. If this is
ever the case the code would have done the wrong thing. Add the lb in
to be safe.
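For illustration (not code from this commit), such a datatype is easy to construct: a single indexed block of a primitive type at a nonzero displacement is contiguous yet has a nonzero lower bound.

```c
#include <mpi.h>

/* One contiguous block of 4 ints whose first element sits one int
 * past the nominal base address: the type is "a contiguous block of
 * a primitive datatype", yet its lower bound is nonzero. Address
 * arithmetic that ignores the lb would start one element too early. */
MPI_Datatype make_shifted_contig(void)
{
    MPI_Datatype shifted;
    int blocklen = 4;
    int displ = 1;   /* in units of MPI_INT, so lb = sizeof(int) */

    MPI_Type_indexed(1, &blocklen, &displ, MPI_INT, &shifted);
    MPI_Type_commit(&shifted);
    return shifted;
}
```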
cmr=v1.8:reviewer=jsquyres
This commit was SVN r31283.
This fixes more issues identified by armci. More issues still remain and fixes are
coming for those as well.
cmr=v1.8:reviewer=jsquyres
This commit was SVN r31272.
This commit generalizes the case fix in ompi_osc_base_process_op
from r31204. There are two cases that needed to be handled:
- The target is a simple datatype (contiguous block of a primitive
type) but the origin is not. In this case we still need to pack
the origin data but we cannot rely on the convertor to do the
unpack (see r31204).
- Both the origin and target datatypes are simple datatypes. In this
case we can use ompi_op_reduce to do the accumulation without having
to pack the origin data.
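A rough sketch of this two-case dispatch, using hypothetical is_simple_datatype(), pack_origin(), and primitive_of() helpers in place of the real checks (ompi_op_reduce is the actual OMPI reduction entry point):

```c
#include <stdlib.h>
#include "ompi/op/op.h"
#include "ompi/datatype/ompi_datatype.h"

/* Hypothetical helpers standing in for the real osc base logic: */
int is_simple_datatype(ompi_datatype_t *dt); /* contiguous block of one primitive? */
void *pack_origin(const void *buf, ompi_datatype_t *dt, int count);
ompi_datatype_t *primitive_of(ompi_datatype_t *dt);

/* The target is already known to be simple when this runs. */
static void process_accumulate(ompi_op_t *op,
                               void *origin, ompi_datatype_t *origin_dt,
                               void *target, ompi_datatype_t *target_dt,
                               int count)
{
    if (is_simple_datatype(origin_dt)) {
        /* Case 2: both sides simple -- reduce directly, no packing. */
        ompi_op_reduce(op, origin, target, count, target_dt);
    } else {
        /* Case 1: non-simple origin -- pack the origin into a
         * contiguous buffer, then reduce element-wise with the
         * primitive type (the convertor cannot do the unpack side,
         * see r31204). */
        void *packed = pack_origin(origin, origin_dt, count);
        ompi_op_reduce(op, packed, target, count, primitive_of(target_dt));
        free(packed);
    }
}
```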
cmr=v1.8:ticket=trac:4449
This commit was SVN r31231.
The following SVN revision numbers were found above:
r31204 --> open-mpi/ompi@949abe45cd
The following Trac tickets were found above:
Ticket 4449 --> https://svn.open-mpi.org/trac/ompi/ticket/4449
Handle the case where the target datatype is a contiguous block
of the primitive datatype.
In this case we cannot use the convertor to run the accumulate
operation since the datatype is more or less a primitive type.
cmr=v1.8:ticket=trac:4449
This commit was SVN r31222.
The following Trac tickets were found above:
Ticket 4449 --> https://svn.open-mpi.org/trac/ompi/ticket/4449
This commit fixes the following issues:
- osc/rdma: The target side of an accumulate was using the target datatype
in the receive into the packed buffer. This conflicted with the way
the reduction is done into the target buffer. Changed the receive to use
the primitive datatype (see the sketch after this list).
- osc/base: The copy table was completely wrong. Fixed the table to match
the underlying datatypes (which are opal not ompi datatypes).
- osc/base: There is a problem using the optimized description. Fall back
on using the non-optimized description until we can understand what is
going wrong.
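Expressed in plain MPI terms for illustration (the real code operates on internal fragments, not MPI_Recv; payload_bytes and primitive_dt are assumed inputs), the osc/rdma fix amounts to describing the packed buffer with the primitive type:

```c
#include <mpi.h>

/* The incoming accumulate payload is a packed, contiguous stream of
 * primitive elements. Describing it with the possibly strided target
 * datatype would scatter it before the reduction runs, so the receive
 * must use the primitive type; the reduction into the real target
 * buffer happens afterwards. */
static void recv_packed(void *packed_buf, size_t payload_bytes,
                        MPI_Datatype primitive_dt, int src, int tag,
                        MPI_Comm comm)
{
    int prim_size;
    MPI_Type_size(primitive_dt, &prim_size);

    /* Receive with the primitive type so the buffer stays contiguous. */
    MPI_Recv(packed_buf, (int)(payload_bytes / prim_size), primitive_dt,
             src, tag, comm, MPI_STATUS_IGNORE);
}
```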
cmr=v1.8:reviewer=jsquyres
This commit was SVN r31204.
be applied. Correct the MPI-level validation of the
MPI_Accumulate arguments.
Fix another potential problem not yet reported. If we convert the
MPI datatypes directly into OPAL datatypes, we restrict them to the
set of locally distinct types, which might not be identical on the
remote node if we are in a heterogeneous environment. So, for MPI
one-sided, deal only with MPI-level types and never simplify them
into OPAL types (at least in the args). The unfortunate
outcome is that we need to create the args for all datatypes.
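A trivial standalone illustration of why that simplification is unsafe: which C types (and hence which OPAL types) coincide is a local ABI property, e.g.:

```c
#include <stdio.h>

/* On an LP64 host 'long' is 8 bytes and aliases 'long long'; on an
 * ILP32 peer it is 4 bytes and aliases 'int'. A remote node therefore
 * cannot reconstruct the original MPI type from the locally
 * simplified OPAL type alone. */
int main(void)
{
    printf("sizeof(int)=%zu sizeof(long)=%zu sizeof(long long)=%zu\n",
           sizeof(int), sizeof(long), sizeof(long long));
    return 0;
}
```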
This commit was SVN r24466.
This commit splits the datatype engine into an MPI-specific part in
OMPI and a language-agnostic part in OPAL. The convertor is completely
moved into OPAL. This offers several benefits as described in RFC
http://www.open-mpi.org/community/lists/devel/2009/07/6387.php
namely:
- Fewer basic types (int* and float* types, boolean and wchar).
- Fixing the naming scheme to ompi-nomenclature.
- Usability outside of the ompi layer.
- Due to the fixed nature of simple opal types, their information is
completely known at compile time and therefore constified.
- With fewer datatypes (22), the actual sizes of bit-field types may
be reduced from 64 to 32 bits, allowing reorganizing the
opal_datatype structure, eliminating holes and keeping the data
required in the convertor (upon send/recv) in one cacheline (see the
sketch after this list). This has implications for the convertor
data structure and other parts of the code.
- Several performance tests have been run; the netpipe latency does
not change with this patch on Linux/x86-64 on the smoky cluster.
- Extensive tests have been done to verify correctness (no new
regressions) using:
1. mpi_test_suite on linux/x86-64 using clean ompi-trunk and
ompi-ddt:
a. running both trunk and ompi-ddt resulted in no differences
(except that MPI_SHORT_INT and MPI_TYPE_MIX_LB_UB now run
correctly).
b. with --enable-memchecker and running under valgrind (one buglet
found in the test suite when run with static builds; committed).
2. ibm testsuite on linux/x86-64 using clean ompi-trunk and ompi-ddt:
all passed (except that the dynamic/ tests failed, as on trunk/MTT).
3. compilation and usage of HDF5 tests on Jaguar using the PGI and
PathScale compilers.
4. compilation and usage on SiCortex.
- Please note that for the heterogeneous case (-m32 compiled
binaries/ompi), neither ompi-trunk nor the ompi-ddt branch would
successfully launch.
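A hypothetical illustration of the bit-field point above (not the actual opal_datatype layout): with 22 predefined types a type id fits in 5 bits, so fields that each needed a 64-bit word can be packed into one 32-bit word.

```c
#include <stdint.h>

/* Hypothetical packing: several descriptors squeezed into a single
 * 32-bit word, helping the hot part of the datatype description
 * (used by the convertor on send/recv) fit in one cacheline. */
struct dt_flags_packed {
    uint32_t type_id    : 5;  /* 22 predefined types fit in 5 bits */
    uint32_t committed  : 1;
    uint32_t contiguous : 1;
    uint32_t flags      : 25; /* remaining feature bits */
};
```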
This commit was SVN r21641.
not end up in OPAL
- Will post an updated patch for the OMPI_ALIGNMENT_ parts (within C).
This commit was SVN r21342.
The following SVN revision numbers were found above:
r21330 --> open-mpi/ompi@95596d1814
Move the Fortran type information into the OPAL namespace, eliminating
cases like opal/util/arch.c testing for ompi_fortran_logical_t.
As this is processor- and compiler-related information
(e.g. does the compiler/architecture support REAL*16)
this should have been on the OPAL layer.
- Unifies f77 code using MPI_Flogical instead of opal_fortran_logical_t
- Tested locally (Linux/x86-64) with the mpich and intel testsuites,
but would like to get this weekend's MTT output
- PLEASE NOTE: configure-internal macro-names and
ompi_cv_ variables have not been changed, so that
external platform (not in contrib/) files still work.
This commit was SVN r21330.