to make checks for MPI implementations fail in the right way ;-]
- check in configure.ac
- BINARY INCOMPATIBLE change to mpif-common.h
(if implemented the *right* way)
Actually, OMPI_F90_CHECK takes two arguments, not three.
- Only provide the corresponding C type if the optional Fortran
type is really supported;
otherwise pass ompi_mpi_unavailable to DECLARE_MPI_SYNONYM_DDT.
- Reviewed by George and Jeff
This commit was SVN r15133.
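A self-contained sketch of the pattern from the last bullet above; the type
names, configure macro, and registration function here are hypothetical
stand-ins, since the real DECLARE_MPI_SYNONYM_DDT macro lives inside Open
MPI's datatype engine and its exact signature is not shown in this commit.

    #include <cstdio>

    // Hypothetical stand-ins (not Open MPI's real symbols) for a datatype
    // descriptor, an optional Fortran type, and an "unavailable" sentinel
    // analogous to ompi_mpi_unavailable.
    struct datatype_desc { const char *name; bool available; };

    static datatype_desc fortran_real16 = {"MPI_REAL16", true};
    static datatype_desc unavailable    = {"unavailable", false};

    // Pretend configure result; in Open MPI this comes from configure.ac.
    #define HAVE_FORTRAN_REAL16 0

    // Stand-in for the synonym registration; the real code passes either
    // the real type or ompi_mpi_unavailable to DECLARE_MPI_SYNONYM_DDT.
    static void declare_synonym(const datatype_desc *d) {
        std::printf("synonym bound to %s (available: %d)\n",
                    d->name, d->available);
    }

    int main() {
    #if HAVE_FORTRAN_REAL16
        declare_synonym(&fortran_real16);   // optional Fortran type supported
    #else
        declare_synonym(&unavailable);      // otherwise: fail in a defined way
    #endif
        return 0;
    }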
for the C++ bindings in MPI-2 p276-278 to see that MPI_BOOL should
work with MPI_LAND, MPI_LOR, and MPI_LXOR. Thanks to Andy Selle for
pointing this out.
This commit was SVN r9200.
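For illustration, a small (untested) example of the combination those tables
allow, written against the since-removed MPI C++ bindings: a per-rank flag is
reduced over MPI::BOOL with MPI::LAND.

    #include <mpi.h>

    int main(int argc, char *argv[]) {
        MPI::Init(argc, argv);

        // Each rank contributes a boolean flag; a logical-AND reduction over
        // MPI::BOOL yields true on the root only if every rank passed.
        bool local_ok  = true;
        bool global_ok = false;
        MPI::COMM_WORLD.Reduce(&local_ok, &global_ok, 1,
                               MPI::BOOL, MPI::LAND, 0);

        MPI::Finalize();
        return 0;
    }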
- move files out of the toplevel include/ and etc/, moving them into the
sub-projects
- rather than including config headers via <project>/include,
include them via <project>
- require all headers to be included with a project prefix, with
the exception of the config headers ({opal,orte,ompi}_config.h,
mpi.h, and mpif.h)
This commit was SVN r8985.
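A short illustration of the include convention described above; the
particular headers are just examples, assuming the usual Open MPI source
layout.

    // Config headers keep their short names:
    #include "ompi_config.h"

    // Everything else is included with its project prefix:
    #include "opal/util/output.h"
    #include "ompi/communicator/communicator.h"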
implementation was not thread safe). See lengthy comment in
ompi/mpi/cxx/intercepts.cc::ompi_mpi_cxx_op_intercept() for a full
explanation.
This commit was SVN r8606.
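For context, the path in question is exercised by user-defined operations
created through the C++ bindings, roughly as in this sketch;
ompi_mpi_cxx_op_intercept() is the glue that forwards the C-level callback
to the user's C++ function.

    #include <mpi.h>

    // User-defined reduction with the C++ bindings' User_function signature.
    static void int_sum(const void *in, void *inout, int len,
                        const MPI::Datatype & /* dtype */) {
        const int *a = static_cast<const int *>(in);
        int *b = static_cast<int *>(inout);
        for (int i = 0; i < len; ++i) {
            b[i] += a[i];
        }
    }

    int main(int argc, char *argv[]) {
        MPI::Init(argc, argv);

        MPI::Op op;
        op.Init(int_sum, true /* commutative */);

        int local = MPI::COMM_WORLD.Get_rank();
        int total = 0;
        MPI::COMM_WORLD.Reduce(&local, &total, 1, MPI::INT, op, 0);

        op.Free();
        MPI::Finalize();
        return 0;
    }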
MPI_UNSIGNED_LONG_LONG, MPI_LONG_LONG, and MPI_LONG_LONG_INT --
although I already had implementations of all the relevant functions
for these types. Doh!
This commit was SVN r7944.
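For reference, the kind of call this enables, shown as a sketch against the
plain C API.

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        // Sum a 64-bit counter across all ranks; this is one of the
        // datatype/op combinations the commit above enables.
        long long local = rank, total = 0;
        MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (0 == rank) {
            std::printf("total = %lld\n", total);
        }

        MPI_Finalize();
        return 0;
    }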
REDUCE_SCATTER to more thoroughly check whether the datatype/op
combination is valid. If it is not, print a meaningful error message
indicating what specifically was wrong, rather than just "Invalid
MPI_Op" (therefore hopefully helping users track down where in the
code the problem is, and/or telling us that there's a reduction
operation combination that we don't support but should).
- The check for whether a datatype is intrinsic needed to be updated
-- it's not sufficient to check that dtype->id < DT_MAX_PREDEFINED;
you really need to check the PREDEFINED flag on the datatype.
Thanks to George for this fix (only intrinsics have a meaningful
value in dtype->id).
This commit was SVN r7923.
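A minimal, self-contained sketch of the corrected check; the struct, field,
and flag names are illustrative stand-ins patterned on the description above,
not the exact Open MPI identifiers.

    #include <cstdint>

    // Illustrative stand-in for the datatype bookkeeping described above.
    struct datatype {
        int32_t  id;     // only meaningful for predefined (intrinsic) types
        uint32_t flags;  // bit flags, including a "predefined" bit
    };

    static const uint32_t FLAG_PREDEFINED = 0x0001; // stand-in flag bit
    static const int32_t  MAX_PREDEFINED  = 64;     // stand-in for DT_MAX_PREDEFINED

    // Insufficient: a derived datatype can carry any value in id, so this
    // may misclassify it as intrinsic.
    static bool is_intrinsic_by_id(const datatype *d) {
        return d->id < MAX_PREDEFINED;
    }

    // Correct: test the PREDEFINED flag instead.
    static bool is_intrinsic_by_flag(const datatype *d) {
        return (d->flags & FLAG_PREDEFINED) != 0;
    }

    int main() {
        datatype user_dt = { 5, 0 };  // derived type: small id, flag not set
        // The id-based check wrongly says "intrinsic"; the flag check does not.
        return (is_intrinsic_by_id(&user_dt) &&
                !is_intrinsic_by_flag(&user_dt)) ? 0 : 1;
    }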
at the top-level MPI API function. This allows two kinds of
scenarios:
1. MPI_Ireduce(..., op, ...);
MPI_Op_free(op);
MPI_Wait(...);
For the non-blocking collectives that we're someday planning -- to
make them analogous to non-blocking point-to-point stuff.
2. Thread 1:
MPI_Reduce(..., op, ...);
Thread 2:
MPI_Op_free(op);
Granted, scenario #2 treads a fine line between a correct and an
erroneous MPI program, but it is possible (as long as the Op_free()
comes *after* MPI_Reduce() has started to execute). Case #1 is more
realistic, where the Op_free() could be executed in the same thread
or a different thread.
This commit was SVN r7870.
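A sketch of scenario #1 above at the application level, written against
today's C API (MPI_Ireduce has since been standardized in MPI-3). The point
is that the op handle can be freed right after the call because the library
retains its own reference until the reduction completes.

    #include <mpi.h>

    // User-defined reduction function (C signature).
    static void my_sum(void *in, void *inout, int *len, MPI_Datatype *dtype) {
        (void)dtype;
        int *a = (int *)in, *b = (int *)inout;
        for (int i = 0; i < *len; ++i) {
            b[i] += a[i];
        }
    }

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);

        MPI_Op op;
        MPI_Op_create(my_sum, 1 /* commutative */, &op);

        int local = 1, total = 0;
        MPI_Request req;
        MPI_Ireduce(&local, &total, 1, MPI_INT, op, 0, MPI_COMM_WORLD, &req);

        // Scenario #1: free the handle while the reduction may still be in
        // flight; the retained internal reference keeps the op alive.
        MPI_Op_free(&op);

        MPI_Wait(&req, MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }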