- Delete unnecessary header files using
contrib/check_unnecessary_headers.sh, after applying patches that
explicitly #include headers which would otherwise be "lost" because
they were previously pulled in only through one of the now-deleted
headers...
In total 817 files are touched.
In ompi/mpi/c/, #includes are moved up into the actual C file where
necessary (these are the only additional #include directives);
otherwise the change consists only of #include deletions (apart from
the additions mentioned above, required due to notifier...)
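
As an illustration only (hypothetical file name, not taken from the
actual patch), the kind of change this amounts to in ompi/mpi/c/ is:

    /* foo.c (hypothetical example file) */

    /* Before the cleanup, <string.h> was reached only transitively
     * through a helper header that is now deleted; the patch adds the
     * direct include to the C file that actually needs it: */
    #include <string.h>

    /* ... code that calls memset(), strlen(), etc. ... */
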
- To cover different MCA components (OpenIB, TM, ALPS), an earlier
version was successfully compiled (yesterday) on:
Linux locally using intel-11, gcc-4.3.2 and gcc-SVN + warnings enabled
Smoky cluster (x86-64 running Linux) using PGI-8.0.2 + warnings enabled
Lens cluster (x86-64 running Linux) using Pathscale-3.2 + warnings enabled
This commit was SVN r21096.

Add a new "sync" coll component. It is deactivated by default and is
activated by setting either of the following two MCA parameters to
values greater than 0:
* coll_sync_barrier_before
* coll_sync_barrier_after
If !_before is >0, then the sync coll component will insert itself
before the underlying collective operations and invoke a barrier
before every Nth collective operation (N == coll_sync_barrier_before).
Similarly for !_after. Note that N is a _per-communicator_ value, not
global to the MPI process.
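
As a minimal sketch of the counting idea only (hypothetical wrapper
and field names; the real component hooks into the coll framework
rather than wrapping MPI calls), assuming the parameters are set
through the usual MCA mechanism:

    /* Hypothetical illustration of "barrier before every Nth collective". */
    #include <mpi.h>

    static int sync_barrier_before = 50;   /* stand-in for coll_sync_barrier_before */

    struct comm_state {
        MPI_Comm comm;
        int before_count;                  /* per-communicator counter */
    };

    static int counted_bcast(struct comm_state *s, void *buf, int count,
                             MPI_Datatype dtype, int root)
    {
        if (sync_barrier_before > 0 &&
            ++s->before_count >= sync_barrier_before) {
            s->before_count = 0;
            MPI_Barrier(s->comm);          /* synchronize before the Nth collective */
        }
        return MPI_Bcast(buf, count, dtype, root, s->comm);
    }
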
If both are 0 (which is the default), this component returns NULL for
the comm query, meaning that it is not inserted into the coll module
stack.
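
A hedged sketch of that comm-query decision (simplified, hypothetical
signature and priority; not the component's actual source):

    struct sync_module { int before_count; int after_count; };

    static int mca_coll_sync_barrier_before = 0;   /* default: disabled */
    static int mca_coll_sync_barrier_after  = 0;   /* default: disabled */

    static struct sync_module *sync_comm_query(int *priority)
    {
        if (mca_coll_sync_barrier_before <= 0 &&
            mca_coll_sync_barrier_after  <= 0) {
            return NULL;            /* decline: not inserted into the coll stack */
        }
        static struct sync_module mod = { 0, 0 };
        *priority = 50;             /* made-up priority for this sketch */
        return &mod;                /* placeholder module object */
    }
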
The intent of this component is to provide a workaround for
applications with large numbers of short-message collectives that
can cause unbounded unexpected messages. Specifically, it is possible
for some iterative collective communication patterns to cause
unbounded unexpected messages. Forcing a barrier before or after
every Nth collective operation would prevent that behavior by forcing
applications to synchronize (and thereby consume any outstanding
unexpected messages caused by collectives on the same communicator).
Open MPI still needs to bound unexpected-message resource consumption
at the receiver, but this is a viable workaround for at least some
symptoms of the problem.
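
As an illustration only (made-up program, not from any real
application), a pattern like the following can let a fast root race
ahead of slower peers, so short eager broadcast fragments pile up as
unexpected messages until an occasional barrier drains the backlog:

    #include <mpi.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int rank, buf = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int i = 0; i < 100000; ++i) {
            if (rank != 0) {
                usleep(10);              /* non-root ranks lag behind the root */
            }
            MPI_Bcast(&buf, 1, MPI_INT, 0, MPI_COMM_WORLD);

            /* Roughly what coll/sync automates: a periodic barrier
             * (here every 1000th iteration) bounds the backlog. */
            if (i % 1000 == 0) {
                MPI_Barrier(MPI_COMM_WORLD);
            }
        }

        MPI_Finalize();
        return 0;
    }
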
Additionally, there has been anecdotal evidence of some applications
that "perfom better" when they put barriers after other collective
operations. This could be due to many factors -- including shortening
the unexpected message queue. Putting this component in Open MPI
allows people to try this with their own applications and give
real-world feedback on this kind of behavior.
This commit was SVN r20584.