The step used to iterate through the buffer was a function of true_extent instead of extent (see the sketch below).
This may or may not solve ticket #689, because I am still getting failures over btl mx, but I cannot reproduce failures over mtl mx or tcp.
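For illustration, a minimal self-contained sketch of the intended stepping (this is not the actual collective code; MPI_Type_get_extent() is just one way to obtain the extent):

    #include <mpi.h>

    /* Sketch only: visit each of 'count' elements in 'buf', stepping by the
     * datatype's extent (which includes lb/ub padding), not its true extent
     * (the span of the actual data). */
    static void visit_elements(void *buf, int count, MPI_Datatype dtype)
    {
        MPI_Aint lb, extent;
        MPI_Type_get_extent(dtype, &lb, &extent);

        for (int i = 0; i < count; ++i) {
            void *elem = (char *) buf + (MPI_Aint) i * extent;  /* not true_extent */
            (void) elem;  /* placeholder for the real per-element work */
        }
    }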
This commit was SVN r13459.
1. If the user has specified sched_yield, we simply do what we are told.
2. If they didn't specify anything, try to get the number of processors on this node. Note that we now get the number of local procs in our job that are sharing this node - that comes in through the proc callback and is stored in the ompi_proc_t structures.
3. If we can get the number of processors, compare that to the number of local procs from my job that are sharing my node. If the number of local procs exceeds the number of processors, set sched_yield to true. If not, be a hog and set sched_yield to false.
4. If we can't get the number of processors, default to conservative behavior and set sched_yield to true.
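A minimal self-contained sketch of these rules (the real code lives in MPI_Init and uses our own MCA parameter and proc data; sysconf(_SC_NPROCESSORS_ONLN) is just one way to ask for the processor count):

    #include <stdbool.h>
    #include <unistd.h>

    /* user_setting: -1 if the user said nothing, otherwise 0 or 1 */
    static bool decide_sched_yield(int user_setting, int num_local_procs)
    {
        if (user_setting >= 0) {
            return user_setting != 0;      /* rule 1: obey the user */
        }

        long num_processors = sysconf(_SC_NPROCESSORS_ONLN);
        if (num_processors <= 0) {
            return true;                   /* rule 4: unknown, be conservative */
        }

        /* rules 2-3: yield only if local procs oversubscribe the processors */
        return num_local_procs > num_processors;
    }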
Note that I have not yet dealt with the need to dynamically adjust this setting as more processes are added via comm_spawn. So far, we are *only* looking within our own job. Given that we have now moved this logic to mpi_init (and away from the orteds), it isn't yet clear to me how a process will be informed about the number of procs in *other* jobs that are also sharing this node.
Something to continue to ponder.
This commit was SVN r13430.
* The real fix: don't leave the OOB in blocking mode during comm_dyn_init(), since that means MPI events cannot be progressed while the event library is waiting for TCP traffic to come in.
* Add many comments explaining the reasons for the current ordering.
This commit was SVN r13422.
* Have the mpool size be based on MCW, not the number of procs in other jobs we know about. This solves the problem of the spawned job having a much bigger sm file than needed.
* Can't assume that "me" is in the list of procs passed to add_procs, so use slightly different logic and don't go through all of add_procs unless there's a proc in my job that isn't me.
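As a rough illustration of the second point, a self-contained sketch of the check (placeholder struct and fields; the real code inspects ompi_proc_t entries, not these simplified fields):

    #include <stdbool.h>
    #include <stddef.h>

    struct proc {
        int  jobid;   /* which job the proc belongs to (placeholder) */
        bool is_me;   /* true only for the calling process (placeholder) */
    };

    /* Only do the full add_procs work if the list contains a proc from my
     * own job that is not me. */
    static bool need_full_add_procs(const struct proc *procs, size_t nprocs,
                                    int my_jobid)
    {
        for (size_t i = 0; i < nprocs; ++i) {
            if (procs[i].jobid == my_jobid && !procs[i].is_me) {
                return true;
            }
        }
        return false;
    }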
This seems to greatly improve the situation, although MPI_INIT for the children (when there is more than one child) still seems slower than MPI_INIT for the parents when comparing 'n' children to 'n' parents. Hopefully that made sense ;)
This commit was SVN r13417.
of the first entry might not be the start of the user's buffer. This is
similar to what ompi_convertor_unpack does. This is the solution for
the test case attached to ticket #690.
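A toy sketch of the idea (the struct and names below are illustrative only, not the actual convertor internals): the source address of each segment comes from the segment's own displacement, so the first entry need not start at the beginning of the user's buffer.

    #include <stddef.h>
    #include <string.h>

    struct segment {
        ptrdiff_t disp;   /* displacement from the user's buffer */
        size_t    len;    /* bytes to copy for this segment */
    };

    static size_t pack(char *dst, const char *user_buf,
                       const struct segment *segs, size_t nsegs)
    {
        size_t packed = 0;
        for (size_t i = 0; i < nsegs; ++i) {
            /* use user_buf + disp; do not assume the first segment's data
             * starts right at user_buf */
            memcpy(dst + packed, user_buf + segs[i].disp, segs[i].len);
            packed += segs[i].len;
        }
        return packed;
    }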
Refs trac:690
This commit was SVN r13397.
The following Trac tickets were found above:
Ticket 690 --> https://svn.open-mpi.org/trac/ompi/ticket/690
not being able to take C function pointers for either the copy or the delete fn. Fix by overloading the Create_keyval methods.
Fixes trac #737 and #738. Reviewed by jsquyres.
* A couple of cxx tests in ompi-tests (winkeyval.cc & typekeyval.cc) will be re-enabled to serve as regression tests for this fix.
This commit was SVN r13391.
completed successfully, Bad Things(tm) could happen.
* Now we explicitly check orte_initialized (a new global in ORTE indicating whether we are between orte_init() and orte_finalize() or not) and react accordingly.
* If ORTE is initialized, use orte_system_info.nodename; otherwise,
use gethostname().
* Add loop protection to ensure that ompi_mpi_abort() is not invoked
multiple times recursively.
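A stripped-down sketch of the last point (the function and flag names are made up; the real ompi_mpi_abort() does considerably more work):

    #include <stdbool.h>
    #include <stdlib.h>

    void my_abort(int errcode)
    {
        static bool already_aborting = false;

        if (already_aborting) {
            /* abort was re-entered (something we call below failed and tried
             * to abort again); bail out immediately instead of recursing */
            _Exit(errcode);
        }
        already_aborting = true;

        /* ... print the error message, kill connected processes, etc. ... */

        exit(errcode);
    }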
This commit was SVN r13354.
know what my local rank is, and therefore set my paffinity ID as
appropriate. Specifically, we're no longer relying on the
special/secret mpi_paffinity_processor MCA parameter that the orted
would set for us.
This allows processor affinity to be used in environments where the
orted is not used (e.g., bproc, and someday in the hopefully not
too-distant future, SLURM).
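A self-contained sketch of deriving a local rank from per-proc node information (placeholder struct; the real code walks the proc information received during MPI_INIT):

    #include <stdbool.h>
    #include <stddef.h>

    struct peer {
        int  vpid;        /* rank within the job (placeholder name) */
        bool on_my_node;  /* true if the peer shares my node */
    };

    /* My local rank is the number of lower-ranked peers sharing my node;
     * that value can then serve as the processor ID to bind to. */
    static int my_local_rank(const struct peer *peers, size_t n, int my_vpid)
    {
        int local = 0;
        for (size_t i = 0; i < n; ++i) {
            if (peers[i].on_my_node && peers[i].vpid < my_vpid) {
                ++local;
            }
        }
        return local;
    }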
This commit was SVN r13352.
The following SVN revision numbers were found above:
r13351 --> open-mpi/ompi@a338b7e533
Over to Jeff now for modifying mpi_init accordingly.
Until Jeff makes his changes, nobody should see anything different as the new info just isn't used by anything!
This commit was SVN r13351.