The algorithm allows the user to specify the segment size to be used for computation/communication overlap.
The additional memory requirement for the algorithm is 2 x the segment size.
It performed well for (really) large message sizes over MX, and it passed the Intel Allreduce_c and Allreduce_loc_c tests.
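For illustration, the segmentation bookkeeping might look roughly like the sketch below (not the actual coll code; the helper and its names are made up):

    #include <stdlib.h>

    /* Sketch only: split a reduction of 'count' elements of 'type_size' bytes
     * each into segments of at most 'segment_bytes' bytes, and allocate the
     * two scratch segments that make up the extra "2 x segment size" memory
     * used for the computation/communication overlap. */
    static int setup_segments(size_t count, size_t type_size, size_t segment_bytes,
                              size_t *seg_count, size_t *num_segments,
                              void **scratch_a, void **scratch_b)
    {
        *seg_count = segment_bytes / type_size;          /* elements per segment */
        if (0 == *seg_count) *seg_count = 1;
        *num_segments = (count + *seg_count - 1) / *seg_count;

        *scratch_a = malloc(*seg_count * type_size);     /* receive into one ...         */
        *scratch_b = malloc(*seg_count * type_size);     /* ... while reducing the other */
        if (NULL == *scratch_a || NULL == *scratch_b) {
            free(*scratch_a);
            free(*scratch_b);
            return -1;
        }
        return 0;
    }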
This commit was SVN r13832.
Long unexpected messages were not generating PUT_START events
because the MD for long unexpected messages was configured to
ignore start events. When a long unexpected message arrived, it
traversed the match list, and ended up in the long unexpected MD.
While the long message was being consumed, the code called PtlMDUpdate()
to look for the message, but there was no event that indicated
that it had arrived. So, the update succeeded. Once the long
unexpected message was consumed, the PUT_END event showed up in the
event queue -- except the code wasn't looking for it anymore.
The PUT_START events exist specifically to handle ordering between
short and long unexpected messages, so PUT_START events can't be
ignored on long unexpected messages.
Modified the code to generate PUT_START events for both long and
short unexpected messages and handle matching up START and END
events appropriately.
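A rough sketch of the bookkeeping this implies (illustrative only; PUT_START_EVENT and PUT_END_EVENT below are stand-ins for the Portals PTL_EVENT_PUT_START/PTL_EVENT_PUT_END event types, and the message record is made up):

    /* Stand-ins for the Portals event types read from the event queue. */
    enum put_event { PUT_START_EVENT, PUT_END_EVENT };

    /* Minimal record for one unexpected message. */
    struct unexpected_msg {
        int start_seen;   /* PUT_START consumed -> message is "in flight" */
        int end_seen;     /* PUT_END consumed   -> data fully arrived     */
    };

    /* Both short and long unexpected messages now generate a START event, so
     * ordering between them can be established when START is seen, and a long
     * message whose END has not yet arrived is still accounted for. */
    static void handle_put_event(struct unexpected_msg *msg, enum put_event ev)
    {
        if (PUT_START_EVENT == ev) {
            msg->start_seen = 1;
        } else if (PUT_END_EVENT == ev) {
            msg->end_seen = 1;   /* only meaningful after the matching START */
        }
    }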
This commit was SVN r13746.
- the block sizes are computed in a more uniform way.
The first k blocks may be 1 element larger than the remaining blocks.
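For example, a sketch of that arithmetic (illustrative helper, not the actual code):

    /* Sketch: compute the size and starting offset of block 'b' when 'count'
     * elements are split into 'num_blocks' blocks and the first
     * k = count % num_blocks blocks carry one extra element. */
    static void block_layout(int count, int num_blocks, int b,
                             int *block_count, int *block_offset)
    {
        int base = count / num_blocks;
        int k    = count % num_blocks;

        *block_count  = (b < k) ? base + 1 : base;
        *block_offset = b * base + (b < k ? b : k);
    }

With count = 10 and num_blocks = 4, the blocks get 3, 3, 2, 2 elements starting at offsets 0, 3, 6, 8.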
The algorithm passed the Intel Allreduce_c and Allreduce_loc_c tests and IMB-3.2 Allreduce,
over TCP and both BTL and MTL MX (up to 128 processes).
The algorithm still only supports commutative operations.
This commit was SVN r13738.
- Implement the BML/r2 finalize function
- Clean up the BTL close routine
- Wire up a pml_base_verbose MCA parameter so you can actually watch the PML selection logic if you really want to.
- Fix a potential segfault in the selection logic.
ompi_pointer_array_get_item() may return NULL, so we have to check for it (see the sketch below).
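A sketch of the shape of the fix (the surrounding selection loop is simplified and the helper below is made up):

    /* Sketch: tolerate NULL entries, since ompi_pointer_array_get_item()
     * returns NULL for unused slots. */
    static void walk_components(ompi_pointer_array_t *components, int num_items)
    {
        for (int i = 0; i < num_items; ++i) {
            void *item = ompi_pointer_array_get_item(components, i);
            if (NULL == item) {
                continue;   /* empty slot -- dereferencing it would segfault */
            }
            /* ... query/select the component stored in 'item' as before ... */
        }
    }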
This commit was SVN r13734.
The following SVN revision numbers were found above:
r2 --> open-mpi/ompi@58fdc18855
outstanding requests can be limited using MCA parameters.
The implementation passed Intel, IMB-3.2, and mpi_test_suite tests over
TCP and MX up to 128 processes (64 nodes), on both 32-bit and 64-bit machines.
It is not activated by default, but it should be useful for really large
communicator sizes.
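The idea in plain MPI terms (a standalone sketch using point-to-point calls; max_outstanding stands in for the MCA parameter, and this is not the actual collective code):

    #include <stdlib.h>
    #include <mpi.h>

    /* Sketch: post at most 'max_outstanding' isends at a time, waiting for the
     * isend posted 'max_outstanding' messages ago before reusing its slot. */
    static void bounded_isend(const char *buf, int msg_size, int dest, int tag,
                              int num_msgs, int max_outstanding, MPI_Comm comm)
    {
        MPI_Request *reqs = malloc(sizeof(MPI_Request) * max_outstanding);

        for (int m = 0; m < num_msgs; ++m) {
            if (m >= max_outstanding) {
                /* slot still busy: wait for the oldest outstanding isend */
                MPI_Wait(&reqs[m % max_outstanding], MPI_STATUS_IGNORE);
            }
            MPI_Isend(buf + (size_t) m * msg_size, msg_size, MPI_CHAR,
                      dest, tag, comm, &reqs[m % max_outstanding]);
        }

        int still_posted = num_msgs < max_outstanding ? num_msgs : max_outstanding;
        MPI_Waitall(still_posted, reqs, MPI_STATUSES_IGNORE);
        free(reqs);
    }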
This commit was SVN r13720.
middle of a struct). Now we properly define and name the union
outside the struct and simply create an instance of it inside the
struct.
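For illustration (the names are made up):

    #include <stdint.h>

    /* The union is defined and named outside the struct ... */
    union example_value_u {
        int32_t ival;
        double  dval;
    };

    /* ... and the struct simply contains an instance of it, instead of an
     * anonymous union declared in the middle of the struct. */
    struct example_t {
        int type;
        union example_value_u value;
    };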
This commit was SVN r13709.
Implementation passed the Intel MPI_Reduce_c, MPI_Reduce_loc_c, and MPI_Reduce_user_c tests
over TCP, BTL MX, and MTL MX, as well as the mpi_test_suite Reduce tests (up to 64 nodes).
The algorithm is still not activated by the decision function (it will be in the near future).
This commit was SVN r13657.
when on the Cray machine (aka when the NULL GPR is in use).
This commit was SVN r13638.
The following SVN revision numbers were found above:
r13582 --> open-mpi/ompi@041beeb1b6
buffer fails. If a cb is already allocated but it is full, and allocation of an
additional cb fails, we spin waiting for the receiver to free space in the existing
cb.
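Roughly the behavior being described (a sketch with made-up helper names, not the actual sm code):

    /* Hypothetical stand-ins for the shared-memory circular buffer (cb) API. */
    struct cb;
    struct cb *cb_allocate(void);                    /* may return NULL on failure    */
    int        cb_write(struct cb *cb, void *frag);  /* returns 0 when the cb is full */

    /* Sketch: if the current cb is full, try to allocate another one; if that
     * also fails, spin until the receiver frees space in the existing cb. */
    static void queue_fragment(struct cb **cbp, void *frag)
    {
        while (0 == cb_write(*cbp, frag)) {
            struct cb *fresh = cb_allocate();
            if (NULL != fresh) {
                *cbp = fresh;          /* use the new cb and retry the write */
                continue;
            }
            /* allocation failed: keep spinning on the existing cb */
        }
    }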
This commit was SVN r13635.
adding new procs that the remote proc's PML is the same as our local PML.
Turns the hangs from mismatched PMLs into an abort, which is better,
I think.
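In spirit, the check looks like the following sketch (the string comparison is the essence; the abort path and how the remote PML name is obtained are simplified stand-ins):

    #include <stdio.h>
    #include <string.h>

    /* Sketch: compare the PML name published by a remote proc against our
     * selected PML and fail loudly instead of hanging later. */
    static int check_remote_pml(const char *local_pml, const char *remote_pml)
    {
        if (0 != strcmp(local_pml, remote_pml)) {
            fprintf(stderr, "PML mismatch: local '%s' vs remote '%s'\n",
                    local_pml, remote_pml);
            return -1;   /* caller aborts rather than hanging */
        }
        return 0;
    }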
This commit was SVN r13582.
open MTL MX and BTL MX and initialize them at the same time. The problem is
that both call mx_init and mx_finalize; the solution is to add an external entity
that does the init and finalize (based on ref counting).
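The shape of such a ref-counted wrapper (a sketch; the wrapper names and the single-threaded counter are illustrative, while mx_init()/mx_finalize() are the MX library calls):

    #include <myriexpress.h>   /* MX API: mx_init(), mx_finalize() */

    static int mx_refcount = 0;   /* illustrative; real code also needs locking */

    /* First caller actually initializes MX; later callers only bump the count. */
    static mx_return_t common_mx_initialize(void)
    {
        if (0 == mx_refcount++) {
            return mx_init();
        }
        return MX_SUCCESS;
    }

    /* Last caller actually finalizes MX. */
    static void common_mx_finalize(void)
    {
        if (0 == --mx_refcount) {
            mx_finalize();
        }
    }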
This commit was SVN r13576.
The step used to iterate through the buffer was a function of the true_extent instead of the extent.
This may or may not solve ticket #689 because I am still getting failures over BTL MX,
but I cannot reproduce failures over MTL MX or TCP.
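The distinction, in standard MPI terms (a sketch, not the ompi-internal datatype code): the stride between consecutive elements in a buffer is the extent, while the true extent only covers the bytes the data actually occupies.

    #include <mpi.h>

    /* Sketch: step through 'count' elements of 'dtype' starting at 'buf'.
     * The per-element stride must be the extent (which includes any LB/UB
     * padding), not the true extent. */
    static void for_each_element(void *buf, int count, MPI_Datatype dtype)
    {
        MPI_Aint lb, extent, true_lb, true_extent;
        MPI_Type_get_extent(dtype, &lb, &extent);
        MPI_Type_get_true_extent(dtype, &true_lb, &true_extent);

        char *ptr = (char *) buf;
        for (int i = 0; i < count; ++i, ptr += extent) {
            /* 'ptr' is the start of element i; its payload occupies
             * true_extent bytes beginning at ptr + true_lb. */
        }
    }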
This commit was SVN r13459.
* Have the mpool size be based on MCW, not the number of procs
in other jobs we know about. Solves the problem of
the spawned job having a much bigger sm file
than needed.
* Can't assume that "me" is in the list of procs
passed to add_procs, so we need to use slightly different
logic and not go through all of add_procs unless
there's a proc in my job that isn't me (see the sketch after this list).
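Something along these lines (a sketch; the proc fields are hypothetical stand-ins for the real ompi_proc_t/name fields):

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical proc descriptor. */
    struct proc_info {
        uint32_t jobid;
        uint32_t vpid;
    };

    /* Sketch: only take the full add_procs path if the incoming list contains
     * a proc from my own job that isn't me ("me" may not be in the list at all). */
    static int need_full_add_procs(const struct proc_info *procs, size_t nprocs,
                                   uint32_t my_jobid, uint32_t my_vpid)
    {
        for (size_t i = 0; i < nprocs; ++i) {
            if (procs[i].jobid == my_jobid && procs[i].vpid != my_vpid) {
                return 1;
            }
        }
        return 0;
    }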
This seems to greatly improve the situation, although
there still seems to be more of a slowdown through
MPI_INIT for the children (if there is more than one
child) than MPI_INIT for the parent if there are 'n'
children compared to 'n' parents. Hopefully that
made sense ;)
This commit was SVN r13417.
of the first entry might not be the start of the user's buffer. This is
similar to what ompi_convertor_unpack does. This is the solution for
the test case attached to ticket #690.
Refs trac:690
This commit was SVN r13397.
The following Trac tickets were found above:
Ticket 690 --> https://svn.open-mpi.org/trac/ompi/ticket/690