the data was buffered by the MX library. If that is the case, then we declare
the send completed and disable the completion event for the MX request.
This commit was SVN r12935.
protocol over the MX BTL. Now we have only one matching layer, the one in Open
MPI.
The problem is that when the unexpected handler is triggered, not all of the
message is in host memory. In the best case we get one MX fragment (an internal
MX fragment); in the worst case we get NULL. The only way to fit this with the
design of the PML is to force the eager protocol to the MX internal fragment
size, and to limit the send/receive protocol to the same size. Tests show
the outcome is not far from optimal (if the pipeline depth is increased
a little bit).
Set MX_PIPELINE_LOG in order to allow MX to use internal fragments of 4 KB.
This commit was SVN r12930.
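A minimal sketch of forcing that variable before MX initializes; the helper name and the value 12 (= log2(4096)) are assumptions about MX_PIPELINE_LOG's semantics, not the actual Open MPI code:

    #include <stdlib.h>

    /* Hypothetical helper, run before mx_init(): request 4 KB internal
     * fragments.  The value 12 (= log2(4096)) is an assumption about the
     * meaning of MX_PIPELINE_LOG. */
    static void force_mx_pipeline_log(void)
    {
        /* Do not override a value the user already set explicitly. */
        if (NULL == getenv("MX_PIPELINE_LOG")) {
            setenv("MX_PIPELINE_LOG", "12", 0);
        }
    }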
performance on a 2G Myrinet card, as it looks like pipelining the messages
in 1 MB chunks is faster than a simple send/receive. However, when using a 10G card
the send/receive will limit the maximum bandwidth to 2.5 Gb/s. The reason is
the scarce bus resources that have to be shared between the Myrinet hardware
and the memcpy operation. The PUT protocol removes the memcpy, so we now have a
true zero-copy mechanism. But there is no pipelining yet, as it looks like the
RDMA pipeline somehow disappeared from the OB1 PML ...
This commit was SVN r12925.
add_error_class and add_error_code files. Also fixed the update of the
lastusedcode attribute; all of it works pretty well according to my tests.
Please note: the testcode attached to bug 683 still reports some bugs. I
am however pretty sure that the testcode is wrong on those points:
- the standard says that the attribute MPI_LASTUSEDCODE has to be updated for
a new error_class or a new error_code. The test currently assumes that only
the add_error_code call changes the attribute value (see the sketch after this
entry).
- you have to comment out lines 73 and 74 in order to make the
test finish, since these lines check for the error string of non-existent
codes.
- at line 126, the error string of MPI_ERR_ARG is not "invalid argument" but a
little bit more, so the test thinks the output is wrong. So probably the test
has to be updated to match the actual error string of MPI_ERR_ARG.
Fixes trac:682
This commit was SVN r12913.
The following Trac tickets were found above:
Ticket 682 --> https://svn.open-mpi.org/trac/ompi/ticket/682
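As an illustration of the first point, a minimal sketch (not the ticket's testcode) that queries MPI_LASTUSEDCODE after each call; per the standard, both MPI_Add_error_class and MPI_Add_error_code should advance the attribute:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int newclass, newcode, *last, flag;

        MPI_Init(&argc, &argv);

        MPI_Add_error_class(&newclass);
        MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_LASTUSEDCODE, &last, &flag);
        printf("after MPI_Add_error_class: lastusedcode = %d\n", *last);

        MPI_Add_error_code(newclass, &newcode);
        MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_LASTUSEDCODE, &last, &flag);
        printf("after MPI_Add_error_code:  lastusedcode = %d\n", *last);

        MPI_Finalize();
        return 0;
    }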
It contains four algorithms:
Bruck (ceil(log P) steps), Recursive Doubling (log(P) steps for power-of-two numbers of processes), Ring (P-1 steps),
and Neighbor Exchange (P/2 steps for even numbers of processes).
All algorithms passed occ, IMB-2.3, and Intel verification tests from ompi-tests/ for up to 56 processes.
The fixed decision function is based on results collected over MX on the Grig cluster at
the University of Tennessee, Knoxville.
I have also added (and commented out) a copy of the MPICH2 decision function for allgather
(from their IJHPCA 2005 paper).
This commit was SVN r12910.
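For reference, a minimal sketch of the Ring variant (P-1 steps), assuming a contiguous datatype; this is illustrative only, not the tuned component's actual implementation:

    #include <mpi.h>
    #include <string.h>

    /* Ring allgather: in step s each rank forwards the block it received
     * in step s-1 to its right neighbor and receives a new block from its
     * left neighbor.  After P-1 steps every rank holds all P blocks. */
    int ring_allgather(const void *sbuf, void *rbuf, int count,
                       MPI_Datatype dtype, MPI_Comm comm)
    {
        int rank, size;
        MPI_Aint lb, extent;

        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);
        MPI_Type_get_extent(dtype, &lb, &extent);

        /* Place our own contribution in the right slot of rbuf. */
        memcpy((char *)rbuf + rank * count * extent, sbuf, count * extent);

        int right = (rank + 1) % size;
        int left  = (rank - 1 + size) % size;

        for (int step = 0; step < size - 1; step++) {
            int send_block = (rank - step + size) % size;
            int recv_block = (rank - step - 1 + size) % size;
            MPI_Sendrecv((char *)rbuf + send_block * count * extent, count,
                         dtype, right, 0,
                         (char *)rbuf + recv_block * count * extent, count,
                         dtype, left, 0, comm, MPI_STATUS_IGNORE);
        }
        return MPI_SUCCESS;
    }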
which can cause segfaults on shutdown. Calling mx_finalize() isn't enough
to shut down the thread, so we must close the endpoints as well.
Refs trac:513
This commit was SVN r12908.
The following Trac tickets were found above:
Ticket 513 --> https://svn.open-mpi.org/trac/ompi/ticket/513
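The intended shutdown order, sketched with an assumed component-level endpoint variable (names are illustrative, not the actual Open MPI code):

    #include <myriexpress.h>

    static mx_endpoint_t mx_endpoint;   /* hypothetical: opened during init */

    /* Close the endpoint first so MX's internal progress thread exits,
     * then finalize the library; mx_finalize() alone leaves the thread
     * running and can segfault at shutdown. */
    static void shutdown_mx(void)
    {
        mx_close_endpoint(mx_endpoint);
        mx_finalize();
    }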
make as well as gmake. To do this, added two new macros
and modified one dynamic variable.
Refs trac:533
This commit was SVN r12900.
The following Trac tickets were found above:
Ticket 533 --> https://svn.open-mpi.org/trac/ompi/ticket/533
rc (which is -1 or 4 if we hit this case) resulted in an odd error claiming that
a signal killed the proc (instead of a startup error, as is reality).
Instead, use the W_EXITCODE macro (if available) to build up an exit
code that has an error code for the exit status, but does not make it look
like the process died from a signal.
This commit was SVN r12890.
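A sketch of the idea, assuming W_EXITCODE(ret, sig) from <sys/wait.h> is available (the helper name is made up):

    #include <sys/wait.h>

    /* Build a wait()-style status word that reports "exited with code rc"
     * rather than "killed by a signal". */
    static int build_exit_status(int rc)
    {
    #ifdef W_EXITCODE
        return W_EXITCODE(rc, 0);   /* exit code in the high byte, no signal */
    #else
        return rc << 8;             /* same layout on most systems */
    #endif
    }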
environment variable, so it's not so useful (arg!). Instead, get the
hostname during opal_init(). We don't want to call gethostname() during
the signal handler. While we're at it, only print the machine name
so that the output isn't so wide.
Refs trac:538
This commit was SVN r12886.
The following Trac tickets were found above:
Ticket 538 --> https://svn.open-mpi.org/trac/ompi/ticket/538
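The pattern looks roughly like the following sketch (names are illustrative, not the actual opal symbols): cache the hostname once at init time so the signal handler only reads a static buffer and never calls gethostname():

    #include <unistd.h>

    static char cached_hostname[256] = "unknown";

    /* Called once from init-time code (an opal_init()-like context). */
    void cache_hostname(void)
    {
        gethostname(cached_hostname, sizeof(cached_hostname) - 1);
        cached_hostname[sizeof(cached_hostname) - 1] = '\0';
    }

    /* Safe to call from a signal handler: only reads a static buffer. */
    const char *get_cached_hostname(void)
    {
        return cached_hostname;
    }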
* Print out header that a signal was received
Refs trac:538
This commit was SVN r12884.
The following Trac tickets were found above:
Ticket 538 --> https://svn.open-mpi.org/trac/ompi/ticket/538
* Have darwin backtrace code return an error when buffer() is
called, since it is not implemented
* Print out hostname & pid when giving signal information
* If backtrace_buffer() is implemented, use that instead of
backtrace_print() and prefix the stack trace with the hostname
* Make the signal information printed be more user friendly
* If we're using the backtrace_buffer() code, don't print
the last two functions (which will be show_stackframe()
then backtrace_buffer()) so that users won't keep thinking
the error occurred inside Open MPI (sneaky, yes...)
Refs trac:538
This commit was SVN r12883.
The following Trac tickets were found above:
Ticket 538 --> https://svn.open-mpi.org/trac/ompi/ticket/538
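A sketch of the frame-skipping idea using the plain glibc backtrace API (the real code goes through the opal backtrace framework; the function name below is illustrative):

    #include <execinfo.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Print a trace starting at the frame that raised the signal, skipping
     * the two innermost frames (this function and the signal handler) so
     * the output does not point at Open MPI itself. */
    static void print_user_visible_trace(const char *hostname, int pid)
    {
        void *addrs[64];
        int n = backtrace(addrs, 64);
        char **syms = backtrace_symbols(addrs, n);

        if (NULL == syms) return;
        for (int i = 2; i < n; i++) {
            printf("[%s:%d] %s\n", hostname, pid, syms[i]);
        }
        free(syms);
    }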
if the remote architecture differs from the local architecture and the
btl doesn't support heterogeneous transport.
Refs trac:587
This commit was SVN r12879.
The following Trac tickets were found above:
Ticket 587 --> https://svn.open-mpi.org/trac/ompi/ticket/587
The udapl/openib/vapi/gm mpools are deprecated. The rdma mpool has a parameter,
mpool_rdma_rcache_size_limit, that allows limiting its size (default is 0 - unlimited).
This commit was SVN r12878.
process when creating a datatype from an internal description.
Refs trac:640
This commit was SVN r12877.
The following Trac tickets were found above:
Ticket 640 --> https://svn.open-mpi.org/trac/ompi/ticket/640
Move the req_mtl structure back to the end of each of the structures in
the CM PML. The req_mtl structure is cast into a mtl_*_request structure
for each MTL, which is larger than the req_mtl itself. The cast will cause
the *_request to overwrite parts of the heavy requests if the req_mtl
isn't the *LAST* thing on each structure (hence the comment). This was
moved as an optimization at some point, which caused buffered sends to fail...
Refs trac:669
This commit was SVN r12873.
The following SVN revision numbers were found above:
r12871 --> open-mpi/ompi@597598b712
The following Trac tickets were found above:
Ticket 669 --> https://svn.open-mpi.org/trac/ompi/ticket/669
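An illustrative sketch with made-up type and field names (not the real OMPI structures) of why req_mtl has to stay last: the PML reserves extra space after the request so the MTL's larger request type can extend past req_mtl; anything placed after req_mtl would be clobbered instead.

    typedef struct {
        void *completion_callback;
    } mca_mtl_request_t;

    typedef struct {                   /* what the MTL actually uses */
        mca_mtl_request_t super;       /* must be first in the MTL request */
        int   network_handle;          /* MTL-private state... */
        char  scratch[64];             /* ...larger than mca_mtl_request_t */
    } mtl_foo_request_t;               /* "foo" stands in for mx/psm/... */

    typedef struct {                   /* simplified CM PML "heavy" request */
        int    tag;
        void  *user_buffer;
        /* ... other PML fields ... */
        mca_mtl_request_t req_mtl;     /* MUST be last: the MTL writes
                                          sizeof(mtl_foo_request_t) bytes here,
                                          which spill into memory reserved
                                          after the struct, not onto other
                                          PML fields */
    } pml_cm_send_request_t;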
CM PML. The req_mtl structure is cast into a mtl_*_request structure for
each MTL, which is larger than the req_mtl itself. The cast will cause
the *_request to overwrite parts of the heavy requests if the req_mtl
isn't the *LAST* thing on each structure (hence the comment). This was
moved as an optimization at some point, which caused buffered sends to
fail...
Refs trac:669
This commit was SVN r12871.
The following Trac tickets were found above:
Ticket 669 --> https://svn.open-mpi.org/trac/ompi/ticket/669
1. For OSes without the dirent.d_type field, we were potentially
not initializing a filename string. This could result in a
directory not being cleaned up.
2. Potential memory leaks in filename strings that were allocated.
Refs trac:678
This commit was SVN r12853.
The following Trac tickets were found above:
Ticket 678 --> https://svn.open-mpi.org/trac/ompi/ticket/678
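A sketch of the fixed pattern (simplified, not the actual opal code): build the filename unconditionally, use stat() so the same code works on OSes without d_type, and free the string on every path:

    #include <dirent.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Remove the regular files in a directory. */
    static void remove_dir_entries(const char *dirname)
    {
        DIR *dp = opendir(dirname);
        if (NULL == dp) return;

        struct dirent *ent;
        while (NULL != (ent = readdir(dp))) {
            if (0 == strcmp(ent->d_name, ".") || 0 == strcmp(ent->d_name, ".."))
                continue;

            /* Always build the filename, even when d_type is unavailable. */
            size_t len = strlen(dirname) + strlen(ent->d_name) + 2;
            char *filename = malloc(len);
            if (NULL == filename) continue;
            snprintf(filename, len, "%s/%s", dirname, ent->d_name);

            struct stat st;
            if (0 == stat(filename, &st) && S_ISREG(st.st_mode)) {
                unlink(filename);
            }
            free(filename);              /* no leak on any path */
        }
        closedir(dp);
    }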