r12486, but somehow I missed it.
Update the pack and unpack functions for contiguous datatypes to minimize
their performance impact. Keep them as condensed as possible.
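For illustration, a contiguous pack in this spirit reduces to a bounded
memcpy plus pointer bookkeeping; the name and signature below are invented,
not the actual OMPI code:

    #include <stddef.h>
    #include <string.h>

    /* Pack as much of a contiguous region as fits in the output buffer. */
    static size_t contig_pack_sketch(char *dst, size_t dst_len,
                                     const char **src, size_t *remaining)
    {
        size_t len = (*remaining < dst_len) ? *remaining : dst_len;
        memcpy(dst, *src, len);
        *src       += len;   /* advance the source cursor */
        *remaining -= len;   /* track what is left to pack */
        return len;          /* bytes actually packed */
    }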
This commit was SVN r12513.
The following SVN revision numbers were found above:
r12486 --> open-mpi/ompi@8746369338
useful things are now done without calling any external function,
directly in the macros in this file.
Create a new flag for the convertor to help figure out when we have
to do anything special for pack/unpack. A convertor with the flag
CONVERTOR_NO_OP is a basic convertor, designed for contiguous types.
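As a hedged sketch (the flag value and helper are invented; only the
CONVERTOR_NO_OP name comes from this commit), the flag lets pack/unpack
test a single bit to pick the straight-copy path:

    #include <stdint.h>

    #define CONVERTOR_NO_OP 0x0004   /* assumed bit value, for illustration */

    struct convertor { uint32_t flags; /* ... */ };

    /* A basic (contiguous) convertor can skip the generic datatype
     * engine entirely; anything else needs the full conversion path. */
    static inline int convertor_need_conversion(const struct convertor *cv)
    {
        return 0 == (cv->flags & CONVERTOR_NO_OP);
    }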
This commit was SVN r12485.
set it up before the match when we know the peer, saving some
time on the critical path. If the receive is ANY_SOURCE then
we initialize the convertor on _MATCHED. Either way, we set it
up only once per receive.
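A compilable sketch of the decision (the request structure and helpers are
hypothetical stand-ins):

    #include <mpi.h>

    struct recv_request { int source; int convertor_ready; };

    static void convertor_prepare(struct recv_request *req, int peer)
    {
        (void)peer;
        req->convertor_ready = 1;    /* happens exactly once per receive */
    }

    /* At post time: if the peer is known, set up the convertor here,
     * off the critical matching path. */
    static void recv_request_start(struct recv_request *req)
    {
        if (MPI_ANY_SOURCE != req->source)
            convertor_prepare(req, req->source);
    }

    /* At match time: only ANY_SOURCE receives still need their convertor,
     * because the peer is unknown until the message is matched. */
    static void recv_request_matched(struct recv_request *req, int peer)
    {
        if (MPI_ANY_SOURCE == req->source)
            convertor_prepare(req, peer);
    }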
This commit was SVN r12484.
N gatherv's:
for (i = 0 ... size)
    MPI_Gatherv(..., root = i, ...)
The new algorithm simply does (effectively):
MPI_Gatherv(..., root = 0, ...)
MPI_Bcast(..., root = 0, ...)
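Spelled out as compilable C for a concrete datatype (assuming the
displacements densely pack recvbuf, so 'total' is the sum of all
recvcounts):

    #include <mpi.h>

    static void allgatherv_as_gatherv_bcast(const int *sendbuf, int sendcount,
                                            int *recvbuf, const int *recvcounts,
                                            const int *displs, int total,
                                            MPI_Comm comm)
    {
        /* Everyone gathers to rank 0 once... */
        MPI_Gatherv(sendbuf, sendcount, MPI_INT,
                    recvbuf, recvcounts, displs, MPI_INT, 0, comm);
        /* ...then rank 0 broadcasts the assembled buffer to all. */
        MPI_Bcast(recvbuf, total, MPI_INT, 0, comm);
    }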
This commit was SVN r12469.
* Add MPI::Status methods Set_elements() and Set_cancelled()
* Added a bunch of comments in various places in the MPI C++ bindings
implementation just to explain what's going on (because C++ can hide
a lot from you)
* Insert C++ callbacks for the MPI_Grequest callback functions
registered by MPI::Grequest::Start(). These callbacks keep a
little meta-data (created by Grequest::Start()) that allows them
to be invoked with the proper callback signatures from C (i.e.,
from ompi_grequest_<foo>() in libmpi.a -- C code), translate
arguments as required, and then invoke the callbacks with proper
C++ signatures (i.e., call user-defined callbacks with C++
function signatures). A plain-C sketch of this trampoline
pattern follows below.
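The same trampoline idea, sketched in plain C with invented names (the real
code lives in the C++ bindings): the C-visible callback unwraps a meta-data
block, translates the arguments, and forwards to the callback the user
actually registered:

    /* User-facing signature (hypothetical). */
    typedef int (*user_query_fn)(void *context, int *flag);

    struct grequest_meta {
        user_query_fn user_fn;   /* what the user registered */
        void         *context;   /* the user's own state */
    };

    /* Signature the C layer calls back with: one extra_state pointer. */
    static int c_query_trampoline(void *extra_state)
    {
        struct grequest_meta *meta = (struct grequest_meta *)extra_state;
        int flag = 0;
        return meta->user_fn(meta->context, &flag);  /* translated call */
    }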
This commit was SVN r12446.
The following Trac tickets were found above:
Ticket 580 --> https://svn.open-mpi.org/trac/ompi/ticket/580
1. Added reporting points around the xcasts in MPI_Init. Note that these times include time spent waiting for a trigger to fire, which is why the inter-stage-gate times did NOT include them initially. The inter-stage-gate times still do NOT include the xcast time - it is reported separately.
2. Added the process vpid on the MPI_Init timing reports for clarity.
3. Added a report from the xcast function on the HNP that outputs the number of bytes in the message being sent to the processes.
This commit was SVN r12422.
size of the complex type as determined by configure; not the size of
the next larger complex type (i.e., a complex*N is 2 real*(N/2)'s, not
2 real*N's).
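The same rule holds for C99's complex types, which makes for a quick sanity
check:

    #include <complex.h>
    #include <stdio.h>

    int main(void)
    {
        /* complex*8  ~ float  _Complex: 2 x real*4, not 2 x real*8 */
        printf("float  _Complex: %zu = 2 x %zu\n",
               sizeof(float _Complex), sizeof(float));
        /* complex*16 ~ double _Complex: 2 x real*8 */
        printf("double _Complex: %zu = 2 x %zu\n",
               sizeof(double _Complex), sizeof(double));
        return 0;
    }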
This commit was SVN r12421.
* Add some more error checking to GREQUEST_START
* Move the error checking in GREQUEST_COMPLETE up to inside the
MPI_PARAM_CHECK block, where it belongs
* Invoke the generalized request query_fn in all the Right spots (per MPI-2:8.2; a minimal usage sketch follows this list)
* Distinguish between grequests created from C and Fortran
* Use the OBJ system to reference count to release the grequest at
the Right time and invoke the grequest free_fn properly (see
lengthy comment in grequest.c above the destructor)
* Have ompi_grequest_complete() call ompi_request_complete() rather
than [poorly] copy the contents of ompi_request_complete()
* Fix Fortran function callback pointer typedefs to use proper
Fortran types
* Edit ompi_request_test* and ompi_request_wait* to properly handle
generalized requests. This adds an "if" statement in the critical
path for all the back-end test* and wait* functions :-(,
but fortunately George took out two "if" statements from the
critical path last week. So we're still ahead. :-)
* Move ompi_request_test() out of request.h and into request.c (all
other test* and wait* functions were already in the .c file -- and
ompi_request_test() was too long to be statically inlined anyway)
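For reference, a minimal standard-level usage of generalized requests (per
MPI-2:8.2; the callbacks are trivial placeholders):

    #include <mpi.h>
    #include <stdlib.h>

    static int query_fn(void *extra_state, MPI_Status *status)
    {
        (void)extra_state;
        MPI_Status_set_elements(status, MPI_BYTE, 0);
        MPI_Status_set_cancelled(status, 0);
        status->MPI_SOURCE = MPI_UNDEFINED;
        status->MPI_TAG    = MPI_UNDEFINED;
        return MPI_SUCCESS;
    }

    static int free_fn(void *extra_state)
    {
        free(extra_state);           /* runs when the request is released */
        return MPI_SUCCESS;
    }

    static int cancel_fn(void *extra_state, int complete)
    {
        (void)extra_state; (void)complete;
        return MPI_SUCCESS;
    }

    static void run_grequest(void)
    {
        MPI_Request req;
        MPI_Status status;
        MPI_Grequest_start(query_fn, free_fn, cancel_fn,
                           malloc(4) /* extra_state */, &req);
        MPI_Grequest_complete(req);  /* mark the operation as done */
        MPI_Wait(&req, &status);     /* triggers query_fn, then free_fn */
    }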
This commit was SVN r12402.
The following Trac tickets were found above:
Ticket 496 --> https://svn.open-mpi.org/trac/ompi/ticket/496
Had a group discussion about this on the weekly call. The decision was
that we should pass the real error code to the back-end exception
handler, because it's pretty useless to pass MPI_ERR_IN_STATUS to the
back-end exception handler (exception handlers don't have access to
the request or the status, which has potential implications for
fault-tolerance scenarios). So in TESTALL, TESTSOME, WAITALL, and
WAITSOME, we examine the error code and, if it's not MPI_SUCCESS,
return MPI_ERR_IN_STATUS.
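A hedged sketch of that agreement (the handler-invocation helper is a
stand-in, not the real OMPI call):

    #include <mpi.h>

    /* Stand-in for raising the back-end exception handler. */
    static void invoke_error_handler(MPI_Comm comm, int errcode)
    {
        (void)comm; (void)errcode;
    }

    static int waitall_error_path(MPI_Comm comm, int rc)
    {
        if (MPI_SUCCESS != rc) {
            invoke_error_handler(comm, rc);  /* handler sees the real code */
            return MPI_ERR_IN_STATUS;        /* caller consults the statuses */
        }
        return MPI_SUCCESS;
    }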
This commit was SVN r12389.
The following Trac tickets were found above:
Ticket 549 --> https://svn.open-mpi.org/trac/ompi/ticket/549
mca_btl_openib_endpoint_connect_eager_rdma() is called recursively. He also
noticed that orte_pointer_array_add() can't fail because we allocate the
maximum number of elements at init time. So just remove the error handling
and locking. No locking - no deadlocks.
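The reasoning, as a hypothetical structure: with the full capacity allocated
up front, add() performs no allocation and (assuming callers never exceed
the preallocated maximum) cannot fail:

    #include <stdlib.h>

    struct ptr_array { void **slots; size_t capacity; size_t used; };

    static int ptr_array_init(struct ptr_array *a, size_t max)
    {
        a->slots = calloc(max, sizeof(void *));  /* all allocation is here */
        a->capacity = max;
        a->used = 0;
        return (NULL == a->slots) ? -1 : 0;
    }

    /* No allocation, no failure path, hence no error handling needed. */
    static size_t ptr_array_add(struct ptr_array *a, void *item)
    {
        a->slots[a->used] = item;
        return a->used++;
    }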
This commit was SVN r12388.
something goes wrong further down in the code, it is removed from the array.
So add a mutex to prevent concurrent access to the array from different
threads.
This commit was SVN r12385.
Setup subscriptions to correctly return the MPI_APPNUM attribute.
Fix an unreported bug that was found: the universe size was incorrectly defined in the attributes code. As coded, it looked for size_t values and based its size computation on those numbers. Unfortunately, the node_slots value had been changed to an orte_std_cntr_t a while back, so the universe size was never updated (a toy illustration follows below).
Update the hello_nodename test to check for MPI_APPNUM.
Add a definition to ns_types for ORTE_PROC_MY_NAME - just a shortcut for orte_process_info.my_name. Brought over from ORTE 2.0 as it will be used extensively there.
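A toy illustration of how a type-keyed lookup goes stale (the registry and
names are invented):

    #include <stdio.h>

    enum val_type { VAL_SIZE_T, VAL_STD_CNTR };
    struct entry { enum val_type type; long value; };

    static long sum_of_type(const struct entry *e, int n, enum val_type want)
    {
        long total = 0;
        for (int i = 0; i < n; i++)
            if (want == e[i].type)
                total += e[i].value;
        return total;
    }

    int main(void)
    {
        /* node_slots entries were switched to the counter type... */
        struct entry reg[] = { { VAL_STD_CNTR, 4 }, { VAL_STD_CNTR, 8 } };

        /* ...so a computation still looking for size_t values sees zero,
         * and the total is never updated. */
        printf("stale query: %ld\n", sum_of_type(reg, 2, VAL_SIZE_T));
        printf("fixed query: %ld\n", sum_of_type(reg, 2, VAL_STD_CNTR));
        return 0;
    }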
This commit was SVN r12377.
What's happening is that we're holding openib_btl->eager_rdma_lock when
we call mca_btl_openib_endpoint_send_eager_rdma() on
btl_openib_endpoint.c:1227. This in turn calls
mca_btl_openib_endpoint_send() on line 1179. Then, if the endpoint
state isn't MCA_BTL_IB_CONNECTED or MCA_BTL_IB_FAILED, we call
opal_progress(), where we eventually try to lock
openib_btl->eager_rdma_lock at btl_openib_component.c:997.
The fix removes this lock altogether. Instead, we atomically set the local
RDMA pointer to prevent other threads from creating an RDMA buffer for the
same endpoint, and we increment eager_rdma_buffers_count atomically so the
polling thread doesn't need a lock around it.
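A hedged C11 sketch of the lock-free scheme (OMPI uses its own opal atomics;
everything below is illustrative):

    #include <stdatomic.h>
    #include <stdlib.h>

    struct endpoint {
        _Atomic(void *) eager_rdma_local;   /* NULL until a thread claims it */
    };

    static _Atomic int eager_rdma_buffers_count;

    static void try_create_rdma_buffer(struct endpoint *ep)
    {
        void *expected = NULL;

        /* One CAS claims the right to create this endpoint's buffer;
         * losing threads back off instead of blocking on a lock. */
        if (atomic_compare_exchange_strong(&ep->eager_rdma_local,
                                           &expected, (void *)ep)) {
            void *buf = malloc(4096);       /* create the real buffer */
            atomic_store(&ep->eager_rdma_local, buf);

            /* The polling thread reads this count without any lock. */
            atomic_fetch_add(&eager_rdma_buffers_count, 1);
        }
    }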
This commit was SVN r12369.