I found only two places that were looking at the tokens:
1. the odls - we used the tokens to process the globals container data separately from everything else. In this case, I left the subscription that returned the globals data alone, but "stripped" the subscription that returned the launch data for the procs. These subscriptions have nothing to do with the xcast message.
2. the pml_base_modex - the callback function was getting process names from the returned tokens. Actually, this function was doing a very bad thing - it was assuming that the first token returned was *always* the process name. This is currently true, but is one of those assumptions that someone could easily have changed - and suddenly found the system inexplicably failing. I modified the function to (a) get the name sent back to us, (b) "strip" the value structures of tokens and segment strings, and (c) correctly obtain process names from the returned values. I also reindented the heck out of the code so it was legible (at least, to my old eyes).
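To illustrate the modex change, here is a minimal sketch of the fragile pattern versus the corrected one. The types are simplified stand-ins; the real code works with orte_gpr_value_t and orte_process_name_t:

    #include <stdio.h>

    /* Simplified stand-ins for the ORTE registry types; the real code
     * works with orte_gpr_value_t and orte_process_name_t. */
    typedef struct { int jobid; int vpid; } name_t;

    typedef struct {
        char **tokens;     /* registry tokens: ordering NOT guaranteed */
        int    num_tokens;
        name_t name;       /* process name carried in the value itself */
    } value_t;

    /* Fragile pattern that was removed: assume tokens[0] is always the
     * stringified process name. */
    static name_t name_from_first_token(const value_t *val)
    {
        name_t n = { -1, -1 };
        sscanf(val->tokens[0], "%d.%d", &n.jobid, &n.vpid);
        return n;
    }

    /* Robust pattern: read the name the sender placed in the value. */
    static name_t name_from_value(const value_t *val)
    {
        return val->name;
    }

    int main(void)
    {
        char *toks[] = { "some-other-token", "3.7" }; /* order changed */
        value_t v = { toks, 2, { 3, 7 } };

        name_t bad  = name_from_first_token(&v); /* parses wrong token */
        name_t good = name_from_value(&v);

        printf("fragile: %d.%d  robust: %d.%d\n",
               bad.jobid, bad.vpid, good.jobid, good.vpid);
        return 0;
    }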
This commit was SVN r12813.
This commit fixes several aspects of the MPI conformance of requests.
* Eliminate the last argument of ompi_errhandler_request_invoke(); we
''always'' want to invoke the back-end exception handler with the
real error code.
* Make it clear in comments that we only invoke the ''first''
exception in a given array of requests, even if there's more than
one request with a non-MPI_SUCCESS value for MPI_ERROR.
* Defer the freeing of requests upon exception in the back-end
functions behind MPI_WAIT* and MPI_TEST*; the requests are
kept so that we know which handler to invoke when we actually invoke
the exception. After figuring that out, ''then'' we free requests
with pending exceptions on them (see the sketch after this list).
* Clean up return codes from the back-end MPI_TEST* and MPI_WAIT*
functions.
* Slightly modify ompi_errcode_get_mpi_code() to return the code
unchanged (i.e., act as the identity) if it receives an MPI error code
(vs. an OMPI error code).
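A condensed sketch of the wait/test error-reporting rule above, with stand-in types (the real logic lives in the back-end wait/test functions and ompi_errhandler_request_invoke()): report the first failed request's real error code, and free failed requests only after that decision is made.

    /* Stand-in request type; 0 plays the role of MPI_SUCCESS. */
    typedef struct {
        int mpi_error;   /* status.MPI_ERROR for this request */
        int freed;
    } req_t;

    static int complete_all(req_t *reqs, int n)
    {
        int first_err = 0, i;

        /* Only the FIRST failed request decides which exception we
         * raise, and we raise it with its real error code. */
        for (i = 0; i < n; ++i) {
            if (reqs[i].mpi_error != 0) {
                first_err = reqs[i].mpi_error;
                break;
            }
        }

        /* ... here the real code invokes the back-end error handler
         * with first_err ... */

        /* Freeing is deferred until after the handler decision, so we
         * still knew which request/handler to use above. */
        for (i = 0; i < n; ++i) {
            if (reqs[i].mpi_error != 0 && !reqs[i].freed) {
                reqs[i].freed = 1;
            }
        }
        return first_err;
    }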
This commit was SVN r12810.
The following Trac tickets were found above:
Ticket 659 --> https://svn.open-mpi.org/trac/ompi/ticket/659
usually is OK on little-endian systems, as the upper 32 bits will likely
be ignored, but on 32-bit big-endian systems, lval is complete junk.
Use ival in 32-bit mode and lval in 64-bit mode.
Mixing 32- and 64-bit architectures won't work without more changes.
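As a sketch of the failure mode (the union is a stand-in, and the width test below is only illustrative of "32-bit mode"): writing the 32-bit member and then reading the 64-bit one only appears to work on little-endian machines.

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for the ival/lval union described above. */
    union val {
        int32_t ival;
        int64_t lval;
    };

    int main(void)
    {
        union val v;
        v.ival = 42;   /* only the 32-bit member was written */

        /* On little-endian machines the written bytes occupy the low
         * half of lval, so truncating consumers get away with reading
         * it. On a 32-bit big-endian machine those bytes land in the
         * HIGH half, so lval is junk. */

        /* The fix: read the member matching the build's width. */
        int64_t out = (sizeof(long) == 4) ? (int64_t) v.ival : v.lval;
        printf("%lld\n", (long long) out);
        return 0;
    }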
This commit was SVN r12802.
* Do not add new procs to the global list during modex callback or
when sharing orte names during accept/connect. For modex, we
cache the modex info for later, in case that proc ever does get
added to the global proc list. For accept/connect orte name
exchange between the roots, we only need the orte name, so no
need to add a proc structure anyway. The procs will be added
to the global process list during the proc exchange later in
the wireup process.
* Rename proc_get_namebuf and proc_get_proclist to proc_pack
and proc_unpack and extend them to include all information
needed to build that proc struct on a remote node (which
includes ORTE name, architecture, and hostname). Change
unpack to call pml_add_procs for the entire list of new
procs at once, rather than one at a time.
* Remove ompi_proc_find_and_add from the public proc
interface and make it a private function. This function
would add a half-created proc to the global proc list, so
making it harder to call is a good thing.
This means that there are only two ways to add new procs to the global proc list at this time: during MPI_INIT via the call to ompi_proc_init, where my job is added to the list, and via ompi_proc_unpack, using a buffer from a packed proc list sent to us by someone else. Currently, this is enough to implement MPI semantics. We can extend the interface more if we like, but that may require HNP communication to get the remote proc information and I wanted to avoid that if at all possible.
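A rough sketch of the pack/unpack shape described above, with stand-in types (the real functions are the renamed ompi_proc_pack()/ompi_proc_unpack() operating on ORTE buffers):

    #include <stdint.h>
    #include <string.h>

    /* Everything a peer needs to rebuild a proc locally. */
    typedef struct {
        uint32_t jobid, vpid;   /* ORTE process name */
        uint32_t arch;          /* architecture flags */
        char     hostname[64];
    } proc_wire_t;

    /* pack: serialize the full list for transmission */
    static size_t proc_pack(const proc_wire_t *procs, int n, void *buf)
    {
        memcpy(buf, procs, (size_t) n * sizeof(*procs));
        return (size_t) n * sizeof(*procs);
    }

    /* unpack: rebuild ALL procs, then make a single bulk add_procs
     * call rather than one call per proc. */
    static int proc_unpack(const void *buf, int n,
                           int (*pml_add_procs)(const proc_wire_t *, int))
    {
        return pml_add_procs((const proc_wire_t *) buf, n);
    }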
Refs trac:564
This commit was SVN r12798.
The following Trac tickets were found above:
Ticket 564 --> https://svn.open-mpi.org/trac/ompi/ticket/564
* don't load data into a buffer until we actually have it, as
the payload contains header information needed to
load it properly
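A generic sketch of that ordering rule (the actual code deals with ORTE buffers; these types are illustrative): read the header that travels with the data first, then load under the parameters it dictates.

    #include <stdint.h>
    #include <string.h>

    typedef struct { uint32_t type; uint32_t len; } hdr_t;

    static int load_when_ready(const uint8_t *payload, size_t size,
                               uint8_t *dest, size_t dest_max)
    {
        hdr_t hdr;

        if (size < sizeof(hdr)) {
            return -1;                  /* don't touch the buffer yet */
        }
        memcpy(&hdr, payload, sizeof(hdr)); /* header travels with data */

        if (hdr.len > dest_max || sizeof(hdr) + hdr.len > size) {
            return -1;
        }
        memcpy(dest, payload + sizeof(hdr), hdr.len);  /* now safe */
        return 0;
    }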
This commit was SVN r12792.
Obviously, people like bproc will have to get the app_num via another avenue...but that's a problem for another day. Several options are easily available.
This commit was SVN r12788.
- we have to be able to attach a string to an error class, not just to an
error code
- according to MPI-2 the attribute MPI_LASTUSEDCODE has to be updated
every time you add a new code or a new class. Thus, you have to have a
single list for both.
Thus, we got rid of the error_class structure. In the error-code structure, we
can distinguish whether we are dealing with an error code or an error class by
looking at the err->code element of the structure. If its value is
MPI_UNDEFINED, the corresponding entry is a class; otherwise it is an error
code. All predefined error codes have the code and the class field set to the
same value.
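The user-visible consequence can be exercised directly through the MPI API; this toy program attaches strings to both a new class and a new code, and MPI_LASTUSEDCODE advances for each addition:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int cls, code, *last, flag;
        MPI_Init(&argc, &argv);

        /* Strings can be attached to classes as well as codes, and
         * MPI_LASTUSEDCODE must advance for BOTH kinds of addition. */
        MPI_Add_error_class(&cls);
        MPI_Add_error_string(cls, "example class string");
        MPI_Add_error_code(cls, &code);
        MPI_Add_error_string(code, "example code string");

        MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_LASTUSEDCODE, &last, &flag);
        printf("last used code: %d (class %d, code %d)\n",
               *last, cls, code);

        MPI_Finalize();
        return 0;
    }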
The test MPI_Add_error_class1 passes now.
Fixes trac:418
This commit was SVN r12764.
The following Trac tickets were found above:
Ticket 418 --> https://svn.open-mpi.org/trac/ompi/ticket/418
r12714) for supporting compilers / architectures with different
padding rules.
This commit was SVN r12749.
The following SVN revisions from the original message are invalid or
inconsistent and therefore were not cross-referenced:
r12491
r12714
hits the buffer on the other side. For this kind of BTL we need to send the
FIN through the same BTL the PUT was performed with, so the network will
handle ordering for us. If we used another BTL, the receiver could get the FIN
before the data hits the buffer and complete the request prematurely. We mark
such problematic BTLs with the MCA_BTL_FLAGS_FAKE_RDMA flag (this kind of RDMA
is really fake, because the real thing guarantees that the sender sees the
completion only after the receiver's NIC has confirmed that all the data was
received).
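A sketch of the resulting FIN-routing rule. The types are stand-ins; MCA_BTL_FLAGS_FAKE_RDMA is the flag introduced here, but its numeric value below is made up.

    #define FLAG_FAKE_RDMA 0x0100   /* illustrative value */

    typedef struct { unsigned flags; } btl_t;

    static btl_t *pick_fin_btl(btl_t *put_btl, btl_t *other_btl)
    {
        /* If remote completion does not imply the data reached the
         * target buffer, the FIN must follow the PUT's own channel so
         * the network orders it behind the data. */
        if (put_btl->flags & FLAG_FAKE_RDMA) {
            return put_btl;
        }
        return other_btl;   /* true RDMA: any BTL may carry the FIN */
    }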
This commit was SVN r12732.
It calls mca_pml_ob1_send_fin_btl(), which may fail, and doesn't check the
return code. This breaks all RDMA transports even when only one BTL is used.
Revert it for now; I am working on a real fix for the problem (I hope).
This commit was SVN r12731.
The following SVN revision numbers were found above:
r12720 --> open-mpi/ompi@3e3689320b
regression from v1.1 was reviewed and put into the v1.2 branch. So revert this
part of r12721.
This commit was SVN r12730.
The following SVN revision numbers were found above:
r12433 --> open-mpi/ompi@82f7c0dd69
r12721 --> open-mpi/ompi@3edd850d2e
protocol when multiple NICs are available between two peers. The fix forces
the FIN message to take exactly the same path as the fragment it describes
(i.e., same path means same BTL). Otherwise, the FIN can be received by
the peer before the RDMA completes and the request will get freed
too early.
This commit was SVN r12720.
- consistent error message when something fails (via BTL_ERROR macro)
- decrease the number of jumps.
- clean up some parts of the code.
This commit was SVN r12719.
* Always invoke the error handler on MPI_COMM_WORLD for
invalid windows (except in win_create, which should
instead be on the given communicator).
* Allow get_errhandler in addition to set_errhandler
on MPI_WIN_NULL
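Under the behavior this commit implements (later MPI versions and implementations may be stricter about the null window), both of the following calls are accepted:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Errhandler eh;
        MPI_Init(&argc, &argv);

        /* allowed on the null window per this change */
        MPI_Win_set_errhandler(MPI_WIN_NULL, MPI_ERRORS_RETURN);
        MPI_Win_get_errhandler(MPI_WIN_NULL, &eh);

        MPI_Finalize();
        return 0;
    }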
Refs trac:647
This commit was SVN r12718.
The following Trac tickets were found above:
Ticket 647 --> https://svn.open-mpi.org/trac/ompi/ticket/647
The temporary solution is to switch into EV_NONBLOCK mode earlier (right after the mx_connect loop) so that there isn't a giant slowdown when some processes enter the stage gate 2 barrier before other processes do. They will now not block in the event library for any period of time, which appears to give a 50% speedup when running at > 64 procs.
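For reference, the non-blocking poll has the same shape as this libevent 2.x fragment (Open MPI uses its embedded event library, so the names differ; this only illustrates the EV_NONBLOCK semantics):

    #include <event2/event.h>

    static void progress_once(struct event_base *base)
    {
        /* Dispatch whatever is ready and return immediately: a process
         * that reaches the barrier early never sits blocked in the
         * event library while its peers catch up. */
        event_base_loop(base, EVLOOP_NONBLOCK);
    }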
Refs trac:645
This commit was SVN r12713.
The following Trac tickets were found above:
Ticket 645 --> https://svn.open-mpi.org/trac/ompi/ticket/645
OMPI_ARRAY_INT_2_LOGICAL had an array bounds error - fixed this and the
analogous error in OMPI_ARRAY_LOGICAL_2_INT.
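The commit message doesn't show the macros, but the conversions have this shape; the bounds fix amounts to making sure the loops never walk past n elements (illustrative C, assuming a Fortran LOGICAL is representable as an int with .TRUE. == 1):

    static void int_to_logical(const int *in, int *logical_out, int n)
    {
        int i;
        for (i = 0; i < n; ++i) {      /* i < n, not i <= n */
            logical_out[i] = (in[i] != 0) ? 1 : 0;
        }
    }

    static void logical_to_int(const int *logical_in, int *out, int n)
    {
        int i;
        for (i = 0; i < n; ++i) {
            out[i] = (logical_in[i] != 0);
        }
    }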
This commit was SVN r12712.
The following Trac tickets were found above:
Ticket 482 --> https://svn.open-mpi.org/trac/ompi/ticket/482
case where sizeof(INTEGER) > sizeof(int).
This commit was SVN r12707.
The following SVN revision numbers were found above:
r12684 --> open-mpi/ompi@e2c605f32a
so this isn't an issue there either. Refs trac:488
This commit was SVN r12675.
The following Trac tickets were found above:
Ticket 488 --> https://svn.open-mpi.org/trac/ompi/ticket/488
* Fix a counter roll-over issue that could result from a large (but
not excessive) number of outstanding put/get/accumulate calls
during a single synchronization (Refs trac:506; see the sketch below)
* Fix epoch issue with the rdma component that would affect PWSC
synchronization (Refs trac:507)
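One standard way to make such counter comparisons safe against roll-over is serial-number arithmetic; this sketch shows the idea, though the actual one-sided-component fix may differ:

    #include <stdint.h>

    /* Roll-over-safe completion check: the signed difference of
     * unsigned counters tolerates wrap-around as long as the two
     * values stay within 2^31 of each other. */
    static int reached(uint32_t current, uint32_t target)
    {
        return (int32_t)(current - target) >= 0;
    }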
This commit was SVN r12673.
The following Trac tickets were found above:
Ticket 506 --> https://svn.open-mpi.org/trac/ompi/ticket/506
Ticket 507 --> https://svn.open-mpi.org/trac/ompi/ticket/507
* use one-sided datatype check instead of send/receive and check both
the origin and target datatypes
* allow error handler to be set on MPI_WIN_NULL, per standard
* Allow recursive calls into the pt2pt osc component's progress
function
* Fix an uninitialized variable problem in the unlock header
This commit was SVN r12667.
because they are in ORTE, not OMPI. Also, remove the ORTE_PROCESS_NAME macros
in iof base as they are duplicates of the ones that were in ns_types, which
meant that bad things happened if you changed what an orte_process_name_t
looked like.
This commit was SVN r12646.
the same time, remove some of the MPI-related options from OPAL:
- provide mechanism to change at runtime whether sched_yield() should
be called when the progress engine is idle
- provide mechanism for changing the rate at which the event engine
is called when there are "no" users of the event engine (ie, when
using MPI but not TCP)
- fix some function names in the progress engine to better match
their intended use (and remove MPI naming scheme)
- remove progress_mpi_enable / progress_mpi_disable because
we can now use the functions to set the sched_yield and
tick rate interfaces
- rename opal_progress_events() to opal_progress_set_event_flag()
because the first really isn't descriptive of what the function
does and I always got confused by it
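How a caller might drive the new knobs; opal_progress_set_event_flag() is named above, while the other two setter names, the flag constant, and the poll-rate units are assumptions based on the bullets:

    #include "opal/runtime/opal_progress.h"

    static void configure_progress(int oversubscribed, int have_tcp)
    {
        /* call sched_yield() in the idle loop only when oversubscribed */
        opal_progress_set_yield_when_idle(oversubscribed);

        /* poll the event engine rarely when nothing (e.g. TCP) needs it */
        if (!have_tcp) {
            opal_progress_set_event_poll_rate(10000);
        }

        /* replaces the old opal_progress_events() call */
        opal_progress_set_event_flag(OPAL_EVLOOP_NONBLOCK);
    }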
This commit was SVN r12645.
Ensure that the new predefined MPI-2 attribute callback functions take
the proper types (INTEGER, kind=MPI_ADDRESS_KIND instead of just
INTEGER).
This commit was SVN r12639.
The following Trac tickets were found above:
Ticket 624 --> https://svn.open-mpi.org/trac/ompi/ticket/624
Accordingly, there are new APIs to the name service to support the ability to get a job's parent, root, immediate children, and all its descendants. In addition, the terminate_job, terminate_orted, and signal_job APIs for the PLS have been modified to accept attributes that define the extent of their actions. For example, doing a "terminate_job" with an attribute of ORTE_NS_INCLUDE_DESCENDANTS will terminate the given jobid AND all jobs that descended from it.
I have tested this capability on a MacBook under rsh, Odin under SLURM, and LANL's Flash (bproc). It worked successfully on non-MPI jobs (both simple and including a spawn), and MPI jobs (again, both simple and with a spawn).
This commit was SVN r12597.
Same sort of problem and fix as described in r12323 - mca_pml_ob1_recv_frag_progress() was segfaulting due to a NULL req_proc pointer. The path leading to this was through the mca_pml_ob1_check_cantmatch_for_match() function, where we can match a frag using the same macros as mca_pml_ob1_frag_match() and never initialize the req_proc pointer.
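The shape of the fix, reduced to stand-ins: every matching path, including the out-of-order one through mca_pml_ob1_check_cantmatch_for_match(), has to set req_proc before the request is progressed.

    /* Stand-in types; the real ones are ob1's receive request and
     * ompi_proc_t. */
    typedef struct proc proc_t;
    typedef struct { proc_t *req_proc; } recv_request_t;

    static void match_completed(recv_request_t *req, proc_t *proc)
    {
        /* Previously the cantmatch path left this NULL, and
         * mca_pml_ob1_recv_frag_progress() later dereferenced it. */
        req->req_proc = proc;

        /* ... continue to recv_frag_progress(req) ... */
    }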
This commit was SVN r12582.
The following SVN revision numbers were found above:
r12323 --> open-mpi/ompi@c752502dee