to happen
* Properly error out (rather than cause buffer overflow) in case where
the datatype packed description is larger than our control fragments.
This still isn't standards conforming, but at least we know what
happened.
* Expose win_set_name to external libraries (like the osc modules)
* Set default window name to the CID of the communicator it's using
for communication (see the example below)
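For reference, a minimal sketch of overriding that default via the standard MPI interface (the name string here is just an example):
{{{
#include <mpi.h>

/* buf is an existing allocation of size bytes; the window's default
 * name will now be the CID of its communicator, but callers can
 * still override it explicitly: */
MPI_Win win;
MPI_Win_create(buf, size, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);
MPI_Win_set_name(win, "my_library_window");
}}}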
Refs trac:1905
This commit was SVN r21134.
The following Trac tickets were found above:
Ticket 1905 --> https://svn.open-mpi.org/trac/ompi/ticket/1905
After much work by Jeff and myself, and quite a lot of discussion, it has become clear that we simply cannot resolve the infinite loops caused by RML-involved subsystems calling orte_output. The original rationale for the change to orte_output has also been weakened by shifting the output of XML-formatted vs. human-readable messages to an alternative approach.
I have globally replaced the orte_output/ORTE_OUTPUT calls in the code base, as well as the corresponding .h file name. I have test compiled and run this on the various environments within my reach, so hopefully this will prove minimally disruptive.
This commit was SVN r18619.
such, the commit message back to the master SVN repository is fairly
long.
= ORTE Job-Level Output Messages =
Add two new interfaces that should be used for all new code throughout
the ORTE and OMPI layers (we have already done the search-and-replace
on the existing ORTE / OMPI layers):
* orte_output(): (and corresponding friends ORTE_OUTPUT,
orte_output_verbose, etc.) This function sends the output directly
to the HNP for processing as part of a job-specific output
channel. It supports all the same outputs as opal_output()
(syslog, file, stdout, stderr), but for stdout/stderr, the output
is sent to the HNP for processing and output. More on this below.
* orte_show_help(): This function is a drop-in-replacement for
opal_show_help(), with two differences in functionality:
1. the rendered text help message output is sent to the HNP for
display (rather than outputting directly into the process' stderr
stream)
2. the HNP detects duplicate help messages and does not display them
(so that you don't see the same error message N times, once from
each of your N MPI processes); instead, it counts "new" instances
of the help message and displays a message every ~5 seconds when
there are new ones ("I got X new copies of the help message...")
opal_show_help and opal_output still exist, but they only output in
the current process. The intent for the new orte_* functions is that
they can apply job-level intelligence to the output. As such, we
recommend that all new ORTE and OMPI code use the new orte_*
functions, not the opal_* functions.
=== New code ===
For ORTE and OMPI programmers, here's what you need to do differently
in new code (a short sketch follows this list):
* Do not include opal/util/show_help.h or opal/util/output.h.
Instead, include orte/util/output.h (this one header file has
declarations for both the orte_output() series of functions and
orte_show_help()).
* Effectively s/opal_output/orte_output/gi throughout your code.
Note that orte_output_open() takes a slightly different argument
list (as a way to pass data to the filtering stream -- see below),
so if you explicitly call opal_output_open(), you'll need to adapt
slightly to the new signature of orte_output_open().
* Literally s/opal_show_help/orte_show_help/. The function signature
is identical.
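For example, a short sketch of new-style code (assuming, per the above, that orte_output() and orte_show_help() mirror the signatures of their opal counterparts; the help file and topic names are placeholders):
{{{
#include "orte/util/output.h"  /* replaces opal/util/output.h and
                                  opal/util/show_help.h */

static void report_failure(const char *peer)
{
    /* stream 0 behaves like opal_output's stream 0 (see Notes below) */
    orte_output(0, "connection to %s failed", peer);

    /* drop-in replacement for opal_show_help(); identical signature */
    orte_show_help("help-example.txt", "connection-failed", true, peer);
}
}}}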
=== Notes ===
* orte_output'ing to stream 0 behaves much like opal_output'ing
to stream 0 did, so leaving a hard-coded "0" as the first
argument is safe.
* For systems that do not use ORTE's RML or the HNP, the effect of
orte_output_* and orte_show_help will be identical to their opal
counterparts (the additional information passed to
orte_output_open() will be lost!). Indeed, the orte_* functions
simply become trivial wrappers to their opal_* counterparts. Note
that we have not tested this; the code is simple but it is quite
possible that we mucked something up.
= Filter Framework =
Messages sent via the new orte_* functions described above and
messages output via the IOF on the HNP will now optionally be passed
through a new "filter" framework before being output to
stdout/stderr. The "filter" OPAL MCA framework is intended to allow
preprocessing to messages before they are sent to their final
destinations. The first component that was written in the filter
framework was to create an XML stream, segregating all the messages
into different XML tags, etc. This will allow 3rd party tools to read
the stdout/stderr from the HNP and be able to know exactly what each
text message is (e.g., a help message, another OMPI infrastructure
message, stdout from the user process, stderr from the user process,
etc.).
Filtering is not active by default. Filter components must be
specifically requested, such as:
{{{
$ mpirun --mca filter xml ...
}}}
There can only be one filter component active.
= New MCA Parameters =
The new functionality described above introduces two new MCA
parameters:
* '''orte_base_help_aggregate''': Defaults to 1 (true), meaning that
help messages will be aggregated, as described above. If set to 0,
all help messages will be displayed, even if they are duplicates
(i.e., the original behavior); see the example after this list.
* '''orte_base_show_output_recursions''': An MCA parameter to help
debug one of the known issues, described below. It is likely that
this MCA parameter will disappear before v1.3 final.
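For example, to restore the original non-aggregated behavior:
{{{
$ mpirun --mca orte_base_help_aggregate 0 ...
}}}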
= Known Issues =
* The XML filter component is not complete. The current output from
this component is preliminary and not real XML. A bit more work
needs to be done to have configure.m4 search for an appropriate XML
library, link it in, and use it at run time.
* There are possible recursion loops in the orte_output() and
orte_show_help() functions -- e.g., if RML send calls orte_output()
or orte_show_help(). We have some ideas how to fix these, but
figured that it was ok to commit before feature freeze with known
issues. The code currently contains sub-optimal workarounds so
that this will not be a problem, but it would be good to actually
solve the problem rather than have hackish workarounds before v1.3 final.
This commit was SVN r18434.
used at once (up to one unique collective module per collective function).
Matches r15795:15921 of the tmp/bwb-coll-select branch
This commit was SVN r15924.
The following SVN revisions from the original message are invalid or
inconsistent and therefore were not cross-referenced:
r15795
r15921
a WIN_FREE
* Fix race condition in threaded builds with pending unlocks and
finishing an epoch
* Fix memory leak due to use of OBJ_DESTRUCT instead of OBJ_RELEASE
* Fix race condition between releasing multiple shared locks and
starting a new lock
* Need to increment the shared count if starting a new shared
lock once an exclusive lock finishes
This commit was SVN r15185.
OBJ_NEW
* Need to signal when the passive unlock has left an expose epoch for
the win_free case
* Clean up some debugging output
* Fix missing variable initialization
This commit was SVN r15167.
to do while(...) { } then we can't change the variables in the ...
atomically, but should do it while holding the module lock.
* Fix dumb communicator creation error when we don't create the progress
stuff (because a window already exists), where we would accidentally
jump to the error case.
This commit was SVN r14715.
* Combine polling of the long requests and buffer requests into
one type, and in one place
* Associate the list of requests to poll with the component, not
the individual modules
* add progress thread that sits on the OMPI request structure
and wakes up at the appropriate time to poll the message
list. Not the best, but without some asynch notification
from the PML that a given set of requests has completed, there
isn't much better
* Instead of calling opal_progress() all over the place, move
to using the condition variables like the rest of the project.
Has the advantage of moving it slightly further along in the
becoming-thread-safe thing (see the sketch below)
* Fix a problem with the passive side of unlock where it could
go recursive and cause all kinds of problems, especially
when progress threads are used. Instead, have two parts of
passive unlock -- one to start the unlock, and another to
complete the lock and send the ack back. The data moving
code trips the second at the right time.
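As a generic illustration of the pattern (plain pthreads rather than the project's opal condition/mutex wrappers; all names here are made up for the sketch):
{{{
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int outstanding_reqs = 0;

/* completion side: called as each request finishes */
static void request_completed(void)
{
    pthread_mutex_lock(&lock);
    if (--outstanding_reqs == 0) {
        pthread_cond_broadcast(&cond);  /* wake any waiters */
    }
    pthread_mutex_unlock(&lock);
}

/* waiting side: sleep until everything completes instead of spinning
 * in opal_progress(); the condition is re-checked under the lock */
static void wait_for_completion(void)
{
    pthread_mutex_lock(&lock);
    while (outstanding_reqs > 0) {
        pthread_cond_wait(&cond, &lock);
    }
    pthread_mutex_unlock(&lock);
}
}}}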
This commit was SVN r14703.
it kills ROMIO and one-sided performance when using only TCP. The problem
is that it only allows those two to be progressed every couple of seconds,
leading to what looks like hangs in the one-sided tests (and the ROMIO stuff,
although people seem to not notice that at this point).
This commit was SVN r14142.
The following SVN revision numbers were found above:
r14073 --> open-mpi/ompi@64fbbc20b8
so there's no longer a race there (I used to do the unlock request last, after local completion of all the
requests, to try to avoid having the passive side reply to the active side, but I don't do that
anymore). The unlock side will not "unlock" the window until it actually receives the correct number of results,
so we're good there.
This fixes an issue where we would receive data on the remote side we weren't expecting that could cause
us to release a lock before it really should have been released to the requesting peer. It could also
cause a deadlock if one of the processes trying to unlock was "self", as that would result in the active
unlock never sending the unlock request, even though it sent the payload, which could cause a counter
that should always be positive to hit -1, causing an infinite loop that could only be solved by
popping up the stack, which was an impossibility.
Refs trac:785
This commit was SVN r13160.
The following Trac tickets were found above:
Ticket 785 --> https://svn.open-mpi.org/trac/ompi/ticket/785
Otherwise, we end up recursively calling into the progress functions
and corrupting a list that doesn't like to be corrupted.
Refs trac:561
This commit was SVN r13138.
The following Trac tickets were found above:
Ticket 561 --> https://svn.open-mpi.org/trac/ompi/ticket/561
side and only let MPI_WIN_UNLOCK return when the passive side has actively
replied that the window is unlocked.
Refs trac:761
This commit was SVN r13118.
The following Trac tickets were found above:
Ticket 761 --> https://svn.open-mpi.org/trac/ompi/ticket/761
* Fix a counter roll-over issue that could result from a large (but
not excessive) number of outstanding put/get/accumulate calls
during a single synchronization phase (Refs trac:506)
* Fix epoch issue with rdma component that would affect PWSC
synchronization (Refs trac:507)
This commit was SVN r12673.
The following Trac tickets were found above:
Ticket 506 --> https://svn.open-mpi.org/trac/ompi/ticket/506
Ticket 507 --> https://svn.open-mpi.org/trac/ompi/ticket/507
* use one-sided datatype check instead of send/receive and check both
the origin and target datatypes
* allow error handler to be set on MPI_WIN_NULL, per standard
* Allow recursive calls into the pt2pt osc component's progress
function
* Fix an uninitialized variable problem in the unlock header
This commit was SVN r12667.
where a window was in both the passive and active side of a lock sequence.
Refs trac:488
This commit was SVN r12112.
The following Trac tickets were found above:
Ticket 488 --> https://svn.open-mpi.org/trac/ompi/ticket/488
tell if the remote proc should be in an exposure epoch or not.
Refs trac:325
This commit was SVN r11746.
The following Trac tickets were found above:
Ticket 325 --> https://svn.open-mpi.org/trac/ompi/ticket/325
epoch's control data could overwrite the previous epoch's data because
we were reusing data structures between PW and SC. Instead, we now
have explicit post_msg and complete_msg counters for completion.
refs trac:354
* Only register the rdma osc callback once, as it turns out that some
btls (MX) do something more than update a table during the register
call, and each register call sucks up valuable fragments...
This commit was SVN r11745.
The following Trac tickets were found above:
Ticket 354 --> https://svn.open-mpi.org/trac/ompi/ticket/354
long ago) supposed to be used as a cache for accessing the PML procs. But in
all of the PMLs the PML proc contains only one field, i.e., a pointer to the ompi_proc.
This pointer can be accessed easily using the c_remote_group. Therefore, there is no
point in keeping the PML procs around. Slim-fast commit ...
This commit was SVN r11730.
can be in both a Post and Start state. Also, the asserts were only
correct assuming that we were never in the post and start state at the
same time, which was obviously silly.
refs trac:303
This commit was SVN r11428.
The following Trac tickets were found above:
Ticket 303 --> https://svn.open-mpi.org/trac/ompi/ticket/303
implemented entirely on top of the PML. This allows us to have a
one-sided interface even when we are using the CM PML and MTLs for
point-to-point transport (and therefore not using the BML/BTLs)
* Old pt2pt component was renamed "rdma", as it will soon be having
real RDMA support added to it.
Work was done in a temporary branch. Commit is the result of the
merge command:
svn merge -r10862:11099 https://svn.open-mpi.org/svn/ompi/tmp/bwb-osc-pt2pt
This commit was SVN r11100.
The following SVN revisions from the original message are invalid or
inconsistent and therefore were not cross-referenced:
r10862
r11099
default
* Add ability to start Put and Get requests immediately instead of queuing
until synchronization when using Fence. Not entirely sure this is
completely safe, so it must be explicitly enabled by the user, either with
an MCA parameter or info argument to Win_create (sketched below).
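A sketch of the info-argument route (the key name "eager_send" is hypothetical; the actual key and MCA parameter names aren't recorded here):
{{{
/* hypothetical info key requesting immediate Put/Get starts;
 * buf is an existing allocation of size bytes */
MPI_Info info;
MPI_Win win;
MPI_Info_create(&info);
MPI_Info_set(info, "eager_send", "true");  /* placeholder key name */
MPI_Win_create(buf, size, 1, info, MPI_COMM_WORLD, &win);
MPI_Info_free(&info);
}}}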
This commit was SVN r9418.
you can't be in a post and a start at the same time, and that is clearly
legal to do
* Fix interpretation of when the epochs start for MPI_Fence. Only start
an epoch if communication actually occurs, otherwise it isn't actually
an epoch. I don't know who thought that wording in the MPI standard
was a good idea, but can't change it now...
This commit was SVN r9139.
for a Get request and the reply came in before the local completion
callback was fired from the btl.
* Silence some more debugging output for the moment
This commit was SVN r9130.
* fix a silly bug where we weren't adding a long accumulate message to
the pending long messages list, so we hung if a long accumulate
occurred. Still having some memory issues on one of the tests I'm
running - need to move over to Linux and Valgrind after some sleep.
This commit was SVN r9108.
what is going on
* Fix a dumb error where if a lock was released and another was pending, I
would send a lock *request* to the queued process, rather than giving
him the lock and sending him a lock *ack*. Turns out requests and
acks are different things :)
This commit was SVN r9103.
* Implement win_lock and win_unlock in the pt2pt component. Not well
tested, but appears to move bits if properly motivated (see the example below)...
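In standard MPI terms, the newly working path looks like this (illustrative only):
{{{
#include <mpi.h>

/* passive-target epoch: lock the target's window, move some bits,
 * and unlock; MPI_Win_unlock returns once the epoch completes */
static void put_to_peer(MPI_Win win, int target, int *buf, int count)
{
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, target, 0, win);
    MPI_Put(buf, count, MPI_INT, target, 0, count, MPI_INT, win);
    MPI_Win_unlock(target, win);
}
}}}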
This commit was SVN r8922.
* rework the thread locking so that it at least makes sense to me. Still
need to do a bunch of testing before I'm happy with it, but it's a tad
bit closer...
This commit was SVN r8918.
* Implement fortran handle -> c handle tracking
* Remove some unneeded locking around free lists (the free list
macros do their own locking)
* Try to be a bit more memory friendly with the w_mode setting /
checking
This commit was SVN r8865.
a subset of win's group, so this should never happen. But users have
been known to screw up before, so return a reasonable error.
This commit was SVN r8855.
complete, but stable enough that it will have no impact on general development,
so into the trunk it goes. Changes in this commit include:
- Remove the --with option for disabling MPI-2 onesided support. It
complicated the code and had no real reason for existing
- add a framework osc (OneSided Communication) for encapsulating
all the MPI-2 onesided functionality
- Modify the MPI interface functions for the MPI-2 onesided chapter
to properly call the underlying framework and do the required
error checking
- Created an osc component pt2pt, which is layered over the BML/BTL
for communication (although it also uses the PML for long message
transfers). Currently, all support functions, all communication
functions (Put, Get, Accumulate), and the Fence synchronization
function are implemented. The PWSC active synchronization
functions and Lock/Unlock passive synchronization functions are
still not implemented (a minimal example of what works follows)
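A minimal program exercising what is implemented so far (window creation, Put, and Fence synchronization; run with at least two ranks):
{{{
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, buf = 0, one = 1;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Win_create(&buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);            /* open the epoch */
    if (0 == rank) {
        MPI_Put(&one, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);            /* complete the epoch */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
}}}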
This commit was SVN r8836.