Commit Graph

139 Commits

Author SHA1 Message Date
Galen Shipman
62ade993ca Separate finalize and close for the PML; this gives the PML a chance to complete any outstanding operations prior to close. Before this change we just called pml_finalize in pml_close, which causes problems if there are outstanding events that a BTL/MTL needs to progress during finalize. The problem is that MPI_COMM_WORLD and others were destroyed prior to closing the PML: pml_close would call pml_finalize, events would progress in the BTL, and these events expected MPI_COMM_WORLD to still be around.
This commit was SVN r16405.
2007-10-09 15:28:56 +00:00
Ralph Castain
54b2cf747e These changes were mostly captured in a prior RFC (except for #2 below) and are aimed specifically at improving startup performance and setting up the remaining modifications described in that RFC.
The commit has been tested for C/R and Cray operations, and on Odin (SLURM, rsh) and RoadRunner (TM). I tried to update all environments, but obviously could not test them. I know that Windows needs some work, and have highlighted what is known to be needed in the odls process component.

This represents a lot of work by Brian, Tim P, Josh, and myself, with much advice from Jeff and others. For posterity, I have appended a copy of the email describing the work that was done:

As we have repeatedly noted, the modex operation in MPI_Init is the single greatest consumer of time during startup. To date, we have executed that operation as an ORTE stage gate that held the process until a startup message containing all required modex info (and OOB contact info - see #3 below) could be sent to it. Each process would send its data to the HNP's registry, which assembled and sent the message when all processes had reported in.

In addition, ORTE had taken responsibility for monitoring process status as it progressed through a series of "stage gates". The process reported its status at each gate, and ORTE would then send a "release" message once all procs had reported in.

The incoming changes revamp these procedures in three ways:

1. eliminating the ORTE stage gate system and cleanly delineating responsibility between the OMPI and ORTE layers for MPI init/finalize. The modex stage gate (STG1) has been replaced by a collective operation in the modex itself that performs an allgather on the required modex info. The allgather is implemented using the orte_grpcomm framework since the BTL's are not active at that point. At the moment, the grpcomm framework only has a "basic" component analogous to OMPI's "basic" coll framework - I would recommend that the MPI team create additional, more advanced components to improve performance of this step.

The other stage gates have been replaced by orte_grpcomm barrier functions. We tried to use MPI barriers instead (since the BTL's are active at that point), but - as we discussed on the telecon - these are not currently true barriers so the job would hang when we fell through while messages were still in process. Note that the grpcomm barrier doesn't actually resolve that problem, but Brian has pointed out that we are unlikely to ever see it violated. Again, you might want to spend a little time on an advanced barrier algorithm as the one in "basic" is very simplistic.

Summarizing this change: ORTE no longer tracks process state nor has direct responsibility for synchronizing jobs. This is now done via collective operations within the MPI layer, albeit using ORTE collective communication services. I -strongly- urge the MPI team to implement advanced collective algorithms to improve the performance of this critical procedure.
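
To make the new flow concrete, here is a toy sketch of what the modex allgather accomplishes - every process contributes its entry and ends up holding everyone's. The buffer layout and names below are illustrative only, not the actual orte_grpcomm API:

{{{
/* Sketch: after the allgather, every rank holds every rank's modex
 * entry (name + BTL contact info). In Open MPI this step runs through
 * the orte_grpcomm framework because the BTLs are not yet active. */
#include <stdio.h>
#include <string.h>

#define NPROCS   4   /* stand-in job size */
#define BLOB_LEN 48  /* fixed-size modex entry, for the sketch only */

int main(void)
{
    char contrib[NPROCS][BLOB_LEN];   /* what each rank contributes */
    char gathered[NPROCS][BLOB_LEN];  /* what each rank ends up with */

    for (int rank = 0; rank < NPROCS; rank++)
        snprintf(contrib[rank], BLOB_LEN,
                 "rank %d: btl=tcp://10.0.0.%d", rank, rank + 1);

    /* the allgather step, modeled here as a simple copy of all entries */
    memcpy(gathered, contrib, sizeof contrib);

    for (int rank = 0; rank < NPROCS; rank++)
        printf("%s\n", gathered[rank]);
    return 0;
}
}}}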


2. reducing the volume of data exchanged during modex. Data in the modex consisted of the process name, the name of the node where that process is located (expressed as a string), plus a string representation of all contact info. The nodename was required in order for the modex to determine if the process was local or not - in addition, some people like to have it to print pretty error messages when a connection failed.

The size of this data has been reduced in three ways:

(a) reducing the size of the process name itself. The process name consisted of two 32-bit fields for the jobid and vpid. This is far larger than any current system, or system likely to exist in the near future, can support. Accordingly, the default size of these fields has been reduced to 16-bits, which means you can have 32k procs in each of 32k jobs. Since the daemons must have a vpid, and we require one daemon/node, this also restricts the default configuration to 32k nodes.

To support any future "mega-clusters", a configuration option --enable-jumbo-apps has been added. This option increases the jobid and vpid field sizes to 32-bits. Someday, if necessary, someone can add yet another option to increase them to 64-bits, I suppose.
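
Sketched in C, the field-size change looks roughly like this (the typedef names and the JUMBO_APPS macro are stand-ins for the real ORTE headers and configure plumbing):

{{{
/* Sketch of the name-size change: 16-bit jobid/vpid by default (the
 * commit keeps headroom, hence 32k rather than the full 64k range),
 * 32-bit fields when built with the jumbo-apps option. */
#include <stdint.h>
#include <stdio.h>

#ifdef JUMBO_APPS                 /* stand-in for --enable-jumbo-apps */
typedef uint32_t sketch_jobid_t;
typedef uint32_t sketch_vpid_t;
#else
typedef uint16_t sketch_jobid_t;
typedef uint16_t sketch_vpid_t;
#endif

typedef struct {
    sketch_jobid_t jobid;
    sketch_vpid_t  vpid;
} sketch_process_name_t;

int main(void)
{
    /* 4 bytes by default vs. 8 bytes with 32-bit fields */
    printf("name size: %zu bytes\n", sizeof(sketch_process_name_t));
    return 0;
}
}}}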

(b) replacing the string nodename with an integer nodeid. Since we have one daemon/node, the nodeid corresponds to the local daemon's vpid. This replaces an often lengthy string with only 2 (or at most 4) bytes, a substantial reduction.

(c) when the mca param requesting that nodenames be sent (to support pretty error messages) is set, a second mca param is now used to request the FQDN - otherwise, the domain name is stripped (by default) from the message to save space. If someone wants to combine those into a single param somehow (perhaps with an argument?), they are welcome to do so - I didn't want to alter what people are already using.

While these may seem like small savings, they actually amount to a significant impact when aggregated across the entire modex operation. Since every proc must receive the modex data regardless of the collective used to send it, just reducing the size of the process name removes nearly 400MBytes of communication from a 32k proc job (admittedly, much of this comm may occur in parallel). So it does add up pretty quickly.


3. routing RML messages to reduce connections. The default messaging system remains point-to-point - i.e., each proc opens a socket to every proc it communicates with and sends its messages directly. A new option uses the orteds as routers - i.e., each proc only opens a single socket to its local orted. All messages are sent from the proc to the orted, which forwards the message to the orted on the node where the intended recipient proc is located - that orted then forwards the message to its local proc (the recipient). This greatly reduces the connection storm we have encountered during startup.

It also has the benefit of removing the sharing of every proc's OOB contact info with every other proc. The orted routing tables are populated during launch since every orted gets a map of where every proc is being placed. Each proc, therefore, only needs to know the contact info for its local daemon, which is passed in via the environment when the proc is fork/exec'd by the daemon. This alone removes ~50 bytes/process of communication that was in the current STG1 startup message - so for our 32k proc job, this saves us roughly 32k*50 = 1.6MBytes sent to 32k procs = 51GBytes of messaging.
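
The relay path can be pictured as a simple table lookup; this is a sketch only - the real routed framework carries much more state, and the names below are illustrative:

{{{
/* Sketch of tree routing: each proc holds one socket to its local
 * orted; orteds forward using a map populated at launch time. */
#include <stdio.h>

#define NPROCS         8
#define PROCS_PER_NODE 4

static int proc_to_daemon[NPROCS];  /* proc vpid -> hosting daemon vpid */

static void route(int src, int dst)
{
    printf("proc %d -> orted %d -> orted %d -> proc %d\n",
           src, proc_to_daemon[src], proc_to_daemon[dst], dst);
}

int main(void)
{
    for (int p = 0; p < NPROCS; p++)
        proc_to_daemon[p] = p / PROCS_PER_NODE;  /* from the launch map */
    route(0, 7);   /* one socket per proc, regardless of job size */
    return 0;
}
}}}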

Note that you can use the new routing method by specifying -mca routed tree - if you so desire. This mode will become the default at some point in the future.
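
For example (the exact invocation is illustrative):

{{{
mpirun -np 1024 -mca routed tree ./my_app
}}}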


There are a few minor additional changes in the commit that I'll just note in passing:

* propagation of command line mca params to the orteds - fixes ticket #1073. See note there for details.

* requiring of "finalize" prior to "exit" for MPI procs - fixes ticket #1144. See note there for details.

* cleanup of some stale header files

This commit was SVN r16364.
2007-10-05 19:48:23 +00:00
Tim Prins
4033a40e4e Coding standards...
This commit was SVN r16118.
2007-09-13 14:00:59 +00:00
George Bosilca
7b3dcff267 Coverity: Limit the strcpy to the maximum length of the destination.
This commit was SVN r16107.
2007-09-12 18:03:53 +00:00
Jeff Squyres
ad784a9ab0 Make "simultaneous" be a size_t; there's already a check to ensure
that it is >= 1, so making it a size_t makes it easier to interact
with all the other size_t variables and removes a compiler warning.

This commit was SVN r15935.
2007-08-20 13:22:46 +00:00
Brian Barrett
af4e86c25f Update collectives selection logic to allow for multiple components to be
used at once (up to one unique collective module per collective function).
Matches r15795:15921 of the tmp/bwb-coll-select branch

This commit was SVN r15924.

The following SVN revisions from the original message are invalid or
inconsistent and therefore were not cross-referenced:
  r15795
  r15921
2007-08-19 03:37:49 +00:00
Rolf vandeVaart
797078115d Fix the case where mpi_preconnect_oob=1 and
mpi_preconnect_oob_simultaneous > np.  Need to scale back
simultaneous to equal np in those cases.  Reviewed by Brian.
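
The fix amounts to a clamp; a minimal sketch, with variable names assumed:

{{{
#include <stdio.h>
#include <stddef.h>

/* Sketch of the fix: never try to preconnect more peers at once than
 * there are processes in the job. */
static size_t clamp_simultaneous(size_t simultaneous, size_t np)
{
    return (simultaneous > np) ? np : simultaneous;
}

int main(void)
{
    printf("%zu\n", clamp_simultaneous(64, 16));   /* prints 16 */
    return 0;
}
}}}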

This commit fixes trac:1064.

This commit was SVN r15916.

The following Trac tickets were found above:
  Ticket 1064 --> https://svn.open-mpi.org/trac/ompi/ticket/1064
2007-08-17 20:18:42 +00:00
Brian Barrett
d166a2bb6d Change requested by Ralph -- Remove the dependency on GPR triggers for filling
in the OMPI proc structures.  For now, use an extension of the modex that is
keyed on strings.  Eventually, this should use the attribute put/get that is
part of the RSL interface.

This commit was SVN r15820.
2007-08-09 18:53:28 +00:00
Mohamad Chaarawi
59a7bf8a9f Merging in the Sparse Groups.
This commit includes config changes.

This commit was SVN r15764.
2007-08-04 00:41:26 +00:00
George Bosilca
8baeadb761 The PTLs are now long gone ...
This commit was SVN r15763.
2007-08-04 00:37:52 +00:00
Jeff Squyres
d3f008492f Introduce a new debugging MCA parameter:
mpi_show_mpi_alloc_mem_leaks

When activated, MPI_FINALIZE displays a list of memory allocations
from MPI_ALLOC_MEM that were not freed by MPI_FREE_MEM (in each MPI
process).

 * If set to a positive integer, display only that many leaks.
 * If set to a negative integer, display all leaks.
 * If set to 0, do not show any leaks.
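
For example, running with -mca mpi_show_mpi_alloc_mem_leaks 8 would flag a program like this sketch at MPI_FINALIZE (the program itself is illustrative):

{{{
/* Illustrative leak: memory obtained via MPI_ALLOC_MEM but never
 * returned via MPI_FREE_MEM; with the parameter set, MPI_FINALIZE
 * reports it. */
#include <mpi.h>

int main(int argc, char **argv)
{
    void *buf;
    MPI_Init(&argc, &argv);
    MPI_Alloc_mem(1024, MPI_INFO_NULL, &buf);
    /* ... use buf, but forget MPI_Free_mem(buf) ... */
    MPI_Finalize();   /* leak listed here when the param is set */
    return 0;
}
}}}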

This commit was SVN r15736.
2007-08-01 21:33:25 +00:00
Brian Barrett
58ee6b4e35 More documentation. Yay?
This commit was SVN r15622.
2007-07-25 21:01:10 +00:00
Brian Barrett
5b9fa7e998 reapply r15517 and r15520, which were removed in r15527 so that I could get
the RML/OOB merge in slightly easier

This commit was SVN r15530.

The following SVN revision numbers were found above:
  r15517 --> open-mpi/ompi@41977fcc95
  r15520 --> open-mpi/ompi@9cbc9df1b8
  r15527 --> open-mpi/ompi@2d17dd9516
2007-07-20 02:34:29 +00:00
Brian Barrett
39a6057fc6 A number of improvements / changes to the RML/OOB layers:
* General TCP cleanup for OPAL / ORTE
  * Simplifying the OOB by moving much of the logic into the RML
  * Allowing the OOB RML component to do routing of messages
  * Adding a component framework for handling routing tables
  * Moving the xcast functionality from the OOB base to its own framework

Includes merge from tmp/bwb-oob-rml-merge revisions:

    r15506, r15507, r15508, r15510, r15511, r15512, r15513

This commit was SVN r15528.

The following SVN revisions from the original message are invalid or
inconsistent and therefore were not cross-referenced:
  r15506
  r15507
  r15508
  r15510
  r15511
  r15512
  r15513
2007-07-20 01:34:02 +00:00
Brian Barrett
2d17dd9516 temporarily back out r15517 and r15520 so that I can get the RML / OOB changes
to cleanly apply

This commit was SVN r15527.

The following SVN revision numbers were found above:
  r15517 --> open-mpi/ompi@41977fcc95
2007-07-20 01:10:34 +00:00
Ralph Castain
41977fcc95 Remove the cellid field from the orte_process_name_t structure. This only affects a handful of files in itself, but...
Cleanup ALL instances of output involving the printing of orte_process_name_t structures using the ORTE_NAME_ARGS macro so that the number of fields and type of data match. Replace those values with a new macro/function pair ORTE_NAME_PRINT that outputs a string (using the new thread safe data capability) so that any future changes to the printing of those structures can be accomplished with a change to a single point.

Note that I could not possibly find every output that directly prints the orte_process_name_t fields, and only dealt with those that used ORTE_NAME_ARGS. Hence, you may still have a few outputs that bark during compilation. Also, I could only verify those that fall within environments I can compile on, so other environments may yield some minor warnings.
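
The call-site difference looks roughly like the sketch below; the stand-in name_print glosses over the real macro's thread-safe buffer mechanics:

{{{
#include <stdio.h>
#include <stdint.h>

typedef struct { uint16_t jobid; uint16_t vpid; } name_t;  /* simplified */

/* Stand-in for ORTE_NAME_PRINT: the format string lives in exactly one
 * place, so a change to the name structure touches a single function. */
static const char *name_print(const name_t *n)
{
    static char buf[32];   /* the real code uses thread-safe storage */
    snprintf(buf, sizeof buf, "[%u,%u]",
             (unsigned)n->jobid, (unsigned)n->vpid);
    return buf;
}

int main(void)
{
    name_t me = { 1, 42 };
    /* old style: field count baked into every call site */
    printf("old: proc [%u,%u]\n", (unsigned)me.jobid, (unsigned)me.vpid);
    /* new style: single point of change */
    printf("new: proc %s\n", name_print(&me));
    return 0;
}
}}}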

This commit was SVN r15517.
2007-07-19 20:56:46 +00:00
Jeff Squyres
3bc940ac27 Fix three things from r15474 (thanks to Brian for noticing):
* bml.h had a change that introduced a variable named "_order" to
   avoid a conflict with a local variable.  The namespace starting
   with _ belongs to the os/compiler/kernel/not us.  So we can't start
   symbols with _.  So I replaced it with arg_order, and also updated
   the threaded equivalent of the macro that was modified.
 * in btl_openib_proc.c, one opal_output accidentally had its string
   reverted from "ompi_modex_recv..." to
   "mca_pml_base_modex_recv....".  This was fixed.
 * The change to ompi/runtime/ompi_preconnect.c was entirely
   reverted; it was an artifact of debugging.

This commit was SVN r15475.

The following SVN revision numbers were found above:
  r15474 --> open-mpi/ompi@8ace07efed
2007-07-18 11:38:06 +00:00
Jeff Squyres
8ace07efed This commit brings in two major things:
1. Galen's fine-grain control of queue pair resources in the openib
   BTL.
2. Pasha's new implementation of asynchronous HCA event handling.

Pasha's new implementation doesn't take much explanation, but the new
"multifrag" stuff does.  

Note that "svn merge" was not used to bring this new code from the
/tmp/ib_multifrag branch -- something Bad happened in the periodic
trunk pulls on that branch making an actual merge back to the trunk
effectively impossible (i.e., lots and lots of arbitrary conflicts and
artificial changes).  :-(

== Fine-grain control of queue pair resources ==

Galen's fine-grain control of queue pair resources to the OpenIB BTL
(thanks to Gleb for fixing broken code and providing additional
functionality, Pasha for finding broken code, and Jeff for doing all
the svn work and regression testing).

Prior to this commit, the OpenIB BTL created two queue pairs: one for
eager size fragments and one for max send size fragments.  When the
use of the shared receive queue (SRQ) was specified (via "-mca
btl_openib_use_srq 1"), these QPs would use a shared receive queue for
receive buffers instead of the default per-peer (PP) receive queues
and buffers.  One consequence of this design is that receive buffer
utilization (the size of the data received as a percentage of the
receive buffer used for the data) was quite poor for a number of
applications.

The new design allows multiple QPs to be specified at runtime.  Each
QP can be setup to use PP or SRQ receive buffers as well as giving
fine-grained control over receive buffer size, number of receive
buffers to post, when to replenish the receive queue (low water mark)
and for SRQ QPs, the number of outstanding sends can also be
specified.  The following is an example of the syntax to describe QPs
to the OpenIB BTL using the new MCA parameter btl_openib_receive_queues:

{{{
-mca btl_openib_receive_queues \
     "P,128,16,4;S,1024,256,128,32;S,4096,256,128,32;S,65536,256,128,32"
}}}

Each QP description is delimited by ";" (semicolon) with individual
fields of the QP description delimited by "," (comma).  The above
example therefore describes 4 QPs.

The first QP is:

    P,128,16,4

Meaning: per-peer receive buffer QPs are indicated by a starting field
of "P"; the first QP (shown above) is therefore a per-peer based QP.
The second field indicates the size of the receive buffer in bytes
(128 bytes).  The third field indicates the number of receive buffers
to allocate to the QP (16).  The fourth field indicates the low
watermark for receive buffers at which time the BTL will repost
receive buffers to the QP (4).

The second QP is:

    S,1024,256,128,32

Shared receive queue based QPs are indicated by a starting field of
"S"; the second QP (shown above) is therefore a shared receive queue
based QP.  The second, third and fourth fields are the same as in the
per-peer based QP.  The fifth field is the number of outstanding sends
that are allowed at a given time on the QP (32).  This provides a
"good enough" mechanism of flow control for some regular communication
patterns.

QPs MUST be specified in ascending receive buffer size order.  This
requirement may be removed prior to 1.3 release.
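
A hedged sketch of parsing the grammar described above (';'-separated QPs, ','-separated fields); the struct and names are illustrative and error handling is omitted:

{{{
#include <stdio.h>
#include <string.h>

struct qp_desc {
    char     type;       /* 'P' per-peer, 'S' shared receive queue */
    unsigned buf_size;   /* receive buffer size in bytes */
    unsigned num_bufs;   /* receive buffers to post */
    unsigned low_water;  /* repost threshold */
    unsigned max_sends;  /* outstanding sends (SRQ QPs only) */
};

int main(void)
{
    const char *spec =
        "P,128,16,4;S,1024,256,128,32;S,4096,256,128,32;S,65536,256,128,32";

    for (const char *p = spec; p != NULL; ) {
        struct qp_desc qp = { 0, 0, 0, 0, 0 };
        int n = sscanf(p, "%c,%u,%u,%u,%u", &qp.type, &qp.buf_size,
                       &qp.num_bufs, &qp.low_water, &qp.max_sends);
        if (n >= 4) {
            printf("%c: %u-byte buffers x%u, low water %u",
                   qp.type, qp.buf_size, qp.num_bufs, qp.low_water);
            if ('S' == qp.type)
                printf(", %u outstanding sends", qp.max_sends);
            printf("\n");
        }
        p = strchr(p, ';');
        if (p) p++;          /* advance past the ';' QP delimiter */
    }
    return 0;
}
}}}

Splitting on ";" first and "," second mirrors the delimiters described above.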

This commit was SVN r15474.
2007-07-18 01:15:59 +00:00
Rich Graham
0991c3d5f5 move buffered send component cleanup out of the pml to ompi_mpi_finalize.
This commit was SVN r15463.
2007-07-17 14:50:52 +00:00
Rich Graham
de5670cd79 add missing header file - Thanks Brian.
This commit was SVN r15455.
2007-07-17 00:06:35 +00:00
Rich Graham
1a4ce2a961 move setting of the component used to manage buffered sends out of the
pmls, and into ompi_mpi_init.  This is the first of several steps to pull
buffered send management out of the pmls.

This commit was SVN r15451.
2007-07-16 21:52:25 +00:00
Sven Stork
804f3bee41 - export symbols that are required for the Fortran bindings
This commit was SVN r15439.
2007-07-16 13:23:57 +00:00
George Bosilca
b9db0a4c2d Remove a warning:
ompi-trunk/ompi/runtime/ompi_mpi_init.c:221: warning: `cmd_buffer' might be used uninitialized in this function

This commit was SVN r15397.
2007-07-13 06:20:44 +00:00
Ralph Castain
bd65f8ba88 Bring in an updated launch system for the orteds. This commit restores the ability to execute singletons and singleton comm_spawn, both in single node and multi-node environments.
Short description: major changes include -

1. singletons now fork/exec a local daemon to manage their operations.

2. the orte daemon code now resides in libopen-rte

3. daemons no longer use the orte triggering system during startup. Instead, they directly call back to their parent pls component to report ready to operate. A base function to count the callbacks has been provided.

I have modified all the pls components except xcpu and poe (don't understand either well enough to do it). Full functionality has been verified for rsh, SLURM, and TM systems. Compile has been verified for xgrid and gridengine.

This commit was SVN r15390.
2007-07-12 19:53:18 +00:00
Tim Prins
5b815ec94b fix deadlock in new modex code
This commit was SVN r15326.
2007-07-10 13:28:44 +00:00
Brian Barrett
8b9e8054fd Move modex from pml base to general ompi runtime, since it's used by more
than just the PML/BTLs these days.  Also clean up the code so that it
handles the situation where not all nodes register information for a given
node (rather than just spinning until that node sends information, like
we do today).

Includes r15234 and r15265 from the /tmp/bwb-modex branch.

This commit was SVN r15310.

The following SVN revisions from the original message are invalid or
inconsistent and therefore were not cross-referenced:
  r15234
  r15265
2007-07-09 17:16:34 +00:00
Bill D'Amico
9b5f73976d Bring Portable Linux Processor Affinity into trunk.
Changes paffinity interface to use a cpu mask for available/preferred cpus
rather than the current coarse grained paffinity that lets the OS choose
which processor.

Macros for setting and clearing masks are provided.
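
A sketch of what a mask-based interface looks like with standard bit operations; the macro names and mask size below are illustrative stand-ins, not the exact paffinity API:

{{{
#include <stdio.h>
#include <string.h>

/* Sketch of a cpu mask plus set/clear/test macros, analogous to what
 * the new interface provides. */
typedef struct { unsigned long bits[4]; } cpu_mask_t;  /* up to 256 cpus */

#define NBITS            (8 * sizeof(unsigned long))
#define MASK_ZERO(m)     memset((m), 0, sizeof(cpu_mask_t))
#define MASK_SET(c, m)   ((m)->bits[(c) / NBITS] |=  (1UL << ((c) % NBITS)))
#define MASK_CLR(c, m)   ((m)->bits[(c) / NBITS] &= ~(1UL << ((c) % NBITS)))
#define MASK_ISSET(c, m) (((m)->bits[(c) / NBITS] >> ((c) % NBITS)) & 1UL)

int main(void)
{
    cpu_mask_t mask;
    MASK_ZERO(&mask);
    MASK_SET(3, &mask);                 /* ask for cpu 3 specifically */
    printf("cpu 3: %lu, cpu 4: %lu\n",
           MASK_ISSET(3, &mask), MASK_ISSET(4, &mask));
    MASK_CLR(3, &mask);
    return 0;
}
}}}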

Solaris and Windows changes have not been made. The Solaris subdirectory has some
suggested changes - however, the relevant man pages for the Solaris 10 APIs
have some ambiguity regarding the order in which one creates and sets a processor
set. As we did not have access to a Solaris 10 machine, we could not test to
see the correct way to do the work under Solaris.

This commit was SVN r14887.
2007-06-05 22:07:30 +00:00
Josh Hursey
1e678c3f55 per conversation with Ralph and Jeff, take out the opal_init_only logic.
This commit moves the initialization/finalization of opal_event and opal_progress
to opal_init/finalize. These were previously init/final in ORTE which is an
abstraction violation. After talking about it we concluded that there are no
ordering issues that require these to be init/final in ORTE instead of OPAL.

I ran the IBM test suite against this commit and it didn't turn up any new
failures so I think it is good to go.

Let us know if this causes problems.

This commit was SVN r14773.
2007-05-24 21:54:58 +00:00
Brian Barrett
5f15becf4e Allow multiple connections to be started simultaneously when doing the OOB
wireup.  For small clusters or clusters with decent ARP lookup and
connect times, this will have marginal impact.  For systems with either
bad ARP lookup times or long connect times, increasing this number
to something much closer to SOMAXCONN (128 on most modern machines) will
result in a faster OOB wireup.  Don't set higher than SOMAXCONN or you
can end up with lots of connect() retries and we'll end up slower.

This commit was SVN r14742.
2007-05-23 21:35:44 +00:00
Ralph Castain
4fff584a68 Commit the orted-failed-to-start code. This correctly causes the system to detect the failure of an orted to start and allows the system to terminate all procs/orteds that *did* start.
The primary change that underlies all this is in the OOB. Specifically, the problem in the code until now has been that the OOB attempts to resolve an address when we call the "send" to an unknown recipient. The OOB would then wait forever if that recipient never actually started (and hence, never reported back its OOB contact info). In the case of an orted that failed to start, we would correctly detect that the orted hadn't started, but then we would attempt to order all orteds (including the one that failed to start) to die. This would cause the OOB to "hang" the system.

Unfortunately, revising how the OOB resolves addresses introduced a number of additional problems. Specifically, and most troublesome, was the fact that comm_spawn involved the immediate transmission of the rendezvous point from parent-to-child after the child was spawned. The current code used the OOB address resolution as a "barrier" - basically, the parent would attempt to send the info to the child, and then "hold" there until the child's contact info had arrived (meaning the child had started) and the send could be completed.

Note that this also caused comm_spawn to "hang" the entire system if the child never started... The app-failed-to-start helped improve that behavior - this code provides additional relief.

With this change, the OOB will return an ADDRESSEE_UNKNOWN error if you attempt to send to a recipient whose contact info isn't already in the OOB's hash tables. To resolve comm_spawn issues, we also now force the cross-sharing of connection info between parent and child jobs during spawn.

Finally, to aid in setting triggers to the right values, we introduce the "arith" API for the GPR. This function allows you to atomically change the value in a registry location (either divide, multiply, add, or subtract) by the provided operand. It is equivalent to first fetching the value using a "get", then modifying it, and then putting the result back into the registry via a "put".
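
A sketch of the arith semantics (the real GPR API, its signature, and its locking are not shown; everything below is a stand-in):

{{{
#include <pthread.h>
#include <stdio.h>

/* Sketch of "arith": fetch-modify-store done atomically, replacing the
 * racy get/modify/put sequence. The mutex stands in for the registry's
 * own synchronization. */
typedef enum { GPR_ADD, GPR_SUB, GPR_MUL, GPR_DIV } gpr_op_t;

static pthread_mutex_t reg_lock = PTHREAD_MUTEX_INITIALIZER;
static long registry_value;   /* one registry location */

static long gpr_arith(gpr_op_t op, long operand)
{
    long result;
    pthread_mutex_lock(&reg_lock);
    switch (op) {
    case GPR_ADD: registry_value += operand; break;
    case GPR_SUB: registry_value -= operand; break;
    case GPR_MUL: registry_value *= operand; break;
    case GPR_DIV: if (operand != 0) registry_value /= operand; break;
    }
    result = registry_value;   /* read before releasing the lock */
    pthread_mutex_unlock(&reg_lock);
    return result;
}

int main(void)
{
    printf("trigger count: %ld\n", gpr_arith(GPR_ADD, 1));
    return 0;
}
}}}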

This commit was SVN r14711.
2007-05-21 18:31:28 +00:00
Sven Stork
22af6d38e6 - UNexport symbols that shouldn't be needed outside the libraries
- replace #if/#endif with BEGIN/END_C_DECLS
- reformatting

This commit was SVN r14669.
2007-05-16 15:46:52 +00:00
Brian Barrett
a25ce44dc1 Clean up the preconnect code:
* Don't need the 2 process case -- we'll send an extra message, but
    at very little cost and less code is better.
  * Use COMPLETE sends instead of STANDARD sends so that the connection
    is fully established before we move on to the next connection.  The
    previous code was still causing minor connection flooding for huge
    numbers of processes.
  * mpi_preconnect_all now connects both OOB and MPI layers.  There's
    also mpi_preconnect_mpi and mpi_preconnect_oob should you want to
    be more specific.
  * Since we're only using the MCA parameters once at the beginning
    of time, no need for global constants.  Just do the quick param
    lookup right before the parameter is needed.  Save some of that
    global variable space for the next guy.
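
In MPI-level terms, a COMPLETE send behaves like a synchronous send; here is a sketch of one-at-a-time preconnection using MPI_Ssend as a stand-in for the PML's COMPLETE mode:

{{{
#include <mpi.h>

/* Sketch: MPI_Ssend does not return until the peer has matched, so
 * each connection is fully established before the next one starts,
 * avoiding the connection flooding described above. */
static void preconnect_all(MPI_Comm comm)
{
    int rank, size;
    char sbyte = 0, rbyte;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    for (int dist = 1; dist < size; dist++) {
        int to   = (rank + dist) % size;
        int from = (rank + size - dist) % size;
        MPI_Request req;
        /* post the receive first so the matching Ssend can complete */
        MPI_Irecv(&rbyte, 1, MPI_CHAR, from, 0, comm, &req);
        MPI_Ssend(&sbyte, 1, MPI_CHAR, to, 0, comm);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    preconnect_all(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}
}}}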

Fixes trac:963

This commit was SVN r14553.

The following Trac tickets were found above:
  Ticket 963 --> https://svn.open-mpi.org/trac/ompi/ticket/963
2007-05-01 04:49:36 +00:00
Ralph Castain
18b2dca51c Bring in the code for routing xcast stage gate messages via the local orteds. This code is inactive unless you specifically request it via an mca param oob_xcast_mode (can be set to "linear" or "direct"). Direct mode is the old standard method where we send messages directly to each MPI process. Linear mode sends the xcast message via the orteds, with the HNP sending the message to each orted directly.
There is a binomial algorithm in the code (i.e., the HNP would send to a subset of the orteds, which then relay it on according to the typical log-2 algo), but that has a bug in it so the code won't let you select it even if you tried (and the mca param doesn't show, so you'd *really* have to try).
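
For example, to select the orted relay mode (the invocation is illustrative):

{{{
mpirun -np 64 -mca oob_xcast_mode linear ./my_app
}}}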

This also involved a slight change to the oob.xcast API, so propagated that as required.

Note: this has *only* been tested on rsh, SLURM, and Bproc environments (now that it has been transferred to the OMPI trunk, I'll need to re-test it [only done rsh so far]). It should work fine on any environment that uses the ORTE daemons - anywhere else, you are on your own... :-)

Also, correct a mistake where the orte_debug_flag was declared an int, but the mca param was set as a bool. Move the storage for that flag to the orte/runtime/params.c and orte/runtime/params.h files appropriately.

This commit was SVN r14475.
2007-04-23 18:41:04 +00:00
Josh Hursey
8f119d9063 Closes trac:977
Fix for memory corruption in the restarted process stack. This stemmed from
the brute force method we were previously using. This commit fixes this by
using a lighter weight solution focused in the r2 BML instead of above the PML.
This is a more efficient and flexible solution, and it solves the original
problem.

In the process I pulled out the ft_event function in the tcp BTL and r2 BML
into a set of *_ft.[c|h] files just to keep any updates to these code paths
as isolated as possible to make merging easier on everyone.

This commit was SVN r14371.

The following SVN revision numbers were found above:
  r2 --> open-mpi/ompi@58fdc18855

The following Trac tickets were found above:
  Ticket 977 --> https://svn.open-mpi.org/trac/ompi/ticket/977
2007-04-14 02:06:05 +00:00
Rich Graham
f481722bdf move the code that sets the thread level information before the BTLs are
initialized, so that the BTLs have this information for correct setup.

This commit was SVN r14258.
2007-04-07 05:06:47 +00:00
George Bosilca
cb93b1d40d Deal with compiler warnings and size_t at the same time ... It's getting more
and more tricky !!!

This commit was SVN r14162.
2007-03-28 22:02:13 +00:00
George Bosilca
4bc69447b4 Setting a size_t to -1 leads to unexpected results ...
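
The pitfall: assigning -1 to an unsigned type wraps to the maximum value, so size checks written against it silently misbehave. For example:

{{{
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    size_t n = -1;   /* wraps to SIZE_MAX, not "minus one" */
    printf("n = %zu (SIZE_MAX = %zu)\n", n, (size_t)SIZE_MAX);
    /* any "n < limit" sanity check is now effectively always false */
    return 0;
}
}}}
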
This commit was SVN r14160.
2007-03-28 18:23:42 +00:00
Shiqing Fan
fb50a72e92 Unnecessary header removed.
This commit was SVN r14152.
2007-03-27 14:32:30 +00:00
Shiqing Fan
91cfb2f149 A few mismatched declarations are fixed, and several header files are added for Cygwin...
This commit was SVN r14151.
2007-03-27 14:17:25 +00:00
Josh Hursey
7c4ca3c420 remove some stale code
This commit was SVN r14134.
2007-03-23 14:11:12 +00:00
Josh Hursey
dadca7da88 Merging in the jjhursey-ft-cr-stable branch (r13912 : HEAD).
This merge adds Checkpoint/Restart support to Open MPI. The initial
frameworks and components support a LAM/MPI-like implementation.

This commit follows the risk assessment presented to the Open MPI core
development group on Feb. 22, 2007.

This commit closes trac:158

More details to follow.

This commit was SVN r14051.

The following SVN revisions from the original message are invalid or
inconsistent and therefore were not cross-referenced:
  r13912

The following Trac tickets were found above:
  Ticket 158 --> https://svn.open-mpi.org/trac/ompi/ticket/158
2007-03-16 23:11:45 +00:00
Brian Barrett
211ed6e852 Make the trunk look similar to v1.1 and v1.2, but return an error if we
can't find "me" in the list of procs, since we should always be in the
proc_world list, or something bad has happened...

This commit was SVN r14025.
2007-03-13 20:17:10 +00:00
Brian Barrett
f59d38dd81 fix stupid compiler warning
This commit was SVN r14024.
2007-03-13 19:45:26 +00:00
George Bosilca
533dfff56d Only do the preconnection stage if we found the local proc. It's mostly to
make some compilers complain less about uninitialized values.

This commit was SVN r13805.
2007-02-26 22:24:44 +00:00
Tim Prins
dbe82c70d6 Get rid of stale file, and remove the (unused) references to it.
This commit was SVN r13771.
2007-02-23 13:50:39 +00:00
Brian Barrett
727f64aecf It appears that SEND_COMPLETE on a 0 byte message with BTLs that don't
support SEND_IN_PLACE causes badness because the BTL tries to use the
not-exactly-complete convertor.  Don't need it in this situation anyway.

This commit was SVN r13700.
2007-02-19 02:43:26 +00:00
Brian Barrett
8b28e5b33d Allow the OOB to connect between all MPI applications during MPI_INIT
without also establishing MPI connectivity. 

This commit was SVN r13595.
2007-02-09 20:17:37 +00:00
Brian Barrett
262cbbc5c9 Back out r13593, which contained a change that shouldn't be committed.
This commit was SVN r13594.

The following SVN revision numbers were found above:
  r13593 --> open-mpi/ompi@81472363ea
2007-02-09 20:13:02 +00:00
Brian Barrett
81472363ea Allow the OOB to connect between all MPI applications during MPI_INIT
without also establishing MPI connectivity.

This commit was SVN r13593.
2007-02-09 20:11:40 +00:00
Ralph Castain
c7be9a7121 Complete backout of prior sched_yield and paffinity changes
This commit was SVN r13530.
2007-02-07 14:22:37 +00:00