Commit graph

15 commits

Author SHA1 Message Date
Ralph Castain
54b2cf747e These changes were mostly captured in a prior RFC (except for #2 below) and are aimed specifically at improving startup performance and setting up the remaining modifications described in that RFC.
The commit has been tested for C/R and Cray operations, and on Odin (SLURM, rsh) and RoadRunner (TM). I tried to update all environments, but obviously could not test them. I know that Windows needs some work, and have highlighted what is known to be needed in the odls process component.

This represents a lot of work by Brian, Tim P, Josh, and myself, with much advice from Jeff and others. For posterity, I have appended a copy of the email describing the work that was done:

As we have repeatedly noted, the modex operation in MPI_Init is the single greatest consumer of time during startup. To date, we have executed that operation as an ORTE stage gate that held the process until a startup message containing all required modex info (and OOB contact info - see #3 below) could be sent to it. Each process would send its data to the HNP's registry, which assembled and sent the message when all processes had reported in.

In addition, ORTE had taken responsibility for monitoring process status as it progressed through a series of "stage gates". The process reported its status at each gate, and ORTE would then send a "release" message once all procs had reported in.

The incoming changes revamp these procedures in three ways:

1. eliminating the ORTE stage gate system and cleanly delineating responsibility between the OMPI and ORTE layers for MPI init/finalize. The modex stage gate (STG1) has been replaced by a collective operation in the modex itself that performs an allgather on the required modex info. The allgather is implemented using the orte_grpcomm framework since the BTLs are not active at that point. At the moment, the grpcomm framework only has a "basic" component analogous to OMPI's "basic" coll framework - I would recommend that the MPI team create additional, more advanced components to improve performance of this step.
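
Conceptually, the new modex is just an allgather of each proc's contact
data. As a rough illustration only - written with MPI collectives for
readability, even though the real code must use orte_grpcomm because the
BTLs are not usable at that point - the data flow looks something like
this (all names below are illustrative, not the actual implementation):

    /* Illustrative analogue of the modex allgather, showing the data flow
     * only.  The real implementation uses orte_grpcomm, since the BTLs
     * (and hence MPI collectives) are not yet available during the modex. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each proc packs its own "modex entry" (here just a string). */
        char mine[64];
        int mylen = snprintf(mine, sizeof(mine),
                             "contact-info-for-rank-%d", rank) + 1;

        /* Step 1: everyone learns the size of everyone else's entry. */
        int *lens = malloc(size * sizeof(int));
        MPI_Allgather(&mylen, 1, MPI_INT, lens, 1, MPI_INT, MPI_COMM_WORLD);

        /* Step 2: everyone receives every entry. */
        int *displs = malloc(size * sizeof(int));
        int total = 0;
        for (int i = 0; i < size; ++i) { displs[i] = total; total += lens[i]; }
        char *all = malloc(total);
        MPI_Allgatherv(mine, mylen, MPI_CHAR,
                       all, lens, displs, MPI_CHAR, MPI_COMM_WORLD);

        /* Every proc now holds every other proc's entry - the property the
         * old STG1 stage gate used to provide. */
        free(all); free(displs); free(lens);
        MPI_Finalize();
        return 0;
    }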

The other stage gates have been replaced by orte_grpcomm barrier functions. We tried to use MPI barriers instead (since the BTLs are active at that point), but - as we discussed on the telecon - these are not currently true barriers, so the job would hang when we fell through while messages were still in flight. Note that the grpcomm barrier doesn't actually resolve that problem, but Brian has pointed out that we are unlikely to ever see it violated. Again, you might want to spend a little time on an advanced barrier algorithm as the one in "basic" is very simplistic.

Summarizing this change: ORTE no longer tracks process state nor has direct responsibility for synchronizing jobs. This is now done via collective operations within the MPI layer, albeit using ORTE collective communication services. I -strongly- urge the MPI team to implement advanced collective algorithms to improve the performance of this critical procedure.


2. reducing the volume of data exchanged during modex. Data in the modex consisted of the process name, the name of the node where that process is located (expressed as a string), plus a string representation of all contact info. The nodename was required in order for the modex to determine if the process was local or not - in addition, some people like to have it to print pretty error messages when a connection fails.

The size of this data has been reduced in three ways:

(a) reducing the size of the process name itself. The process name consisted of two 32-bit fields for the jobid and vpid. This is far larger than any current system, or system likely to exist in the near future, can support. Accordingly, the default size of these fields has been reduced to 16-bits, which means you can have 32k procs in each of 32k jobs. Since the daemons must have a vpid, and we require one daemon/node, this also restricts the default configuration to 32k nodes.

To support any future "mega-clusters", a configuration option --enable-jumbo-apps has been added. This option increases the jobid and vpid field sizes to 32-bits. Someday, if necessary, someone can add yet another option to increase them to 64-bits, I suppose.

(b) replacing the string nodename with an integer nodeid. Since we have one daemon/node, the nodeid corresponds to the local daemon's vpid. This replaces an often lengthy string with only 2 (or at most 4) bytes, a substantial reduction.

(c) when the mca param requesting that nodenames be sent (to support pretty error messages) is set, a second mca param is now used to request the FQDN - otherwise, the domain name is stripped (by default) from the message to save space. If someone wants to combine those into a single param somehow (perhaps with an argument?), they are welcome to do so - I didn't want to alter what people are already using.
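
To illustrate the default domain stripping in (c), a minimal sketch of the
idea (a hypothetical helper, not the actual routine used in the code):

    #include <string.h>

    /* Strip the domain portion from a hostname in place, e.g.
     * "node123.cluster.example.org" -> "node123".  Hypothetical helper
     * illustrating the default (non-FQDN) behavior described in (c). */
    static void strip_domain(char *hostname)
    {
        char *dot = strchr(hostname, '.');
        if (NULL != dot) {
            *dot = '\0';
        }
    }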

While these may seem like small savings, they actually amount to a significant impact when aggregated across the entire modex operation. Since every proc must receive the modex data regardless of the collective used to send it, just reducing the size of the process name removes nearly 400MBytes of communication from a 32k proc job (admittedly, much of this comm may occur in parallel). So it does add up pretty quickly.
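
To make (a) and (b) concrete, here is an illustrative sketch of the
reduced-size process name and nodeid. The type names and the jumbo-apps
macro below are placeholders, not the actual ORTE definitions:

    #include <stdint.h>

    /* Placeholder for whatever --enable-jumbo-apps actually defines. */
    #ifdef EXAMPLE_JUMBO_APPS
    typedef int32_t example_jobid_t;   /* jumbo: ~2 billion jobs      */
    typedef int32_t example_vpid_t;    /* jumbo: ~2 billion procs/job */
    #else
    typedef int16_t example_jobid_t;   /* default: 32k jobs           */
    typedef int16_t example_vpid_t;    /* default: 32k procs per job  */
    #endif

    /* (a) the process name: 4 bytes by default, 8 with jumbo apps. */
    typedef struct {
        example_jobid_t jobid;
        example_vpid_t  vpid;
    } example_process_name_t;

    /* (b) the string nodename in the modex is replaced by the local
     * daemon's vpid: 2 bytes by default, at most 4 with jumbo apps. */
    typedef example_vpid_t example_nodeid_t;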


3. routing RML messages to reduce connections. The default messaging system remains point-to-point - i.e., each proc opens a socket to every proc it communicates with and sends its messages directly. A new option uses the orteds as routers - i.e., each proc only opens a single socket to its local orted. All messages are sent from the proc to the orted, which forwards the message to the orted on the node where the intended recipient proc is located - that orted then forwards the message to its local proc (the recipient). This greatly reduces the connection storm we have encountered during startup.

It also has the benefit of removing the sharing of every proc's OOB contact with every other proc. The orted routing tables are populated during launch since every orted gets a map of where every proc is being placed. Each proc, therefore, only needs to know the contact info for its local daemon, which is passed in via the environment when the proc is fork/exec'd by the daemon. This alone removes ~50 bytes/process of communication that was in the current STG1 startup message - so for our 32k proc job, this saves us roughly 32k*50 = 1.6MBytes sent to 32k procs = 51GBytes of messaging.

Note that you can use the new routing method by specifying "-mca routed tree", if you so desire. This mode will become the default at some point in the future.
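
For example (the application name here is just a placeholder):

    mpirun -np 4 -mca routed tree ./my_mpi_app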


There are a few minor additional changes in the commit that I'll just note in passing:

* propagation of command line mca params to the orteds - fixes ticket #1073. See note there for details.

* requiring of "finalize" prior to "exit" for MPI procs - fixes ticket #1144. See note there for details.

* cleanup of some stale header files

This commit was SVN r16364.
2007-10-05 19:48:23 +00:00
Mohamad Chaarawi
59a7bf8a9f Merging in the Sparse Groups..
This commit includes config changes..

This commit was SVN r15764.
2007-08-04 00:41:26 +00:00
Jeff Squyres
d3f008492f Introduce a new debugging MCA parameter:
mpi_show_mpi_alloc_mem_leaks

When activated, MPI_FINALIZE displays a list of memory allocations
from MPI_ALLOC_MEM that were not freed by MPI_FREE_MEM (in each MPI
process).

 * If set to a positive integer, display only that many leaks.
 * If set to a negative integer, display all leaks.
 * If set to 0, do not show any leaks.
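
For illustration, a toy program like the following would produce one leak
report at MPI_FINALIZE when the parameter is enabled (the program is just
an example, not part of this commit):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        void *buf;
        MPI_Init(&argc, &argv);
        /* Allocate but "forget" to call MPI_FREE_MEM: with
         * mpi_show_mpi_alloc_mem_leaks set to a nonzero value, this
         * allocation is reported during MPI_Finalize. */
        MPI_Alloc_mem(1024, MPI_INFO_NULL, &buf);
        MPI_Finalize();
        return 0;
    }

Running it with "-mca mpi_show_mpi_alloc_mem_leaks -1" would display all
such leaks.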

This commit was SVN r15736.
2007-08-01 21:33:25 +00:00
Brian Barrett
a25ce44dc1 Clean up the preconnect code:
* Don't need the 2 process case -- we'll send an extra message, but
    at very little cost and less code is better.
  * Use COMPLETE sends instead of STANDARD sends so that the connection
    is fully established before we move on to the next connection.  The
    previous code was still causing minor connection flooding for huge
    numbers of processes.  (A rough MPI-level analogue is sketched after
    this list.)
  * mpi_preconnect_all now connects both OOB and MPI layers.  There's
    also mpi_preconnect_mpi and mpi_preconnect_oob should you want to
    be more specific.
  * Since we're only using the MCA parameters once at the beginning
    of time, no need for global constants.  Just do the quick param
    lookup right before the parameter is needed.  Save some of that
    global variable space for the next guy.
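
As a rough MPI-level analogue of the connect-one-peer-at-a-time idea above
(the real code drives the PML directly; everything here is illustrative):

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size, n = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Pre-post a receive from every peer so the synchronous sends
         * below can always complete without deadlock. */
        MPI_Request *reqs = malloc((size > 1 ? size - 1 : 1) * sizeof(*reqs));
        for (int peer = 0; peer < size; ++peer) {
            if (peer == rank) continue;
            MPI_Irecv(NULL, 0, MPI_BYTE, peer, 0, MPI_COMM_WORLD, &reqs[n++]);
        }

        /* Touch peers in an offset order so everyone isn't hammering
         * rank 0 first; MPI_Ssend stands in for a COMPLETE send: it does
         * not return until the peer has started receiving, so each
         * connection is known to be up before we move to the next one. */
        for (int i = 1; i < size; ++i) {
            int peer = (rank + i) % size;
            MPI_Ssend(NULL, 0, MPI_BYTE, peer, 0, MPI_COMM_WORLD);
        }

        MPI_Waitall(n, reqs, MPI_STATUSES_IGNORE);
        free(reqs);
        MPI_Finalize();
        return 0;
    }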

Fixes trac:963

This commit was SVN r14553.

The following Trac tickets were found above:
  Ticket 963 --> https://svn.open-mpi.org/trac/ompi/ticket/963
2007-05-01 04:49:36 +00:00
Brian Barrett
8b28e5b33d Allow the OOB to connect between all MPI applications during MPI_INIT
without also establishing MPI connectivity. 

This commit was SVN r13595.
2007-02-09 20:17:37 +00:00
Brian Barrett
262cbbc5c9 Back out r13593, which contained a change that shouldn't be committed.
This commit was SVN r13594.

The following SVN revision numbers were found above:
  r13593 --> open-mpi/ompi@81472363ea
2007-02-09 20:13:02 +00:00
Brian Barrett
81472363ea Allow the OOB to connect between all MPI applications during MPI_INIT
without also establishing MPI connectivity.

This commit was SVN r13593.
2007-02-09 20:11:40 +00:00
Jeff Squyres
52ca6cf86c The mpi_leave_pinned and mpi_leave_pinned_pipeline MCA parameters were
needlessly registered in multiple different places, and none of them
had a good help string.  There was also an inconsistent check for
setting both mpi_leave_pinned and mpi_leave_pinned_pipeline (i.e., it
was only in ob1).  This commit moves the registration of these params
to one central place (ompi/runtime/ompi_mpi_params.c, with all other
mpi_* MCA params) and uses globals to propagate the values as
relevant.  The error check was also moved to the central location to
ensure consistency everywhere.

This commit was SVN r13226.
2007-01-21 14:02:06 +00:00
Jeff Squyres
b6c6d9a2b7 Bring over r10877 and r10881 from the /tmp/tbird branch:
r10877:
add a warm-up connection option. Of course this only warms up the first
eager BTL, but this should be adequate for now.

r10881:
Consulted with Galen and did a few things:

- Fix the algorithm to actually make the connections that we want
- Rename the MCA param to mpi_preconnect_all
- Cleanup the code a bit:
  - move the logic to a separate .c file
  - check return codes properly

This commit was SVN r11114.

The following SVN revisions from the original message are invalid or
inconsistent and therefore were not cross-referenced:
  r10877
  r10881
2006-08-04 14:41:31 +00:00
Jeff Squyres
51c5516815 Add a new MCA parameter: mpi_keep_peer_hostnames. If this is nonzero,
(which is currently the default, although we may argue over this later
:-) ), a new field in the ompi_proc_t named proc_hostname will have
the string hostname of that peer.  If 0, this field will be NULL.

This allows for printing nicer error messages in environments where
peer hostnames are not otherwise easily obtainable, such as the mvapi
BTL (requested by Sandia, who has both a *huge* number of nodes and
6GB of RAM per node, so they don't care about the extra memory usage
;-) ).

This commit was SVN r9902.
2006-05-11 19:46:21 +00:00
Brian Barrett
becc55abf6 * add missing extern in header file
This commit was SVN r9493.
2006-03-31 02:45:06 +00:00
Jeff Squyres
fd61d78599 Add two MCA parameters to the MPI level to control behavior during
MPI_ABORT.  From the ompi_info output:

       MCA mpi: parameter "mpi_abort_delay" (current value: "0")
                If nonzero, print out an identifying message when
                MPI_ABORT is invoked (hostname, PID of the process
                that called MPI_ABORT) and delay for that many seconds
                before exiting (a negative delay value means to never
                abort).  This allows attaching of a debugger before
                quitting the job.
       MCA mpi: parameter "mpi_abort_print_stack" (current value: "0")
                If nonzero, print out a stack trace when MPI_ABORT is
                invoked
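
For example, to get the identifying message, a stack trace, and a window
for attaching a debugger on abort (the command line is illustrative and
the application name is a placeholder):

    mpirun -np 4 -mca mpi_abort_delay 60 -mca mpi_abort_print_stack 1 ./my_mpi_app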

This commit was SVN r9487.
2006-03-31 00:31:15 +00:00
Jeff Squyres
42ec26e640 Update the copyright notices for IU and UTK.
This commit was SVN r7999.
2005-11-05 19:57:48 +00:00
Jeff Squyres
f5cc86fa07 - Add MCA param mpi_paffinity_alone, which, when nonzero, will assume
that this ORTE job is the only one on the nodes involved, and if
  told what processors to assign the processes to, will bind MPI
  processes to specific processors.
- Convert #include's to new style
- Convert some <tab>'s to spaces

This commit was SVN r6904.
2005-08-16 16:17:52 +00:00
Jeff Squyres
35c141aef6 While we're moving directories around, move ompi/mpi/runtime ->
ompi/runtime, for consistency and parallel-ness with orte/runtime.
Also remove a few useless #includes along the way.

This commit was SVN r6317.
2005-07-03 12:07:29 +00:00