openmpi/orte/runtime
Ralph Castain aec5cd08bd Per the PMIx RFC:
WHAT:    Merge the PMIx branch into the devel repo, creating a new
               OPAL “pmix” framework to abstract PMI support for all RTEs.
               Replace the ORTE daemon-level collectives with a new PMIx
               server and update the ORTE grpcomm framework to support
               server-to-server collectives

WHY:      We’ve had problems dealing with variations in PMI implementations,
               and need to extend the existing PMI definitions to meet exascale
               requirements.

WHEN:   Mon, Aug 25

WHERE:  https://github.com/rhc54/ompi-svn-mirror.git

Several community members have been working on a refactoring of the current PMI support within OMPI. Although the APIs are common, Slurm and Cray implement a different range of capabilities, and package them differently. For example, Cray provides an integrated PMI-1/2 library, while Slurm separates the two and requires the user to specify the one to be used at runtime. In addition, several bugs in the Slurm implementations have caused problems requiring extra coding.

All this has led to a slew of #if’s in the PMI code and bugs when the corner-case logic for one implementation accidentally traps the other. Extending this support to other implementations would have increased this complexity to an unacceptable level.

Accordingly, we have:

* created a new OPAL “pmix” framework to abstract the PMI support, with separate components for Cray, Slurm PMI-1, and Slurm PMI-2 implementations.

* replaced the current ORTE grpcomm daemon-based collective operation with an integrated PMIx server, and updated the grpcomm APIs to provide more flexible, multi-algorithm support for collective operations. At this time, only the xcast and allgather operations are supported.

* replaced the current global collective id with a signature based on the names of the participating procs. This allows an unlimited number of collectives to be executed by any group of processes, subject to the requirement that only one collective can be active at a time for a given combination of procs. Note that a proc can be involved in any number of simultaneous collectives - it is the specific combination of procs that is subject to the constraint (a sketch of this keying scheme appears after this list).

* removed the prior OMPI/OPAL modex code

* added new macros for executing modex send/recv to simplify use of the new APIs. The send macros allow the caller to specify whether or not the BTL supports async modex operations - if so, and if the active PMIx component supports it, the non-blocking “fence” operation is used. Otherwise, we default to the full blocking modex exchange we currently perform (see the sketch after this list).

* retained the current flag that directs us to use a blocking fence operation, but only to retrieve data upon demand
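
As a companion to the collective-signature bullet above, here is a minimal, hypothetical sketch of keying a collective on the set of participating process names rather than on a single global id. The types and the signature_match() helper are illustrative stand-ins, not the actual ORTE data structures:

    /* Hypothetical illustration only -- not the real ORTE types. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct {
        uint32_t jobid;
        uint32_t vpid;
    } proc_name_t;                  /* stand-in for a process name */

    typedef struct {
        proc_name_t *procs;         /* participants, kept in a canonical (sorted) order */
        size_t       nprocs;
    } coll_signature_t;             /* stand-in for the new signature */

    /* Two collectives collide only when they involve exactly the same set of
     * procs; a single proc may appear in any number of live signatures. */
    int signature_match(const coll_signature_t *a, const coll_signature_t *b)
    {
        return a->nprocs == b->nprocs &&
               0 == memcmp(a->procs, b->procs, a->nprocs * sizeof(proc_name_t));
    }

    int main(void)
    {
        proc_name_t set[2] = { {0, 0}, {0, 1} };
        coll_signature_t a = { set, 2 }, b = { set, 2 };
        printf("same participants -> match = %d\n", signature_match(&a, &b));
        return 0;
    }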
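
Similarly, a hedged sketch of the send-side decision described in the modex bullet: the async “fence” is used only when both the BTL and the active PMIx component allow it, otherwise the full blocking exchange. The capability flag and the two fence helpers are hypothetical names, not the real ORTE/OPAL API:

    #include <stdbool.h>
    #include <stdio.h>

    static bool pmix_supports_nonblocking_fence = true;  /* assumed capability flag */

    static int post_nonblocking_fence(void)     { return 0; }  /* stub: post the barrier, return at once */
    static int blocking_fence_and_collect(void) { return 0; }  /* stub: full blocking modex */

    /* Choose the exchange style for a modex send. */
    static int modex_exchange(bool btl_supports_async_modex)
    {
        if (btl_supports_async_modex && pmix_supports_nonblocking_fence) {
            return post_nonblocking_fence();     /* data retrieved later, on demand */
        }
        return blocking_fence_and_collect();     /* current default behavior */
    }

    int main(void)
    {
        printf("async path rc = %d\n", modex_exchange(true));
        printf("blocking path rc = %d\n", modex_exchange(false));
        return 0;
    }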

This commit was SVN r32570.
2014-08-21 18:56:47 +00:00
data_type_support Per the PMIx RFC: 2014-08-21 18:56:47 +00:00
help-orte-runtime.txt As per the email discussion, revise the sparse handling of hostnames so that we avoid potential infinite loops while allowing large-scale users to improve their startup time: 2013-08-20 18:59:36 +00:00
Makefile.am Use the correct abstraction layer name for the data dirs 2014-05-08 14:32:24 +00:00
orte_cr.c MCA/base: Add new MCA variable system 2013-03-27 21:09:41 +00:00
orte_cr.h Correct several export declarations. 2011-08-15 09:45:51 +00:00
orte_data_server.c As per the RFC, bring in the ORTE async progress code and the rewrite of OOB: 2013-08-22 16:37:40 +00:00
orte_data_server.h Merge the ORTE devel branch into the main trunk. Details of what this means will be circulated separately. 2008-02-28 01:57:57 +00:00
orte_finalize.c Per the PMIx RFC: 2014-08-21 18:56:47 +00:00
orte_globals.c Per the PMIx RFC: 2014-08-21 18:56:47 +00:00
orte_globals.h Per the PMIx RFC: 2014-08-21 18:56:47 +00:00
orte_info_support.c Per RFC add initial support for the MPI 3.0 tools interface. 2013-04-24 15:59:23 +00:00
orte_info_support.h Update OMPI frameworks to use the MCA framework system. 2013-03-27 21:17:31 +00:00
orte_init.c Per the PMIx RFC: 2014-08-21 18:56:47 +00:00
orte_locks.c Start reducing our dependency on the event library by removing at least one instance where we use it to redirect the program counter. Rolf reported occasional hangs of mpirun in very specific circumstances after all daemons were done. A review of MTT results indicates this may have been happening more generally in a small fraction of cases. 2010-07-17 21:03:27 +00:00
orte_locks.h Start reducing our dependency on the event library by removing at least one instance where we use it to redirect the program counter. Rolf reported occasional hangs of mpirun in very specific circumstances after all daemons were done. A review of MTT results indicates this may have been happening more generally in a small fraction of cases. 2010-07-17 21:03:27 +00:00
orte_mca_params.c Per the PMIx RFC: 2014-08-21 18:56:47 +00:00
orte_quit.c Cleanup a set of typos on the orte_get_attribute call 2014-06-03 20:36:38 +00:00
orte_quit.h Sorry for mid-day commit, but I had promised on the call to do this upon my return. 2012-04-06 14:23:13 +00:00
orte_wait.c Remove useless variables. 2014-07-03 00:30:54 +00:00
orte_wait.h Revert r32222, r32210, and r32203 as they created a problem when daemon collectives did not involve app procs on every node. Instead, modify the ompi/mca/rte/orte/rte_orte.h to add a new function that allows apps to request new daemon collective ids for use in barrier and modex operations. This will only appear in ORTE-based installations, but it is only being used by a couple of researchers at the moment. 2014-07-15 03:48:00 +00:00
runtime_internals.h Modify the accounting system to recycle jobids. Properly recover resources from nodes and jobs upon completion. Adjustments in several places were required to deal with sparsely populated job, node, and proc arrays as a result of this change. 2009-03-03 16:39:13 +00:00
runtime.h As per the RFC, bring in the ORTE async progress code and the rewrite of OOB: 2013-08-22 16:37:40 +00:00