openmpi/ompi/mca/mtl/mxm
Ralph Castain 611d7f9f6b When we direct launch an application, we rely on PMI for wireup support. In doing so, we lose the de facto data compression we get from the ORTE modex since we no longer get all the wireup info from every proc in a single blob. Instead, we have to iterate over all the procs, calling PMI_KVS_get for every value we require.
This creates really bad scaling behavior. Users have found a nearly 20% launch time differential between mpirun and PMI, with PMI being the slower method. Some of the problem is attributable to poor exchange algorithms in RMs like Slurm and Alps, but we make things worse by calling "get" so many times.

Nathan (with a bit of advice from me) has attempted to alleviate this problem by reducing the number of "get" calls. This required the following changes:

* upon first request for data, have the OPAL db pmi component fetch and decode *all* the info from a given remote proc. It turned out we weren't caching the info, so we would continually re-request it and decode only the piece we needed for the immediate request. We now decode all the info and push it into the db hash component for local storage - all subsequent retrievals are then fulfilled locally (a sketch of this pattern follows the list)

* reduced the amount of data by eliminating the exchange of the OMPI_ARCH value when heterogeneity is not enabled. This was used solely as a check so we would error out if the system wasn't actually homogeneous, which was fine when we thought the check was free. Unfortunately, at large scale and with direct launch, there is a non-zero cost to making this test. We are open to finding a compromise (perhaps turning the test off if requested?) if people feel strongly about performing the test

* reduced the amount of RTE data being automatically fetched, fetching the rest only upon request. In particular, we no longer immediately fetch the hostname (which is only used for error reporting), but instead get it when needed. Likewise for the RML URI, as that info is only required in some (not all) environments. In addition, we no longer fetch the locality unless required, relying instead on the PMI clique info to tell us who is on our local node (if additional info is required, the fetch is performed when a modex_recv is issued); a sketch of the clique-based check appears below.
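
To make the caching idea in the first bullet concrete, here is a minimal sketch of the fetch-once-and-cache pattern against the PMI-1 KVS API. It is *not* the actual opal db pmi code: the key layout ("proc.<rank>.nsegs" / "proc.<rank>.<n>"), the fixed-size local cache, and the db_fetch() entry point are all hypothetical stand-ins for the real components.

    /* Minimal sketch of fetch-once-and-cache over the PMI-1 KVS.
     * NOT the actual opal db pmi component: the key layout, the
     * fixed-size cache, and db_fetch() are hypothetical. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdbool.h>
    #include <pmi.h>

    /* trivial local store standing in for the db hash component */
    #define CACHE_MAX 1024
    static struct { int rank; char key[64]; char value[256]; } cache[CACHE_MAX];
    static int cache_n = 0;

    static void cache_put(int rank, const char *key, const char *value)
    {
        if (cache_n < CACHE_MAX) {
            cache[cache_n].rank = rank;
            snprintf(cache[cache_n].key, sizeof(cache[cache_n].key), "%s", key);
            snprintf(cache[cache_n].value, sizeof(cache[cache_n].value), "%s", value);
            cache_n++;
        }
    }

    static bool cache_get(int rank, const char *key, char *value, int len)
    {
        for (int i = 0; i < cache_n; i++) {
            if (cache[i].rank == rank && 0 == strcmp(cache[i].key, key)) {
                snprintf(value, len, "%s", cache[i].value);
                return true;
            }
        }
        return false;
    }

    /* pull *everything* the remote proc published, in one pass, and cache it */
    static int fetch_all_from_proc(const char *kvsname, int rank)
    {
        int max_val, nsegs, rc;
        char key[64];

        if (PMI_SUCCESS != (rc = PMI_KVS_Get_value_length_max(&max_val))) return rc;
        char value[max_val];

        /* one get tells us how many segments this proc committed */
        snprintf(key, sizeof(key), "proc.%d.nsegs", rank);
        if (PMI_SUCCESS != (rc = PMI_KVS_Get(kvsname, key, value, max_val))) return rc;
        nsegs = atoi(value);

        for (int i = 0; i < nsegs; i++) {
            snprintf(key, sizeof(key), "proc.%d.%d", rank, i);
            if (PMI_SUCCESS != (rc = PMI_KVS_Get(kvsname, key, value, max_val))) return rc;
            cache_put(rank, key, value);   /* real code would decode the blob here */
        }
        return PMI_SUCCESS;
    }

    /* caller-facing lookup: serve from the cache, fetching the whole proc on first miss */
    int db_fetch(const char *kvsname, int rank, const char *key, char *value, int len)
    {
        if (cache_get(rank, key, value, len)) return PMI_SUCCESS;
        int rc = fetch_all_from_proc(kvsname, rank);
        if (PMI_SUCCESS != rc) return rc;
        return cache_get(rank, key, value, len) ? PMI_SUCCESS : PMI_FAIL;
    }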

Again, all this only impacts direct launch - all the info is provided when launched via mpirun, as there is no added cost to getting it.
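
On the locality point above, here is a rough sketch of using the PMI clique info to decide which ranks share our node, so that no locality value needs to be fetched at all. It assumes PMI-1; the is_local_peer() helper is an illustration, not ORTE/OPAL code.

    /* Sketch: use PMI-1 clique info to learn which ranks share our node,
     * instead of exchanging locality through the modex.  is_local_peer()
     * is a hypothetical illustration, not OMPI code. */
    #include <stdlib.h>
    #include <stdbool.h>
    #include <pmi.h>

    static int *local_ranks = NULL;
    static int local_count = 0;

    static int load_clique(void)
    {
        int rc = PMI_Get_clique_size(&local_count);
        if (PMI_SUCCESS != rc) return rc;
        local_ranks = malloc(local_count * sizeof(int));
        if (NULL == local_ranks) return PMI_FAIL;
        /* ranks on this node, as reported by the resource manager */
        return PMI_Get_clique_ranks(local_ranks, local_count);
    }

    /* true if the given rank is on our node - no PMI_KVS_Get needed */
    bool is_local_peer(int rank)
    {
        if (NULL == local_ranks && PMI_SUCCESS != load_clique()) {
            return false;
        }
        for (int i = 0; i < local_count; i++) {
            if (local_ranks[i] == rank) return true;
        }
        return false;
    }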

Barring objections, we may move this (plus any other required pieces) to the 1.7 branch once it has soaked for an appropriate time.

This commit was SVN r29040.
2013-08-17 00:49:18 +00:00
configure.m4 Revamp the handling of wrapper compiler flags. The user flags, main configure 2013-01-29 00:00:43 +00:00
help-mtl-mxm.txt Fix MXM connection establishment flow 2013-04-12 16:37:42 +00:00
Makefile.am initial implementation of MXM MTL layer 2011-07-26 04:36:21 +00:00
mtl_mxm_cancel.c MTL MXM: push commit r27987 back, now with right user. 2013-02-04 06:59:24 +00:00
mtl_mxm_component.c fix: detect threading model to take appropriate flow in mxm 2013-06-16 08:40:06 +00:00
mtl_mxm_debug.h initial implementation of MXM MTL layer 2011-07-26 04:36:21 +00:00
mtl_mxm_endpoint.c initial implementation of MXM MTL layer 2011-07-26 04:36:21 +00:00
mtl_mxm_endpoint.h remove unused includes 2011-08-03 07:07:29 +00:00
mtl_mxm_probe.c rename ompi_free_list operations to _mt, as per discussions at last face to face meeting 2013-07-08 22:07:52 +00:00
mtl_mxm_recv.c rename ompi_free_list operations to _mt, as per discussions at last face to face meeting 2013-07-08 22:07:52 +00:00
mtl_mxm_request.h MTL MXM: push commit r27987 back, now with right user. 2013-02-04 06:59:24 +00:00
mtl_mxm_send.c MTL MXM: STREAM supporting for isend and irecv. 2013-02-27 13:21:30 +00:00
mtl_mxm_types.h Fix data corruption in MXM by registering to OPAL memory release hooks and removing any mappings created by mxm 2013-05-14 12:27:44 +00:00
mtl_mxm.c When we direct launch an application, we rely on PMI for wireup support. In doing so, we lose the de facto data compression we get from the ORTE modex since we no longer get all the wireup info from every proc in a single blob. Instead, we have to iterate over all the procs, calling PMI_KVS_get for every value we require. 2013-08-17 00:49:18 +00:00
mtl_mxm.h Fixed macro definition order in MXM component headers 2013-04-24 16:51:43 +00:00