
13 Commits

Author SHA1 Message Date
Ralph Castain
955d8e7d46 Allow apps to use pmi when launched by mpirun, if desired, without affecting daemons
This commit was SVN r25359.
2011-10-23 15:57:13 +00:00
Nathan Hjelm
7b1172b346 need a terminating character in the decoded string
This commit was SVN r25355.
2011-10-21 16:46:28 +00:00
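
A hedged illustration of the fix above: a decoded buffer in C is not NUL-terminated on its own, so the consumer has to append the terminator itself. The decode routine and names below are hypothetical stand-ins, not the actual Open MPI code.

    /* Sketch only: decode_to_string and its arguments are hypothetical
     * stand-ins for the routine this commit touched. */
    #include <stdlib.h>
    #include <string.h>

    char *decode_to_string(const unsigned char *bytes, size_t decoded_len)
    {
        /* allocate one extra byte for the terminating '\0' */
        char *decoded = malloc(decoded_len + 1);
        if (NULL == decoded) {
            return NULL;
        }
        memcpy(decoded, bytes, decoded_len); /* stand-in for the real decode step */
        decoded[decoded_len] = '\0';         /* the missing terminating character */
        return decoded;
    }
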
Nathan Hjelm
cd257ac707 fixed typo in pmi grpcomm
This commit was SVN r25353.
2011-10-21 16:28:36 +00:00
Ralph Castain
3e72fccacf Cray's PMI implementation is quite different from slurm's - they extended PMI-1 by adding some, but not all, of the PMI-2 APIs. So you can't just switch to using PMI-2 functions as it isn't a complete implementation. Instead, you have to selectively figure out which ones they have in PMI-2, and use any missing ones from PMI-1. What fun.
Modify the configure logic and the PMI components to accommodate Cray's approach. Refactor the PMI error reporting code so it resides in only one place. Cray actually decided -not- to define the PMI-2 error codes, so we have to use the PMI-1 codes instead. More fun.

This commit was SVN r25348.
2011-10-21 04:54:38 +00:00
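
A minimal sketch of the selective scheme described above, assuming a configure-time guard (called WANT_CRAY_PMI2_EXT here; treat the name as an assumption): call the PMI-2 entry points Cray provides, fall back to PMI-1 for the rest, and funnel error reporting through one helper that relies only on the PMI-1 error codes.

    /* Sketch under assumptions: WANT_CRAY_PMI2_EXT stands in for the
     * configure-time define; PMI_KVS_Put and PMI2_KVS_Put are standard
     * PMI-1/PMI-2 entry points. */
    #include <pmi.h>
    #if WANT_CRAY_PMI2_EXT
    #include <pmi2.h>
    #endif

    /* One error-reporting helper for both flavors: Cray does not define
     * the PMI-2 error codes, so decode everything with the PMI-1 set. */
    static const char *pmi_error_string(int rc)
    {
        switch (rc) {
        case PMI_SUCCESS:         return "success";
        case PMI_FAIL:            return "operation failed";
        case PMI_ERR_INIT:        return "PMI not initialized";
        case PMI_ERR_INVALID_ARG: return "invalid argument";
        default:                  return "unknown PMI error";
        }
    }

    static int kvs_put(const char *kvsname, const char *key, const char *value)
    {
    #if WANT_CRAY_PMI2_EXT
        (void)kvsname;                           /* the PMI-2 put takes no kvsname */
        return PMI2_KVS_Put(key, value);         /* Cray does provide this one */
    #else
        return PMI_KVS_Put(kvsname, key, value); /* plain PMI-1 everywhere else */
    #endif
    }
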
Nathan Hjelm
beb8d8ce32 pmi return code wtf
This commit was SVN r25336.
2011-10-20 17:51:24 +00:00
Ralph Castain
b44f8d4b28 Complete implementation of the ess.proc_get_locality API. Up to this point, the API was only capable of telling if the specified proc was sharing a node with you. However, the returned value was capable of telling you much more detailed info - e.g., if the proc shares a socket, a cache, or numa node. We just didn't have the data to provide that detail.
Use hwloc to obtain the cpuset for each process during mpi_init, and share that info in the modex. As it arrives, use a new opal_hwloc_base utility function to parse the value against the local proc's cpuset and determine where they overlap. Cache the value in the pmap object as it may be referenced multiple times.

Thus, the return value from orte_ess.proc_get_locality is a 16-bit bitmask that describes the resources being shared with you. This bitmask can be tested using the macros in opal/mca/paffinity/paffinity.h

Locality is available for all procs, whether launched via mpirun or directly with an external launcher such as slurm or aprun.

This commit was SVN r25331.
2011-10-19 20:18:14 +00:00
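
A hedged sketch of consuming that bitmask; the OPAL_PROC_ON_LOCAL_* test macros are the ones the message points to in opal/mca/paffinity/paffinity.h, but the exact call shapes and type names here are assumptions, not code copied from the tree.

    /* Illustrative only: macro and field names follow the description
     * above; treat the signatures as assumptions rather than real code. */
    #include <stdio.h>
    #include "opal/mca/paffinity/paffinity.h"
    #include "orte/mca/ess/ess.h"

    static void report_locality(orte_process_name_t *peer)
    {
        /* 16-bit bitmask describing the resources we share with peer */
        opal_paffinity_locality_t locality = orte_ess.proc_get_locality(peer);

        if (OPAL_PROC_ON_LOCAL_NODE(locality)) {
            printf("peer shares our node\n");
            if (OPAL_PROC_ON_LOCAL_SOCKET(locality)) {
                printf("...and our socket, so likely a cache as well\n");
            }
        } else {
            printf("peer is remote; nothing is shared\n");
        }
    }
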
Ralph Castain
8f0ef54130 Complete implementation of pmi support. Ensure we support both mpirun and direct launch within same configuration to avoid requiring separate builds. Add support for generic pmi, not just under slurm. Add publish/subscribe support, although slurm's pmi implementation will just return an error as it hasn't been done yet.
This commit was SVN r25303.
2011-10-17 20:51:22 +00:00
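
That publish/subscribe support rides on the PMI-1 name-service calls, roughly as sketched below; as the message notes, slurm's PMI will simply return an error here since it hasn't implemented them. The wrapper names are illustrative.

    /* Sketch: PMI_Publish_name / PMI_Lookup_name are the standard PMI-1
     * name-service entry points; the wrappers are illustrative, not the
     * actual Open MPI pubsub component. */
    #include <stdio.h>
    #include <pmi.h>

    static int publish_port(const char *service, const char *port)
    {
        int rc = PMI_Publish_name(service, port);
        if (PMI_SUCCESS != rc) {
            /* slurm's PMI lands here: pub/sub not implemented yet */
            fprintf(stderr, "PMI_Publish_name(%s) failed: %d\n", service, rc);
        }
        return rc;
    }

    static int lookup_port(const char *service, char *port)
    {
        return PMI_Lookup_name(service, port);
    }
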
Ralph Castain
07dbbc6513 Sorry for mid-day correction - but folks are trying to test this, and we didn't realize it was still ignored :-(
This commit was SVN r25287.
2011-10-14 16:19:20 +00:00
Ralph Castain
b96ef2161d Complete the PMI support. Generalize PMI operations to support both slurm and non-slurm environments. Correct some configuration issues - we really only want the PMI integration at the individual component level. Ensure that the pmi grpcomm component doesn't get selected when launching via mpirun by setting its priority below the bad component.
Only verified in a slurm environment as that's all I have access to...

This commit was SVN r25275.
2011-10-12 20:59:25 +00:00
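
How "priority below the bad component" plays out in MCA selection, as a hedged sketch: each grpcomm component answers a query with a module and a priority, and the framework picks the highest bidder. The helper, module symbol, and priority value below are assumptions for illustration.

    /* Illustrative component_query following the usual mca_base
     * convention; pmi_env_detected and the module symbol are
     * hypothetical, and the priority value is a stand-in. */
    #include "opal/mca/mca.h"
    #include "orte/constants.h"

    extern mca_base_module_t grpcomm_pmi_module;   /* hypothetical */
    extern int pmi_env_detected(void);             /* hypothetical */

    static int grpcomm_pmi_component_query(mca_base_module_t **module,
                                           int *priority)
    {
        if (!pmi_env_detected()) {
            *module = NULL;
            return ORTE_ERROR;
        }
        /* stay below the "bad" component so mpirun-launched jobs never
         * pick pmi by accident; direct launch can raise it */
        *priority = 1;
        *module = &grpcomm_pmi_module;
        return ORTE_SUCCESS;
    }
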
Ralph Castain
2f38ff5e54 Ensure we don't try to build this module unless pmi is specifically requested
This commit was SVN r25252.
2011-10-11 06:12:04 +00:00
Ralph Castain
baefdabd98 Add some debug. Now confirmed to work correctly (prior problem was with odin tcp connection, not code).
This commit was SVN r25249.
2011-10-11 02:15:17 +00:00
Ralph Castain
1aa1c2e9b4 Get the slurm pmi support working. Cannot use infiniband, of course, as the oob can't make the connection - may try other existing methods. Modex may not quite be working right yet, as odin was having trouble making TCP connections, but at least the configure now works and things build, so save that investigation for now.

This commit was SVN r25247.
2011-10-10 21:39:10 +00:00
Ralph Castain
92a65f21bf Restore slurm pmi support from long, long ago. Since we already have the ability to directly srun an MPI job, just conditionally add the PMI support for key values and provide a grpcomm module that uses PMI for barriers and modex.
Currently ompi_ignored, and unignored only for me (others to soon follow).

This commit was SVN r24792.
2011-06-20 21:04:46 +00:00
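
A hedged sketch of what "PMI for barriers and modex" means at the PMI-1 level: each process puts its modex blob into the job-wide KVS, commits, barriers, then reads its peers' entries back out. The PMI calls are standard; key naming and buffer sizes are illustrative assumptions.

    /* Sketch of the PMI-1 put/commit/barrier/get pattern behind a PMI
     * modex; key names and sizes are assumptions. */
    #include <stdio.h>
    #include <pmi.h>

    #define KEY_MAX 64
    #define VAL_MAX 256

    static int pmi_modex_exchange(int my_rank, int num_procs, const char *my_data)
    {
        char kvsname[256], key[KEY_MAX], value[VAL_MAX];
        int rc, rank;

        if (PMI_SUCCESS != (rc = PMI_KVS_Get_my_name(kvsname, sizeof(kvsname)))) {
            return rc;
        }

        /* publish our own modex blob under a rank-derived key */
        snprintf(key, sizeof(key), "modex-%d", my_rank);
        if (PMI_SUCCESS != (rc = PMI_KVS_Put(kvsname, key, my_data))) {
            return rc;
        }
        if (PMI_SUCCESS != (rc = PMI_KVS_Commit(kvsname))) {
            return rc;
        }

        /* the barrier doubles as the fence: afterwards all puts are visible */
        if (PMI_SUCCESS != (rc = PMI_Barrier())) {
            return rc;
        }

        /* pull every peer's entry back out of the KVS */
        for (rank = 0; rank < num_procs; rank++) {
            snprintf(key, sizeof(key), "modex-%d", rank);
            if (PMI_SUCCESS != (rc = PMI_KVS_Get(kvsname, key, value, VAL_MAX))) {
                return rc;
            }
            /* value now holds rank's modex data */
        }
        return PMI_SUCCESS;
    }
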