When Slurm is built against PMIx, some installations place a copy of the
PMIx library that Slurm is linking against in the Slurm PMI location.
The current configury ignores that location. The desired behavior is to look
for a PMIx library in that location when --with-pmi is given. If the user
also specifies --with-pmix with a different location, then override
anything previously found and look for PMIx where the user directed.
Signed-off-by: Ralph Castain <rhc@pmix.org>
(cherry picked from commit cd1b5641be)
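As a concrete illustration of the lookup order described above (the paths are hypothetical):

    # Slurm built against PMIx: also search the Slurm PMI location for a PMIx lib
    ./configure --with-pmi=/opt/slurm

    # An explicit --with-pmix location overrides anything found via --with-pmi
    ./configure --with-pmi=/opt/slurm --with-pmix=/opt/pmix/2.1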
Per https://github.com/open-mpi/ompi/issues/5031, if the user didn't specify a particular PMIx installation, then default back to the internal version if it is newer than the discovered external one. PMIx doesn't yet provide a full version signature, so we just have to get as close as possible for now.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
(cherry picked from commit 1e6aaf7f22)
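A minimal sketch of the version comparison described above, with assumed variable names (the real configury is more involved):

    # Assumed names: prefer the internal PMIx copy when it is newer
    # than the discovered external one
    AS_VERSION_COMPARE([$opal_external_pmix_version],
                       [$opal_internal_pmix_version],
                       [opal_prefer_internal_pmix=yes],
                       [opal_prefer_internal_pmix=no],
                       [opal_prefer_internal_pmix=no])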
Per today's telecon, check for a supported version and do not use anything less than 1.2.x. Sadly, we don't include the last piece of the version triplet in the version file, so we cannot check for 1.2.5.
If someone explicitly points us at an external installation that isn't acceptable, then error out.
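A rough sketch of that check (the variable names are assumptions; as noted, we can only compare against 1.2, not 1.2.5):

    # Assumed names: reject an explicitly requested PMIx older than v1.2
    AS_IF([test -n "$with_pmix" && test "$with_pmix" != "yes"],
          [AS_VERSION_COMPARE([$opal_external_pmix_version], [1.2],
               [AC_MSG_ERROR([the external PMIx at $with_pmix is older than v1.2.x and cannot be used])])])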
Add PMIx support to summary
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Do not end up with -L/usr/lib[64] when PMI libraries
are installed in the default location.
Thanks to Davide Vanzo for the report.
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
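A minimal sketch of the fix, assuming a hypothetical opal_pmi_libdir variable:

    # Skip -L for default system library directories
    AS_IF([test "$opal_pmi_libdir" != "/usr/lib" && test "$opal_pmi_libdir" != "/usr/lib64"],
          [opal_pmi_LDFLAGS="-L$opal_pmi_libdir"],
          [opal_pmi_LDFLAGS=""])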
This reverts commit c4fe4ecfb9.
Revert "Fix DIR, DIR/include search for --with-pmix"
This reverts commit 2e3f401763.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Unlike "orterun", "prun" is a PMIx-only program that discovers the DVM connection instead of requiring that we explicitly provide it. Only build "prun" if PMIx v2.x is available.
This gets the DVM working again, but it still shows problems with multiple executions. I'll detail those in a separate issue. Thus, the DVM should still be considered "broken".
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
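A sketch of the resulting build gate (the conditional and version variable names are assumptions):

    # Only build prun when PMIx v2.x or newer was detected
    AM_CONDITIONAL([WANT_PRUN],
                   [test "$opal_pmix_version_major" -ge 2])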
NOTE: Building with an external PMIx *requires* that you also build with external libevent and hwloc libraries. Detect this at configure time and error out with a verbose message if this requirement is violated.
Closes #1204 (replaces it)
Fixes #1064
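A sketch of the configure-time check (all variable names are assumptions):

    # External PMIx requires external libevent and hwloc
    AS_IF([test "$opal_external_pmix" = "yes" && test "$opal_event_external_support" != "yes"],
          [AC_MSG_ERROR([an external PMIx requires an external libevent (--with-libevent)])])
    AS_IF([test "$opal_external_pmix" = "yes" && test "$opal_hwloc_external_support" != "yes"],
          [AC_MSG_ERROR([an external PMIx requires an external hwloc (--with-hwloc)])])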
The mixing of the Slurm PMI and Cray PMI configure logic was getting
messy and dangerous - developers working on Slurm PMI often don't
have access to Cray PMI, etc.
This mod pulls the Cray PMI configure logic out into a separate m4 file.
Cray PMI is now configured as follows:
1) on Cray CLE 5 and higher, Cray PMI is auto-detected. pkg-config
is used to resolve the necessary CPP flags, link flags, libs,
etc. Nothing needs to be added to the configure line to pick up
Cray PMI (see the sketch after this list).
2) on legacy Cray CLE 4 systems with PMI 4.X, Cray PMI is also
auto-detected.
3) on legacy Cray CLE 4 systems with PMI 5.X, Cray PMI can't be
auto-detected: changes in the PMI pkg-config file cause pkg-config to
return an error, because PMI now depends on newer ALPS installs that are
not present on CLE 4. Anyone in this situation needs to use the
--with-cray-pmi=(DIR) method, where DIR specifies the Cray PMI install
directory. The configure logic looks for the required ALPS libraries
first in /usr/lib/alps, then in
/opt/cray/xe-sysroot/default/usr/lib/alps.
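A minimal sketch of the CLE 5+ auto-detection path (the pkg-config module name and variable are assumptions):

    # Resolve Cray PMI CPP flags, link flags, and libs via pkg-config
    PKG_CHECK_MODULES([CRAY_PMI], [cray-pmi],
                      [opal_cray_pmi_happy=yes],
                      [opal_cray_pmi_happy=no])

For the CLE 4 / PMI 5.X case, a hypothetical configure line would look like:

    ./configure --with-cray-pmi=/opt/cray/pmi/default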
WHAT: Merge the PMIx branch into the devel repo, creating a new
OPAL “pmix” framework to abstract PMI support for all RTEs.
Replace the ORTE daemon-level collectives with a new PMIx
server and update the ORTE grpcomm framework to support
server-to-server collectives
WHY: We’ve had problems dealing with variations in PMI implementations,
and need to extend the existing PMI definitions to meet exascale
requirements.
WHEN: Mon, Aug 25
WHERE: https://github.com/rhc54/ompi-svn-mirror.git
Several community members have been working on a refactoring of the current PMI support within OMPI. Although the APIs are common, Slurm and Cray implement a different range of capabilities, and package them differently. For example, Cray provides an integrated PMI-1/2 library, while Slurm separates the two and requires the user to specify the one to be used at runtime. In addition, several bugs in the Slurm implementations have caused problems requiring extra coding.
All this has led to a slew of #if’s in the PMI code and bugs when the corner-case logic for one implementation accidentally traps the other. Extending this support to other implementations would have increased this complexity to an unacceptable level.
Accordingly, we have:
* Created a new OPAL “pmix” framework to abstract the PMI support, with separate components for Cray, Slurm PMI-1, and Slurm PMI-2 implementations.
* Replaced the current ORTE grpcomm daemon-based collective operation with an integrated PMIx server, and updated the grpcomm APIs to provide more flexible, multi-algorithm support for collective operations. At this time, only the xcast and allgather operations are supported.
* Replaced the current global collective id with a signature based on the names of the participating procs. This allows an unlimited number of collectives to be executed by any group of processes, subject to the requirement that only one collective can be active at a time for a unique combination of procs. Note that a proc can be involved in any number of simultaneous collectives - it is the specific combination of procs that is subject to the constraint.
* Removed the prior OMPI/OPAL modex code.
* Added new macros for executing modex send/recv to simplify use of the new APIs. The send macros allow the caller to specify whether or not the BTL supports async modex operations - if so, the non-blocking “fence” operation is used when the active PMIx component supports it. Otherwise, the default is the full blocking modex exchange we currently perform.
* Retained the current flag that directs us to use a blocking fence operation, but only to retrieve data on demand.
This commit was SVN r32570.