This fixes a problem reported by @bgoglin where rank-by was incorrectly generating values when ranking by a type of object (e.g., socket). It also corrects the handling of the pernode, npernode, and npersocket options - these should only set the #procs and the default mapping pattern. They specifically should not prohibit the user from requesting a different mapping.
Thus, the following should be valid:
mpirun -npernode 2 --map-by socket ...
should put 2 procs on each node, mapping them by-socket on each node.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
When there is no alias for a given node, do not set the
ORTE_NODE_ALIAS attribute to an empty string any more.
Thanks Erico for reporting this issue.
Thanks Ralph for the guidance.
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
An incorrectly named variable caused all pml variables to disappear
from ompi_info. This commit fixes the typo. We may add some logic into
the MCA base to catch these sorts of things in the future.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
Shorten the loops as much as possible - if someone wants to further optimize, they are welcome to do so.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
While it may be faster to reverse the order of the assignment loops, it also results in the wrong answer.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Reset the `ORTE_NODE_FLAG_MAPPED` flag after host filtering; this
flag is used subsequently and can affect the node mapping logic.
Signed-off-by: Boris Karasev <karasev.b@gmail.com>
* The `MPIR_PROCDESC` structure needs to be visible even in optimized
builds so that debuggers can attach to `mpirun` and properly read the
`MPIR_proctable`.
* In the v2.0.x and v2.x series this structure resided in the `orterun`
  directory and included the `CFLAGS` fix applied here. This code
  moved in the v3.x series, but the `CFLAGS` did not move with it,
  causing this issue.
- Instead of applying the debug `CFLAGS` globally to libopen-rte,
only apply them to the `orted_submit.c` compile which contains the
MPIR symbols.
Signed-off-by: Joshua Hursey <jhursey@us.ibm.com>
The current code path for PMIx_Resolve_peers and PMIx_Resolve_nodes executes a threadshift in the preg components themselves. This is done to ensure thread safety when called from the user level. However, it causes a thread stall when someone attempts to call the regex functions from _inside_ the PMIx code base should the call occur from within an event.
Accordingly, move the threadshift to the client-level functions and make the preg components just execute their algorithms. Create a new pnet/test component to verify that the preg code can be safely accessed - set that component to be selected only when the user directly specifies it. The new component will be used to validate various logical extensions during development, and can then be discarded.
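As an illustration only, here is a minimal sketch of the threadshift pattern using plain libevent; the type and function names below are invented and are not the PMIx API:
```
#include <stddef.h>
#include <sys/time.h>
#include <event2/event.h>

/* Hypothetical request object for an async resolve */
typedef struct {
    const char *nodestring;
    void (*cbfunc)(int status, void *cbdata);
    void *cbdata;
} resolve_req_t;

/* Runs on the thread driving the event base - safe to run the
 * regex algorithm directly here, with no further shifting. */
static void do_resolve(evutil_socket_t fd, short what, void *arg)
{
    resolve_req_t *req = (resolve_req_t *)arg;
    int status = 0;  /* placeholder: run the parsing algorithm here */
    (void)fd; (void)what;
    req->cbfunc(status, req->cbdata);
}

/* Client-level entry point: shift onto the progress thread and return */
static void resolve_nodes_async(struct event_base *base, resolve_req_t *req)
{
    struct timeval tv = {0, 0};
    event_base_once(base, -1, EV_TIMEOUT, do_resolve, req, &tv);
}
```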
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
(cherry picked from commit 456ac7f7af3d9ba09888e3c899eb001daaa24aef)
This is a point-in-time update that includes support for several new PMIx features, mostly focused on debuggers and "instant on":
* initial prototype support for PMIx-based debuggers. For the moment, this is restricted to using the DVM. Supports direct launch of apps under debugger control, and indirect launch using prun as the intermediate launcher. Includes ability for debuggers to control the environment of both the launcher and the spawned app procs. Work continues on completing support for indirect launch
* IO forwarding for tools. Output of apps launched under tool control is directed to the tool and printed there - includes support for XML formatting and output to files. Stdin can be forwarded from the tool to the apps, but this hasn't been implemented in ORTE yet.
* Fabric integration for "instant on". Enable collection of network "blobs" to be delivered to network libraries on compute nodes prior to local proc spawn. Infrastructure is in place - implementation will come later.
* Harvesting and forwarding of envars. Enable network plugins to harvest envars and include them in the launch msg for setting the environment prior to local proc spawn. Currently, only OmniPath is supported. PMIx MCA params control which envars are included, and also allow envars to be excluded.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Since output-filename has been moved to a per-job attribute,
remove the orte_output_filename global variable, and stop passing
this option to orted.
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
Sync command line output for Slurm with RSH launcher.
Currently the Slurm launch cmdline is only visible in debug mode, while for RSH
it is always shown.
The cmdline is useful for troubleshooting and should always be shown.
Signed-off-by: Artem Polyakov <artpol84@gmail.com>
Warn that a relative path will be converted to an absolute path, meaning that the file system on remote nodes must be the same as on the node where mpirun is executed.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Since we now support the dynamic addition of hosts to the orte_node_pool, there is no longer any reason to require advanced specification of all possible nodes. Instead, use a precedence method to initially allocate only those hosts that were specified in the cmd line:
* rankfile, if given, as that will specify the nodes
* -host, aggregated across all app_contexts
* -hostfile, aggregated across all app_contexts
* default hostfile
* assign local node
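A minimal sketch of that precedence chain; the helper names and node_list_t type below are invented, not the actual ras code:
```
#include <stddef.h>

/* Hypothetical helpers - each returns NULL when its source yields no nodes */
typedef struct node_list node_list_t;
node_list_t *nodes_from_rankfile(void);
node_list_t *nodes_from_dash_host(void);
node_list_t *nodes_from_hostfiles(void);
node_list_t *nodes_from_default_hostfile(void);
node_list_t *local_node_only(void);

node_list_t *get_initial_nodes(void)
{
    node_list_t *nodes;

    if (NULL != (nodes = nodes_from_rankfile())) {
        return nodes;         /* 1. rankfile dictates the nodes */
    }
    if (NULL != (nodes = nodes_from_dash_host())) {
        return nodes;         /* 2. -host, aggregated across app_contexts */
    }
    if (NULL != (nodes = nodes_from_hostfiles())) {
        return nodes;         /* 3. -hostfile, aggregated across app_contexts */
    }
    if (NULL != (nodes = nodes_from_default_hostfile())) {
        return nodes;         /* 4. default hostfile */
    }
    return local_node_only(); /* 5. fall back to the local node */
}
```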
Fix slots_inuse accounting so that the nodes are correctly reset upon error termination - e.g., when oversubscribed without permission.
Ensure we accurately track the user's specified desires for oversubscribe and no-use-local when dynamically spawning jobs.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
(cherry picked from commit c9b3e68ce596a68a2ed2fbf73f211b3334b0a6a8)
Fixed the desync of job-nodelists between mpirun and the orted
daemons. The issue was observed with RSH launching, because the user
can provide an arbitrary node ordering relative to the HNP placement.
The mpirun process propagates the daemons' nodelist order to the nodes.
The problem was that the HNP itself assembles the nodelist based on the
user-provided order. As a result, rank assignment was calculated
differently on orted and mpirun.
Consider the following example:
* The user launches mpirun on node cn2.
* The hostlist is cn1,cn2,cn3,cn4; ppn=1
* mpirun passes the hostlist cn[2:2,1,3-4]@0(4) to the orteds
As a result, mpirun will assign rank 0 to cn1 while orted will assign
rank 0 to cn2 (because orted sees cn2 as the first element in the node
list).
Signed-off-by: Boris Karasev <karasev.b@gmail.com>
When too much data is available on stdin, it might not be
forwarded immediately to the task (write() might fail with EAGAIN),
so when stdin is terminated, there might be some remaining data
to be pushed to the task. In this case, delay the release of the sink
so no data is discarded.
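The underlying POSIX pattern, as a minimal sketch (not the actual iof code):
```
#include <errno.h>
#include <unistd.h>

/* Try to flush `len` bytes to the task's non-blocking stdin pipe and
 * return how many bytes remain queued. While this is non-zero, the
 * sink must stay alive even after stdin has been closed, or the
 * remainder would be discarded. */
static size_t flush_stdin(int fd, const char *buf, size_t len)
{
    ssize_t n = write(fd, buf, len);
    if (n < 0) {
        if (EAGAIN == errno || EWOULDBLOCK == errno) {
            return len;   /* pipe full - requeue everything */
        }
        return 0;         /* real error - drop (simplified) */
    }
    return len - (size_t)n; /* partial write - requeue the tail */
}
```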
Refs open-mpi/ompi#4744
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
This option was only used by the iof/mr_hnp (aka Map/Reduce)
component, which is no longer part of the master or v3 branches.
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
Fix a problem in packing/unpacking job updates. There remains a race condition that causes messages to be sent to the second new daemon before it is completely ready. Not entirely sure where it is coming from.
Refs #4665
Rebase to master. Reset orte_nidmap_communicated if hosts are added. Check for duplicate hostnames in an add_host command. Turn off tree_spawn for dynamic launch of additional daemons.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Search for the digits to be compressed starting from the end of the node names.
For example, if the nodelist is c712f6n01,c712f6n02,c712f6n03,
the regx/fwd component generates c[3:712]f6n01,c[3:712]f6n02,c[3:712]f6n03@(3),
whereas the regx/reverse component generates c712f6n[2:1-3]@0(3), which is
a better fit here.
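A minimal sketch of the suffix-first scan, with an invented helper name (not the actual regx code):
```
#include <ctype.h>
#include <string.h>

/* Find where the trailing digit run of a node name begins,
 * e.g. "c712f6n01" -> index 7, so consecutive names sharing the
 * prefix "c712f6n" can be compressed into a range such as
 * c712f6n[2:1-3]. */
static size_t trailing_digits_start(const char *name)
{
    size_t i = strlen(name);
    while (i > 0 && isdigit((unsigned char)name[i - 1])) {
        i--;
    }
    return i;
}
```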
Josh Hursey authored the changes and must be credited.
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
typedef int (*orte_regx_base_module_extract_node_names_fn_t)(char *regexp, char ***names);
Among other things, this will make testing much easier.
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
Resolve a race condition between registering for a file to be removed upon termination and actual creation of that file by providing attributes that identify whether the path is a file or directory. This removes the need for PMIx to detect the difference.
Refs #4686
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Handle the need for different regex generator/parsers by moving the
orte/util/nidmap and orte/util/regex code into a new "regx" framework.
Use the original code to complete a "fwd" component, and create a
scaffold for IBM's "reverse" component.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Now that the daemon calls remote_spawn itself, there is no longer
a need for the "tree_spawn" command nor the associated command
processing code since the HNP is no longer sending a tree-spawn
message to the orted.
Thanks Ralph for the guidance!
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
When the node regex is too long to be sent on the command line,
retrieve it first from the parent, and then spawn the remote orted.
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
This parameter can be used to set the maximum node regex length that can
be passed on the orted command line.
For testing purposes, it can be set to zero in order to force the orted
to retrieve the node regex from its parent.
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
Since open-mpi/ompi@8f496b01b7,
sstore_stage_local_compress_waitpid_cb is invoked with an orte_wait_tracker_t *,
which must be used to reach the orte_sstore_stage_local_app_snapshot_info_t *.
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
Since open-mpi/ompi@8f496b01b7,
rsh_wait_daemon is invoked with an orte_wait_tracker_t *,
which must be used to reach the orte_plm_rsh_caddy_t *.
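A sketch of the adjusted callback; the types come from the ORTE headers, but the cbdata field access shown is an assumption:
```
/* Requires orte/runtime/orte_wait.h and the plm rsh headers.
 * The tracker is what arrives in cbdata; the caddy is reached
 * through it rather than being passed directly. */
static void rsh_wait_daemon(int fd, short sd, void *cbdata)
{
    orte_wait_tracker_t *trk = (orte_wait_tracker_t *)cbdata;
    orte_plm_rsh_caddy_t *caddy = (orte_plm_rsh_caddy_t *)trk->cbdata;

    (void)fd; (void)sd;
    /* ... continue processing the caddy as before ... */
    (void)caddy;
}
```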
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
If we detect that someone has given us an incorrect node name, provide a helpful message saying so, as it is almost certainly a typo.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
If available, have apps use the registration capability to clean up their session directories. Set up the capability for vader to register its shared memory file location - let someone familiar with that code do so.
Final cleanup to track uid/gid; update the opal/pmix API to pass flags for ignoring entries and for leaving the top directory alone.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Somehow, the code for passing a daemon's parent was accidentally removed, thus breaking the tree-spawn callback sequence and causing all daemons to phone directly home. Note that this is noticeably slower than no-tree-spawn for small clusters, where direct ssh launch of the child daemons from the HNP doesn't overload the available file descriptors.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
'-' is neither an alpha character nor a digit, but it is a valid hostname
character and should be handled as an alpha character; otherwise, nodes
such as node-001 do not get "compressed" in the regex.
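A minimal sketch of the character test, with an invented helper name:
```
#include <ctype.h>

/* When splitting a node name into prefix and digit-suffix, treat '-'
 * like a letter so "node-001" keeps "node-" as its prefix and
 * compresses correctly. */
static int is_prefix_char(int c)
{
    return 0 != isalpha(c) || '-' == c;
}
```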
Refs open-mpi/ompi#4621
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
Only selectable when specifically requested via "-mca odls pspawn"
Note that there are several concerns:
* we aren't getting SIGCHLD calls when the procs terminate
* we aren't seeing the IO pipes close on termination, though
we are getting output forwarded to mpirun
* I haven't found a way to bind the child process prior to exec.
If we want to use this method, we probably need someone to
implement a cgroup component for the orte/rtc framework
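For reference, a minimal standalone example of the posix_spawn(3) call this component is built around; the target program here is arbitrary. Note what the text above observes: there is no fork window in which the child could be bound before exec.
```
#include <sys/types.h>
#include <spawn.h>
#include <stdio.h>

extern char **environ;

int main(void)
{
    pid_t pid;
    char *argv[] = { "/bin/true", NULL };

    /* no file actions, no spawn attributes - inherit everything */
    int rc = posix_spawn(&pid, argv[0], NULL, NULL, argv, environ);
    if (0 != rc) {
        fprintf(stderr, "posix_spawn failed: %d\n", rc);
        return 1;
    }
    printf("spawned pid %d\n", (int)pid);
    return 0;
}
```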
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Change the determination of the number of spawn threads to be based on the number of local procs in the first job being spawned. Someone can look at an optimization that handles subsequent dynamic spawns that might be larger in size.
Leave the threads running, but blocked, for the life of the daemon, and use them to harvest the local procs as they terminate. This helps short-lived jobs in particular.
Add MCA params to set:
* max number of spawn threads (default: 4)
* set a specific number of spawn threads (default: -1, indicating no set number)
* cutoff - minimum number of local procs before using spawn threads (default: 32)
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
The current error message when the number of slots is insufficient
(e.g. running mpirun -n 4 on a dual core machine) does not mention the
use of `--oversubscribe`.
In earlier versions of Open MPI, oversubscription was automatic
(albeit buggy?); but the important point was that no error message was
printed and the application ran. Mentioning the `--oversubscribe` flag
in the message will ease the transition to the current behaviour, where
an explicit request is required.
Also make a few other minor tweaks / cleanups to the
orte-rmaps-seq:alloc-error help message.
Signed-off-by: Yu Feng <rainwoodman@gmail.com>
Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
if PMIx (version > 1.x) is active, since all diagnostic messages will instead flow through
the PMIx connection. Unfortunately, PMIx v1 does not support this
feature, but we can remove the stddiag support once PMIx v1 slides out
of the support window.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Since some tasks might end up having /dev/null as their stdin,
simply avoid pipe creation and destruction for these tasks.
From a pragmatic and MPI point of view, and unless explicitly required
otherwise, all MPI tasks but (the first) one end up with /dev/null
as their stdin.
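The underlying idea, as a minimal POSIX sketch (not the actual odls code):
```
#include <fcntl.h>
#include <unistd.h>

/* Instead of creating a pipe that will never carry data, give the
 * task /dev/null as its stdin before exec. */
static int stdin_from_devnull(void)
{
    int fd = open("/dev/null", O_RDONLY);
    if (fd < 0) {
        return -1;
    }
    if (fd != STDIN_FILENO) {
        dup2(fd, STDIN_FILENO);
        close(fd);
    }
    return 0;
}
```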
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
The following issues have been fixed for `mindist`:
- computing the job map on the backend nodes
- using the slots count (`-host node1:<s1>,nodeN:<sN>`)
- the `dist:span` job mapping method
- the `oversubscribe` option with `-host`
Signed-off-by: Boris Karasev <karasev.b@gmail.com>
gcc 7.[1,2] (at least) fails to correctly parse the OSX 10.13 sys/syslog.h
header. As a result, we need to protect syslog support in OPAL, PMIx, and
ORTE.
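A sketch of the usual autoconf-style guard; HAVE_SYSLOG_H is the conventional macro name and is an assumption here:
```
/* Only compile syslog support when configure has verified that the
 * header actually parses on this platform. */
#ifdef HAVE_SYSLOG_H
#include <syslog.h>
#define LOG_IT(pri, msg) syslog((pri), "%s", (msg))
#else
#define LOG_IT(pri, msg) /* syslog unavailable - silently drop */
#endif
```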
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
Debounce "unreachable" notifications for tools when they disconnect
Enable the -x cmd line option for prun
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
(cherry picked from commit 0a5b36180a22959654461ac1303cec35313f8b4a)
Remove some build product. Tell PMIx that we don't need a new nspace generated when OMPI calls connect
Add missing Makefile
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Debugger daemons do not count against available slots. Clean up some leftover errors from the upgrade to HWLOC 2 in the mappers. Properly flag debugger jobs that come in via PMIx.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Still in the "needs to be done" category:
* mapping/ranking/binding options aren't correctly supported
* if the DVM encounters some errors (e.g., not enough resources for the job), the resulting error is globally set and impacts any subsequent job submission
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Unlike "orterun", "prun" is a PMIx-only program that discovers the DVM connection instead of requiring that we explicitly provide it. Only build "prun" if PMIx v2.x is available.
This gets the DVM working again, but it still shows problems with multiple executions. I'll detail those in a separate issue. Thus, the DVM should still be considered "broken".
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
to choose the first available non-socket provider.
Signed-off-by: Anandhi Jayakumar <anandhi.s.jayakumar@intel.com>
* Even if we are only launching one app context, we might call spawn
later and the remote groups might want their global rank information.
Signed-off-by: Joshua Hursey <jhursey@us.ibm.com>
* Resolves #3705
* Components should link against the project level library to better
support `dlopen` with `RTLD_LOCAL`.
* Extend the `mca_FRAMEWORK_COMPONENT_la_LIBADD` in the `Makefile.am`
with the appropriate project level library:
```
MCA components in ompi/
$(top_builddir)/ompi/lib@OMPI_LIBMPI_NAME@.la
MCA components in orte/
$(top_builddir)/orte/lib@ORTE_LIB_PREFIX@open-rte.la
MCA components in opal/
$(top_builddir)/opal/lib@OPAL_LIB_PREFIX@open-pal.la
MCA components in oshmem/
$(top_builddir)/oshmem/liboshmem.la"
```
Note: The changes in this commit were automated by the
`libadd_mca_comp_update.py` script in the accompanying commit. Some
components were not included in this change because they are
statically built only.
Signed-off-by: Joshua Hursey <jhursey@us.ibm.com>
There can be multiple consecutive [heap] entries in /proc/<pid>/maps,
with no room between them.
Don't use a hole after the first [heap] if there's another [heap]
immediately after it.
This code would fail to find the last [heap] if there were multiple
[heap] entries interleaved with non-heap VMAs, but our "after heap" hole
kind wouldn't be meaningful anymore anyway.
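A minimal standalone sketch of the scan (not the actual code), illustrating why back-to-back [heap] segments must be merged before placing a hole:
```
#include <stdio.h>
#include <string.h>

/* Find the end of the run of consecutive [heap] segments in
 * /proc/self/maps, so a hole is only placed after the last one. */
static unsigned long end_of_heap_run(void)
{
    FILE *f = fopen("/proc/self/maps", "r");
    char line[512];
    unsigned long heap_end = 0;

    if (NULL == f) {
        return 0;
    }
    while (NULL != fgets(line, sizeof(line), f)) {
        unsigned long start, end;
        char path[256] = "";
        if (sscanf(line, "%lx-%lx %*s %*s %*s %*s %255s",
                   &start, &end, path) < 2) {
            continue;
        }
        if (0 == strcmp(path, "[heap]")) {
            /* extend only across back-to-back heaps with no gap */
            if (0 == heap_end || start == heap_end) {
                heap_end = end;
            }
        }
    }
    fclose(f);
    return heap_end;
}
```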
Signed-off-by: Brice Goglin <Brice.Goglin@inria.fr>