Commit graph

371 Commits

Author SHA1 Message Date
Ralph Castain
efa50346c8 Error out if we are filtering a hostfile and encounter a node that is not in the resource-managed allocation, giving an error message identifying the file and the node. Don't filter managed allocations through a default hostfile, as this can lead to "hidden" errors.
Don't use dash-host info on managed allocations if we are using soft locations

This commit was SVN r27245.
2012-09-05 19:42:00 +00:00
Ralph Castain
d772e0fc3d Add an option to treat dash-host specifications as "requested, but not required". So-called "soft" location requests can allow an application to execute even if the ideal allocation isn't available.
This commit was SVN r27242.
2012-09-05 18:42:09 +00:00
Ralph Castain
fde83a44ab This confusion has been around for a while, caused by a long-ago decision to track slots allocated to a specific job as opposed to allocated to the overall mpirun instance. We eliminated that quite a while ago, but never consolidated the "slots_alloc" and "slots" fields in orte_node_t. As a result, confusion has grown in the code base as to which field to look at and/or update.
So (finally) consolidate these two fields into one "slots" field. Add a field in orte_job_t to indicate when all the procs for a job will be launched together, so that staged operations can know when MPI operations are allowed.

This commit was SVN r27239.
2012-09-05 01:30:39 +00:00
Ralph Castain
11de735e8a Complete the revamp of hostfile support in non-managed environments. Working at the app level, ensure that we utilize only those nodes specified for that app, but fall back to the default hostfile (if available) for those with no specification, further falling back to the local host if the default hostfile is not present or is empty.
This commit was SVN r27230.
2012-09-04 16:34:05 +00:00
Jeff Squyres
b23a6b8eda Shiqing removed this file in r27217 (but neglected to remove it from
the Makefile.am).

This commit was SVN r27226.

The following SVN revision numbers were found above:
  r27217 --> open-mpi/ompi@ddbd542732
2012-09-04 13:06:39 +00:00
Ralph Castain
3894179e2f Add missing file
This commit was SVN r27222.
2012-09-04 01:16:58 +00:00
Shiqing Fan
ddbd542732 Remove one .windows file.
Add a macro definition for isblank function.

This commit was SVN r27217.
2012-09-03 09:51:44 +00:00
Ralph Castain
66c3f5d18d When getting target nodes for mapping, there is a difference between not finding any nodes that match the required constraints (either in hostfile or dash-host filtering) and finding at least one such node, but all its slots are busy. Make the return code reflect this difference so the caller can take appropriate action.
This commit was SVN r27213.
2012-09-01 10:30:40 +00:00
Ralph Castain
95019cc310 Fix a few places where we weren't completely identifying hostfile-based operations against "localhost" entries. Tell the mapper base to be silent when we don't want errors announced because nodes aren't available for mapping (which is okay if they are fully used). Fix an infinite loop in the file prepositioning code.
This commit was SVN r27210.
2012-08-31 21:28:49 +00:00
Jeff Squyres
da00d281e6 Oops -- we want the priority to be low, not high (for now). :-)
This commit was SVN r27208.
2012-08-31 21:08:35 +00:00
Jeff Squyres
fcc1c7e33c = Overview =
First revision of the Locality Aware Mapping Algorithm (LAMA) RMAPS
component.  This component is used to effect many different types of
regular process/processor affinity patterns.  Although quite
flexible in the patterns that it provides, it is ''not'' a
fully-arbitrary, rankfile-like solution for process/processor
affinity.

Inspired by !BlueGene-like network specifications, LAMA has a core
algorithm that is quite good at specifying regular patterns in
multiple "dimensions" (where "dimensions" are expressed in terms of
different hardware elements: processor hardware threads, cores,
sockets, ...etc.).  The LAMA core algorithm is described here:

  http://www.open-mpi.org/papers/cluster-2011-lama/

= LAMA Usage Levels =

LAMA allows specifying affinity multiple different ways:

 1. None: Specifying no affinity options to mpirun results in exactly
    the same behavior as today: no affinity is used.
 1. Simple: Using the mpirun options "--bind-to <WIDTH>" and "--map-by
    <LEVEL>" to indicate how "wide" each process should be bound
    (i.e., bind to a processor core, or to a processor socket, etc.)
    and how to lay out the processes (i.e., round robin by cores,
    sockets, etc.).  
 1. Expert: Using four new MCA parameters to effect process mapping
    and binding to processors.  These options are a bit complex, and
    are not for the faint of heart, but offer a high degree of
    (regular pattern) flexibility (each of these is described more
    fully below; an illustrative command line follows this list):
    * rmaps_lama_map: a sequence of characters describing how to lay
      out processes
    * rmaps_lama_bind: a sequence of characters describing the
      resources to bind to each process
    * rmaps_lama_mppr: a sequence of characters describing the maximum
      number of processes to allow per resource (i.e., a specific
      definition of "oversubscription")
    * rmaps_lama_ordering: once all processes are in place, how to
      order the ranks in MPI_COMM_WORLD
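
Purely as an illustrative sketch (the parameter values below are drawn
from the examples later in this message; the ordering value "s" for
sequential is an assumption, not verified syntax):

{{{
# Select the LAMA mapper and supply all four Expert-level parameters:
# map round robin by core, bind each process to one core, allow at
# most one process per core, and (assumed) sequential ordering.
mpirun --mca rmaps lama \
       --mca rmaps_lama_map csL1L2L3Nbnh \
       --mca rmaps_lama_bind 1c \
       --mca rmaps_lama_mppr 1:c \
       --mca rmaps_lama_ordering s \
       -np 16 a.out
}}}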

We anticipate that most users will utilize the "None" and "Simple"
levels of affinity, and they continue to work just as they do with the
v1.6 series and SVN trunk.  

The Expert level was designed for two purposes:

 1. To provide a precise definition for the "Simple" level (i.e.,
    every --bind-to/--map-by option in the "Simple" level has a
    corresponding precise specification in the "Expert" level)
 1. As modern computing platforms become more complex, we simply
    cannot predict what application developers will need in terms of
    processor affinity.  LAMA is an attempt to provide a highly
    flexible mechanism that allows applications to utilize a variety
    of complex, unique affinity patterns beyond the common "bind to
    core" and "bind to socket" patterns.

= LAMA Simple Level =

The "Simple" level is pretty much the same as what Open MPI has
offered for years.  It supports the same --bind-to and --map-by
options that Open MPI has supported for a while, but expands their
scope a bit.

Specifically, the following options are available for both --bind-to
and --map-by (an example invocation follows the list):

 * slot
 * hwthread
 * core
 * l1cache
 * l2cache
 * l3cache
 * socket
 * numa
 * board
 * node
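
For example, an illustrative invocation using two of these options
(the process count and executable name are placeholders):

{{{
# Spread processes round robin across sockets; bind each one to a core
mpirun -np 8 --bind-to core --map-by socket a.out
}}}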

= LAMA Expert Level =

The "Expert" level requires some explanation.  I'll repeat my
disclaimer here: the LAMA Expert level is not for the meek.  It is
flexible, but complex.  '''Most users won't need the Expert level.'''

LAMA works in three phases: mapping, binding, and ordering.  Each is
described below.

== Expert: Mapping ==

Processes are paired with sets of resources.  For example, each
process may be paired with a single processor core.  Or each process
may be paired with an entire processor socket.  LAMA performs this
mapping, obeying the Max Processes Per Resource ("MPPR", pronounced
"mipper") limits.  More on MPPR, below.

Mapping can be performed across multiple hardware levels:

 * h: Hardware thread
 * c: Processor core
 * s: Processor socket
 * L1: L1 cache
 * L2: L2 cache
 * L3: L3 cache
 * N: NUMA node
 * b: Processor board
 * n: Server node

If the act of mapping is that of pairing MPI processes to the
resources that have been allocated to a job, one can easily imagine
looping through all the resources and assigning processes to them.

But to effect different process layout patterns across those
resources, one may want to loop over those resources ''in a different
order.''  That is, if the above-mentioned nine hardware resources
(hardware thread, processor core, etc.) can be thought of as a
nine-dimensional space, you can imagine nine nested loops to traverse
all of them.  And you can imagine that changing the order of nesting
would change the traversal pattern.

LAMA accepts a sequence of tokens representing the above-mentioned
nine hardware resources to specify the order of looping when mapping
resources to processes.

For example, consider a "simple" traversal: csL1L2L3Nbnh.  Reading
that sequence of letters from left-to-right, it specifies mapping by
processor core, processor socket, L1 cache, L2 cache, L3 cache, NUMA
node, processor board, server node, and finally hardware thread.

Wait... what?  That string specifies resources from "smallest" to
"largest" -- with the exception of hardware threads.  Why are they
tacked on to the end?

In short, this string of letters means "map round robin by core"
(indeed, it exactly corresponds to the Simple level "--map-by core").
Specifically, LAMA traverses the string from left-to-right and maps
processes to all the resources indicated by that token (e.g., "c" for
processor core).  When there are no more resources indicated by that
token, it goes on to the next token.

Hence, in this case, LAMA will map the first process to the first
core, then it will map the second process to the second core, and so
on.

Once all the cores are exhausted, LAMA effectively ignores all the
other letters until "h" (because all the other resources are made up
of cores; when cores are exhausted, those resources are exhausted,
too).  

If there are still more processes to be mapped, LAMA will then
traverse all the hyperthreads -- meaning that the next process will be
mapped to the second hyperthread on the first core.  And the next
process will be mapped to the second hyperthread on the second core.
And so on.

Keep in mind that the cores involved may span many server nodes; we're
not just talking about the cores (etc.) in a single machine.

As another example, the sequence "sL1L2L3Nbnch" is exactly equivalent
to "--map-by socket" (i.e., LAMA maps the first process to the first
socket, the second process to the second socket, and so on).

The sequence of letters can be combined in many, many different ways
to produce many different regular mapping patterns.
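
To make the equivalence concrete, here is a sketch (assuming
rmaps_lama_map accepts these strings exactly as described above;
"a.out" is a placeholder):

{{{
# Per the text above, these two invocations should map identically:
mpirun -np 8 --mca rmaps lama --mca rmaps_lama_map sL1L2L3Nbnch a.out
mpirun -np 8 --mca rmaps lama --map-by socket a.out
}}}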

=== Max Processes Per Resource (MPPR) ===

The MPPR is an expression that precisely defines the maximum number of
processes that can be mapped to any single resource.  In effect, it
defines the concept of "oversubscription."  Specifically, traditional
HPC wisdom is that "oversubscription" is when there is more than one
MPI process per processor core.

This conventional definition is expressed in an MPPR string of "1:c"
(one process per core).

But what if your MPI processes are multi-threaded, and each needs
multiple cores?  You'd need a different definition of
"oversubscription" in this case.  Perhaps you want to have one MPI
process per socket.  This would be expressed in an MPPR string of
"1:s".

The general form of an individual MPPR specification is an integer,
followed by a colon, followed by any of the tokens from the mapping
specification.  For example, "1:c" is pronounced "one process per
core."

Multiple MPPR specifications can be strung together into a
comma-delimited list, too.  All of these MPPR values are then taken
into account when mapping.  Here are some examples (a command-line
sketch follows the list):

 * 1:c -- allow, at most, one process per processor core (i.e., don't
   schedule by hyperthread)
 * 1:s -- allow, at most, one process per processor socket (e.g., 
   that process may be multithreaded, or wants exclusive use of the
   socket's caches)
 * 1:s,2:n -- only allow one process per processor socket, but, at
   most, two processes per server node (e.g., if the two MPI processes
   will consume all the RAM on the server node, even if there are more
   processor cores available)
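
As a command-line sketch (the MPPR value is taken verbatim from the
last bullet; the rest of the line is illustrative):

{{{
# At most one process per socket, and at most two processes per node
mpirun --mca rmaps lama --mca rmaps_lama_mppr 1:s,2:n -np 4 a.out
}}}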

If mapping all processes to resources would exceed an MPPR limit, the
job is ruled to be oversubscribed.  If --oversubscribe was specified
on the mpirun command line, the job continues.  Otherwise, LAMA will
abort the job.

Additionally, if --oversubscribe is specified, LAMA will endlessly
cycle through the mapping token string until all processes have been
mapped.

== Expert: Binding == 

Once processes have been paired with resources during the Mapping
stage, they are optionally bound to a (potentially different) set of
resources.  For example, processes may be mapped round robin by
processor socket, but bound to an individual processor core.

To be clear: if binding is not used, then mapping is effectively
reduced to "counting how many processes end up on each server node."
Without binding, there's no enforcement that a process will stay where
LAMA thinks it was placed.  

With binding, however, processes are bound to a set of hardware
threads.  The number of threads to which the process is bound is
sometimes referred to as the "binding width".  For example, if a
process is bound to all the hardware threads in a processor socket,
its "width" is the processor socket.

(note that we specifically do not say that the hardware threads are
sequential, even if they are all within a single resource such as a
processor core or socket.  BIOS ordering of hardware threads can be
wonky; so we only refer to "sets of hardware threads")

Bindings are expressed as an integer and a token from the mapping
string.  For example "1s" means "bind each process to one processor
socket" (there is no ":" in the binding string because the ":" is
pronounced as "per" when reading the MPPR string).

Note that it only makes sense to bind processes to a single resource
specification (unlike the MPPR specification, where multiple limits
can be specified).
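
A binding sketch tying this together (assuming rmaps_lama_bind accepts
the "1c" form by analogy with the "1s" example above):

{{{
# Map round robin by socket, but bind each process to a single core
mpirun --mca rmaps lama \
       --mca rmaps_lama_map sL1L2L3Nbnch \
       --mca rmaps_lama_bind 1c \
       -np 8 a.out
}}}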

== Expert: Ordering ==

Finally, processes are assigned a rank in MPI_COMM_WORLD.  LAMA
currently offers two ordering modes: sequential or natural:

 * Sequential: if you laid out all the hardware resources in a single
   line, and then overlaid all the MPI processes on top of them, they
   are ordered from 0 to (N-1) from left-to-right.
 * Natural: the ordering of ranks follows the mapping ordering.  For
   example, consider a server node with two processor sockets, each
   containing four cores.  The command line "mpirun -np 8 --bind-to
   core --map-by socket --order n a.out" would result in MCW ranks
   that look like this: [0 2 4 6] [1 3 5 7].
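
For contrast, a hedged sketch of the same scenario with sequential
ordering (assuming "--order s" selects sequential, by analogy with
"--order n" above) would be expected to yield [0 1 2 3] [4 5 6 7]:

{{{
# Same mapping/binding, but rank left-to-right across the hardware
mpirun -np 8 --bind-to core --map-by socket --order s a.out
}}}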

= Execution =

At this point, the job is fully mapped, optionally bound, and its
ranks in MPI_COMM_WORLD are ordered.  It now starts its execution.

= Final Notes =

Note that at this point, LAMA is not the default mapper.  It must be
activated with "--mca rmaps lama".  We'll continue to do further
testing and comparative analysis with the current set of ORTE mappers.

Also, note that the LAMA algorithm can handle heterogeneity between
hardware resources (e.g., an MPI job spanning server nodes with
differing numbers of processor sockets).  For lack of a longer
explanation (this commit message is already long enough!), LAMA
considers each server node individually during mapping and binding.

See the LAMA paper for more details:
http://www.open-mpi.org/papers/cluster-2011-lama/

This commit was SVN r27206.
2012-08-31 19:57:53 +00:00
Ralph Castain
1b659de132 Get staged execution working on multi-node setups. Improve efficiency by remapping only when some procs in the job have not yet been mapped.
This commit was SVN r27181.
2012-08-29 20:35:52 +00:00
Ralph Castain
f0077820f2 Silence warning
This commit was SVN r27175.
2012-08-28 22:27:41 +00:00
Ralph Castain
a414ffdf4c Remove debug
This commit was SVN r27174.
2012-08-28 22:18:00 +00:00
Ralph Castain
98580c117b Introduce staged execution. If you don't have adequate resources to run everything without oversubscribing, don't want to oversubscribe, and aren't using MPI, then staged execution lets you (a) run as many procs as there are available resources, and (b) start additional procs as others complete and free up resources. Adds a new mapper as well as a new state machine.
Remove some stale configure.m4's we no longer need.

Optimize the nidmaps a bit by only sending info that has changed each time, instead of sending a complete copy of everything. Makes no difference for the typical MPI job - only impacts things like staged execution where we are sending multiple (possibly many) launch messages.

This commit was SVN r27165.
2012-08-28 21:20:17 +00:00
Ralph Castain
aadfe1b61e Fix a missing test that breaks novm operation.
CMR:v1.7

This commit was SVN r27163.
2012-08-28 21:13:57 +00:00
Ralph Castain
cb48fd52d4 Implement the MPI_Info part of MPI-3 Ticket 313. Add an MPI_Info object MPI_INFO_GET_ENV that contains a number of run-time related pieces of info. This includes all the required ones in the ticket, plus a few that specifically address recent user questions:
"num_app_ctx" - the number of app_contexts in the job
"first_rank" - the MPI rank of the first process in each app_context
"np" - the number of procs in each app_context

Still need clarification on the MPI_Init portion of the ticket. Specifically, does the ticket call for returning an error if someone calls MPI_Init more than once in a program? We set a flag to tell us that we have been initialized, but currently never check it.

This commit was SVN r27005.
2012-08-12 01:28:23 +00:00
George Bosilca
ba879c2c51 Remove the unused map.
This commit was SVN r26960.
2012-08-07 12:06:13 +00:00
Ralph Castain
53b1a1c976 Cleanly error out when someone asks to map-to <object> if that object doesn't exist on a node.
This commit was SVN r26950.
2012-08-04 21:52:36 +00:00
Ralph Castain
61b09a132b Fix bynode mapping of multiple app-contexts
This commit was SVN r26949.
2012-08-03 21:45:40 +00:00
Ralph Castain
96f6f94c24 Ensure we don't get trapped in an infinite loop when ranking bynode if something isn't right
This commit was SVN r26948.
2012-08-03 21:45:10 +00:00
Ralph Castain
431d5361ed For those who really preferred our prior mode of operation that mapped procs and only launched daemons on the nodes that had procs on them, introduce the "novm" state machine component. This recreates the old mode of operation by re-ordering the launch sequence so that we allocate, then map, and then launch daemons only on the required nodes (instead of across the entire allocation).
This commit was SVN r26946.
2012-08-03 16:30:05 +00:00
Shiqing Fan
12d99a9ebb Update the hwloc build on Windows and related files.
This commit was SVN r26818.
2012-07-20 12:14:28 +00:00
Ralph Castain
b0938a254e Don't use a mutex where it isn't needed
This commit was SVN r26521.
2012-05-29 20:21:11 +00:00
Ralph Castain
32b66c166b Missed one blasted spot
This commit was SVN r26520.
2012-05-29 20:20:10 +00:00
Ralph Castain
9bedb25dda Cleanup some compiler warnings, some of which are actual logic errors
This commit was SVN r26519.
2012-05-29 20:11:51 +00:00
Jeff Squyres
2ba10c37fe Per RFC, bring in the following changes:
* Remove paffinity, maffinity, and carto frameworks -- they've been
   wholly replaced by hwloc.
 * Move ompi_mpi_init() affinity-setting/checking code down to ORTE.
 * Update sm, smcuda, wv, and openib components to no longer use carto.
   Instead, use hwloc data.  There are still optimizations possible in
   the sm/smcuda BTLs (i.e., making multiple mpools).  Also, the old
   carto-based code found out how many NUMA nodes were ''available''
   -- not how many were used ''in this job''.  The new hwloc-using
   code computes the same value -- it was not updated to calculate how
   many NUMA nodes are used ''by this job.''
   * Note that I cannot compile the smcuda and wv BTLs -- I ''think''
     they're right, but they need to be verified by their owners.
 * The openib component now does a bunch of stuff to figure out where
   "near" OpenFabrics devices are.  '''THIS IS A CHANGE IN DEFAULT
   BEHAVIOR!!''' and still needs to be verified by OpenFabrics vendors
   (I do not have a NUMA machine with an OpenFabrics device that is a
   non-uniform distance from multiple different NUMA nodes).
 * Completely rewrite the OMPI_Affinity_str() routine from the
   "affinity" mpiext extension.  This extension now understands
   hyperthreads; the output format of it has changed a bit to reflect
   this new information.
 * Bunches of minor changes around the code base to update names/types
   from maffinity/paffinity-based names to hwloc-based names.
 * Add some helper functions into the hwloc base, mainly having to do
   with the fact that we have the hwloc data reporting ''all''
   topology information, but sometimes you really only want the
   (online | available) data.

This commit was SVN r26391.
2012-05-07 14:52:54 +00:00
Ralph Castain
bd8b4f7f1e Sorry for mid-day commit, but I had promised on the call to do this upon my return.
Roll in the ORTE state machine. Remove last traces of opal_sos. Remove UTK epoch code.

Please see the various emails about the state machine change for details. I'll send something out later with more info on the new arch.

This commit was SVN r26242.
2012-04-06 14:23:13 +00:00
Ralph Castain
811413e9bc Correctly handle multiple cpu-set ranges. Correctly support optional binding directives combined with cpu-set.
This commit was SVN r26187.
2012-03-23 14:50:41 +00:00
Ralph Castain
ce0caf7567 Support -cpu-set by binding to the specified cpus in the absence of any other binding directive. Allows users to subdivide nodes for multiple parallel mpirun invocations.
This commit was SVN r26186.
2012-03-23 14:05:52 +00:00
Ralph Castain
b3aabf1565 Cleanup the --without-hwloc build. Thanks to Paul Hargrove for reporting it broken.
This commit was SVN r25931.
2012-02-15 11:08:57 +00:00
Ralph Castain
bba6508b4b Handle the default hostfile case a little better...
This commit was SVN r25928.
2012-02-15 03:33:49 +00:00
Ralph Castain
f14c4be580 Correct the ordering logic so the list gets correctly built in daemon vpid order
This commit was SVN r25818.
2012-01-30 16:25:07 +00:00
Shiqing Fan
bfbd3c67a5 Add a Windows file into the tarball.
This commit was SVN r25811.
2012-01-29 10:12:02 +00:00
Ralph Castain
3f31feee6f Handle the case where a user's rankfile specifies only cpus, and not socket:cpu pairs.
This commit was SVN r25803.
2012-01-27 12:21:45 +00:00
Ralph Castain
ef94e606c7 Add some debug
This commit was SVN r25791.
2012-01-26 19:23:32 +00:00
Shiqing Fan
2c9a4beffd Add and remove a few components for the Windows build.
This commit was SVN r25775.
2012-01-25 09:01:27 +00:00
Ralph Castain
477582abef Grrrr....fix ALL the cases where the membind warning occurs.
This commit was SVN r25715.
2012-01-11 23:51:18 +00:00
Ralph Castain
167ad944c4 Surprise, surprise - hwloc treats memory binding as at the thread, not process, level. Thus, hwloc always sets the membind proc-level support flag to false, and indicates actual memory binding support via the thread-level flag. So...just to be safe, test -both- flags and issue the "no support" warning ONLY if both are false.
This commit was SVN r25709.
2012-01-11 01:12:57 +00:00
Ralph Castain
2dd2694f25 Fix comm_spawn in oversubscribed conditions. If oversubscription is allowed, let nodes flow into the mapper even if they are oversubscribed, constrained by the slots_max absolute ceiling. Cleanup error messages when comm_spawn fails so it correctly and succinctly reports the error.
This commit was SVN r25659.
2011-12-15 18:04:48 +00:00
Ralph Castain
e683b2f9c7 Minor touchup - reset the pointer to the end of the list each time to ensure we get the nodes in correct daemon order
This commit was SVN r25651.
2011-12-14 22:16:52 +00:00
Ralph Castain
f531b09a8d Correctly handle -host and -hostfile options. Ensure the initial vm launch constrains itself to the union of specified hosts if those options are given. Get oversubscribe set correctly for that case.
This commit was SVN r25648.
2011-12-14 20:01:15 +00:00
George Bosilca
ac26f58bd7 I guess this wasn't yet ready for prime time.
This commit was SVN r25624.
2011-12-12 23:55:11 +00:00
Nathan Hjelm
885d5cbcf8 enable ptmalloc when using uGNI
This commit was SVN r25621.
2011-12-12 20:52:51 +00:00
Nathan Hjelm
be11acf727 bug fix. don't add node to allocated_nodes twice
This commit was SVN r25619.
2011-12-12 19:14:41 +00:00
Ralph Castain
7510339725 Remove stale orte_vm_launch param. Add a param that allows users to specify envars to forward/set so they can do it in the MCA param file instead of only via mpirun cmd line.
This commit was SVN r25580.
2011-12-06 21:31:22 +00:00
Ralph Castain
15facc4ba6 Fix comm_spawn yet again...add another test
This commit was SVN r25579.
2011-12-06 20:15:40 +00:00
Ralph Castain
90b7f2a7bf The rest of the multi app_context fix. Remove the restriction on number of app_contexts that can have zero np specified as multiple mappers now support that use-case. Update the ranking algorithms to respect and track bookmarks. Ensure we properly set the oversubscribed flag on a per-node basis.
This commit was SVN r25578.
2011-12-06 17:28:29 +00:00
Ralph Castain
d9c7764e9b Remove some debug
This commit was SVN r25575.
2011-12-05 22:04:50 +00:00
Ralph Castain
df2f594aa8 Some cleanup associated with multiple app_contexts. Ensure nodes only get entered once into the map. Correctly handle bookmarks. Cleanup tracking of slots_inuse and correct detection of oversubscription.
Still need to resolve the ranking issue so it starts at the bookmark, but that will come next.

This commit was SVN r25574.
2011-12-05 22:01:08 +00:00