/*
 * Copyright (c) 2004-2010 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation.  All rights reserved.
 * Copyright (c) 2004-2011 The University of Tennessee and The University
 *                         of Tennessee Research Foundation.  All rights
 *                         reserved.
 * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart.  All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * Copyright (c) 2007-2010 Oracle and/or its affiliates.  All rights reserved.
 * Copyright (c) 2007-2012 Cisco Systems, Inc.  All rights reserved.
 * Copyright (c) 2011-2013 Los Alamos National Security, LLC.
 *                         All rights reserved.
 * Copyright (c) 2013-2015 Intel, Inc.  All rights reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

/**
 * @file
 *
 * Global params for OpenRTE
 */

#ifndef ORTE_RUNTIME_ORTE_GLOBALS_H
#define ORTE_RUNTIME_ORTE_GLOBALS_H

#include "orte_config.h"
#include "orte/types.h"

#include <sys/types.h>
#ifdef HAVE_SYS_TIME_H
#include <sys/time.h>
#endif

#include "opal/class/opal_hash_table.h"
#include "opal/class/opal_pointer_array.h"
#include "opal/class/opal_value_array.h"
#include "opal/class/opal_ring_buffer.h"
#include "opal/threads/threads.h"
#include "opal/mca/event/event.h"
#include "opal/mca/hwloc/hwloc.h"
#include "opal/mca/hwloc/base/base.h"

#include "orte/mca/plm/plm_types.h"
#include "orte/mca/rml/rml_types.h"
#include "orte/util/attr.h"
#include "orte/util/proc_info.h"
#include "orte/util/name_fns.h"
#include "orte/util/error_strings.h"
#include "orte/runtime/runtime.h"


BEGIN_C_DECLS

ORTE_DECLSPEC extern int orte_debug_verbosity;  /* instantiated in orte/runtime/orte_init.c */
ORTE_DECLSPEC extern char *orte_prohibited_session_dirs;  /* instantiated in orte/runtime/orte_init.c */
ORTE_DECLSPEC extern bool orte_xml_output;  /* instantiated in orte/runtime/orte_globals.c */
ORTE_DECLSPEC extern FILE *orte_xml_fp;  /* instantiated in orte/runtime/orte_globals.c */
ORTE_DECLSPEC extern bool orte_help_want_aggregate;  /* instantiated in orte/util/show_help.c */
ORTE_DECLSPEC extern char *orte_job_ident;  /* instantiated in orte/runtime/orte_globals.c */
ORTE_DECLSPEC extern bool orte_create_session_dirs;  /* instantiated in orte/runtime/orte_init.c */
ORTE_DECLSPEC extern bool orte_execute_quiet;  /* instantiated in orte/runtime/orte_globals.c */
ORTE_DECLSPEC extern bool orte_report_silent_errors;  /* instantiated in orte/runtime/orte_globals.c */
ORTE_DECLSPEC extern opal_event_base_t *orte_event_base;  /* instantiated in orte/runtime/orte_init.c */
ORTE_DECLSPEC extern bool orte_event_base_active;  /* instantiated in orte/runtime/orte_init.c */
ORTE_DECLSPEC extern bool orte_proc_is_bound;  /* instantiated in orte/runtime/orte_init.c */
ORTE_DECLSPEC extern int orte_progress_thread_debug;  /* instantiated in orte/runtime/orte_init.c */

#if OPAL_HAVE_HWLOC
/**
 * Global indicating where this process was bound to at launch (will
 * be NULL if !orte_proc_is_bound)
 */
OPAL_DECLSPEC extern hwloc_cpuset_t orte_proc_applied_binding;  /* instantiated in orte/runtime/orte_init.c */
#endif

/* Shortcut for some commonly used names */
#define ORTE_NAME_WILDCARD      (&orte_name_wildcard)
ORTE_DECLSPEC extern orte_process_name_t orte_name_wildcard;  /** instantiated in orte/runtime/orte_init.c */
#define ORTE_NAME_INVALID       (&orte_name_invalid)
ORTE_DECLSPEC extern orte_process_name_t orte_name_invalid;  /** instantiated in orte/runtime/orte_init.c */

#define ORTE_PROC_MY_NAME       (&orte_process_info.my_name)

/* define a special name that points to my parent (aka the process that spawned me) */
#define ORTE_PROC_MY_PARENT     (&orte_process_info.my_parent)

/* define a special name that belongs to orterun */
#define ORTE_PROC_MY_HNP        (&orte_process_info.my_hnp)

/* define the name of my daemon */
#define ORTE_PROC_MY_DAEMON     (&orte_process_info.my_daemon)

/* define the name of my scheduler */
#define ORTE_PROC_MY_SCHEDULER  (&orte_process_info.my_scheduler)
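
/*
 * Illustrative usage sketch: the name shortcuts above are handed to any API
 * that expects an orte_process_name_t*, e.g. when checking whether a message
 * sender is mpirun (the HNP) or when printing our own identity.
 * orte_util_compare_name_fields() and ORTE_NAME_PRINT() are assumed to be
 * provided by the name_fns.h include above; "sender" is a hypothetical
 * variable supplied by the caller.
 *
 *   if (OPAL_EQUAL == orte_util_compare_name_fields(ORTE_NS_CMP_ALL,
 *                                                   &sender, ORTE_PROC_MY_HNP)) {
 *       opal_output(0, "%s got a message from mpirun",
 *                   ORTE_NAME_PRINT(ORTE_PROC_MY_NAME));
 *   }
 */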

ORTE_DECLSPEC extern bool orte_in_parallel_debugger;

/* error manager callback function */
typedef void (*orte_err_cb_fn_t)(orte_process_name_t *proc, orte_proc_state_t state, void *cbdata);

/* define an object for timer events */
typedef struct {
    opal_object_t super;
    struct timeval tv;
    opal_event_t *ev;
    void *payload;
} orte_timer_t;
OBJ_CLASS_DECLARATION(orte_timer_t);
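
/*
 * Illustrative sketch: an orte_timer_t is created like any other OPAL object
 * and released when done; the caller supplies the timeval and an optional
 * payload. The event setup itself is omitted because it depends on the
 * caller's event base and callback; "my_state" is a hypothetical
 * caller-owned pointer.
 *
 *   orte_timer_t *tm = OBJ_NEW(orte_timer_t);
 *   tm->tv.tv_sec = 2;          // fire after two seconds
 *   tm->tv.tv_usec = 0;
 *   tm->payload = my_state;     // hypothetical caller-owned data
 *   ...
 *   OBJ_RELEASE(tm);
 */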

ORTE_DECLSPEC extern int orte_exit_status;

/* ORTE event priorities - we define these
 * at levels that permit higher layers such as
 * OMPI to handle their events at higher priority,
 * with the exception of errors. Errors generally
 * require exception handling (e.g., ctrl-c termination)
 * that overrides the need to process MPI messages
 */
#define ORTE_ERROR_PRI  OPAL_EV_ERROR_PRI
#define ORTE_MSG_PRI    OPAL_EV_MSG_LO_PRI
#define ORTE_SYS_PRI    OPAL_EV_SYS_LO_PRI
#define ORTE_INFO_PRI   OPAL_EV_INFO_LO_PRI

/* define some common keys used in ORTE */
#define ORTE_DB_DAEMON_VPID     "orte.daemon.vpid"

/* State Machine lists */
ORTE_DECLSPEC extern opal_list_t orte_job_states;
ORTE_DECLSPEC extern opal_list_t orte_proc_states;

/* a clean output channel without prefix */
ORTE_DECLSPEC extern int orte_clean_output;

#define ORTE_GLOBAL_ARRAY_BLOCK_SIZE    64
#define ORTE_GLOBAL_ARRAY_MAX_SIZE      INT_MAX

/* define a default error return code for ORTE */
#define ORTE_ERROR_DEFAULT_EXIT_CODE    1

/**
 * Define a macro for updating the orte_exit_status
 * The macro provides a convenient way of doing this
 * so that we can add thread locking at some point
 * since the orte_exit_status is a global variable.
 *
 * Ensure that we do not overwrite the exit status if it has
 * already been set to some non-zero value. If we don't make
 * this check, then different parts of the code could overwrite
 * each other's exit status in the case of abnormal termination.
 *
 * For example, if a process aborts, we would record the initial
 * exit code from the aborted process. However, subsequent processes
 * will have been aborted by signal as we kill the job. We don't want
 * the subsequent processes to overwrite the original exit code so
 * we can tell the user the exit code from the process that caused
 * the whole thing to happen.
 */
#define ORTE_UPDATE_EXIT_STATUS(newstatus)                                  \
    do {                                                                    \
        if (0 == orte_exit_status && 0 != newstatus) {                      \
            OPAL_OUTPUT_VERBOSE((1, orte_debug_output,                      \
                                 "%s:%s(%d) updating exit status to %d",    \
                                 ORTE_NAME_PRINT(ORTE_PROC_MY_NAME),        \
                                 __FILE__, __LINE__, newstatus));           \
            orte_exit_status = newstatus;                                   \
        }                                                                   \
    } while(0);
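
/*
 * Illustrative usage sketch: recording a failed process's exit code. Per the
 * comment above, only the first non-zero status is kept, so later failures
 * do not overwrite it. The function name and the "proc" argument are
 * hypothetical; only the macro itself is defined here.
 *
 *   void example_record_failure(orte_proc_t *proc)
 *   {
 *       ORTE_UPDATE_EXIT_STATUS(proc->exit_code);
 *   }
 */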

/* sometimes we need to reset the exit status - for example, when we
 * are restarting a failed process
 */
#define ORTE_RESET_EXIT_STATUS()                                        \
    do {                                                                \
        OPAL_OUTPUT_VERBOSE((1, orte_debug_output,                      \
                             "%s:%s(%d) resetting exit status",         \
                             ORTE_NAME_PRINT(ORTE_PROC_MY_NAME),        \
                             __FILE__, __LINE__));                      \
        orte_exit_status = 0;                                           \
    } while(0);


/* define a macro for computing time differences - used for timing tests
 * across the code base
 */
#define ORTE_COMPUTE_TIME_DIFF(r, ur, s1, us1, s2, us2)     \
    do {                                                    \
        (r) = (s2) - (s1);                                  \
        if ((us2) >= (us1)) {                               \
            (ur) = (us2) - (us1);                           \
        } else {                                            \
            (r)--;                                          \
            (ur) = 1000000 - (us1) + (us2);                 \
        }                                                   \
    } while(0);
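
/*
 * Illustrative usage sketch: timing a region with ORTE_COMPUTE_TIME_DIFF.
 * The macro takes the seconds and microseconds of two timestamps and yields
 * the elapsed time split the same way, borrowing from the seconds field when
 * the microseconds wrap.
 *
 *   struct timeval start, stop;
 *   long secs, usecs;
 *   gettimeofday(&start, NULL);
 *   // ... work to be timed ...
 *   gettimeofday(&stop, NULL);
 *   ORTE_COMPUTE_TIME_DIFF(secs, usecs,
 *                          start.tv_sec, start.tv_usec,
 *                          stop.tv_sec, stop.tv_usec);
 *   opal_output(0, "elapsed %ld.%06ld sec", secs, usecs);
 */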

/* define a set of flags to control the launch of a job */
typedef uint16_t orte_job_controls_t;
#define ORTE_JOB_CONTROL    OPAL_UINT16


/* global type definitions used by RTE - instanced in orte_globals.c */

/************
 * Declare this to allow us to use it before fully
 * defining it - resolves potential circular definition
 */
struct orte_proc_t;
struct orte_job_map_t;
/************/

/**
 * Information about a specific application to be launched in the RTE.
 */
typedef struct {
    /** Parent object */
    opal_object_t super;
    /** Unique index when multiple apps per job */
    orte_app_idx_t idx;
    /** Absolute pathname of argv[0] */
    char   *app;
    /** Number of copies of this process that are to be launched */
    orte_std_cntr_t num_procs;
    /** Array of pointers to the proc objects for procs of this app_context
     * NOTE - not always used
     */
    opal_pointer_array_t procs;
    /** State of the app_context */
    orte_app_state_t state;
    /** First MPI rank of this app_context in the job */
    orte_vpid_t first_rank;
    /** Standard argv-style array, including a final NULL pointer */
    char  **argv;
    /** Standard environ-style array, including a final NULL pointer */
    char  **env;
    /** Current working directory for this app */
    char   *cwd;
    /* flags */
    orte_app_context_flags_t flags;
    /* provide a list of attributes for this app_context in place
     * of having a continually-expanding list of fixed-use values.
     * This is a list of opal_value_t's, with the intent of providing
     * flexibility without constantly expanding the memory footprint
     * every time we want some new (rarely used) option
     */
    opal_list_t attributes;
} orte_app_context_t;

ORTE_DECLSPEC OBJ_CLASS_DECLARATION(orte_app_context_t);

typedef struct {
    /** Base object so this can be put on a list */
    opal_list_item_t super;
    /* index of this node object in global array */
    orte_std_cntr_t index;
    /** String node name */
    char *name;
    /* daemon on this node */
    struct orte_proc_t *daemon;
    /** number of procs on this node */
    orte_vpid_t num_procs;
    /* array of pointers to procs on this node */
    opal_pointer_array_t *procs;
    /* next node rank on this node */
    orte_node_rank_t next_node_rank;
    /** State of this node */
    orte_node_state_t state;
    /** A "soft" limit on the number of slots available on the node.
        This will typically correspond to the number of physical CPUs
        that we have been allocated on this node and would be the
        "ideal" number of processes for us to launch. */
    orte_std_cntr_t slots;
    /** How many processes have already been launched, used by one or
        more jobs on this node. */
    orte_std_cntr_t slots_inuse;
    /** A "hard" limit (if set -- a value of 0 implies no hard limit)
        on the number of slots that can be allocated on a given
        node.  This is for environments (e.g., grid) where there may be
        fixed limits on the number of slots that can be used.

        This value also could have been a boolean - but we may want to
        allow the hard limit to be different than the soft limit - in
        other words allow the node to be oversubscribed up to a
        specified limit.  For example, if we have two processors, we
        may want to allow up to four processes but no more. */
    orte_std_cntr_t slots_max;
#if OPAL_HAVE_HWLOC
    /* system topology for this node */
    hwloc_topology_t topology;
#endif
    /* flags */
    orte_node_flags_t flags;
    /* list of orte_attribute_t */
    opal_list_t attributes;
} orte_node_t;
ORTE_DECLSPEC OBJ_CLASS_DECLARATION(orte_node_t);
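
/*
 * Illustrative sketch: how the soft/hard slot limits documented above might
 * be consulted when deciding whether a node can take another process. This
 * is a simplified rendering of the documented semantics, not the actual
 * mapper logic; the function name and the "allow_oversubscribe" flag are
 * hypothetical.
 *
 *   static bool example_node_is_full(orte_node_t *node, bool allow_oversubscribe)
 *   {
 *       if (0 != node->slots_max && node->slots_inuse >= node->slots_max) {
 *           return true;   // hard limit - never exceed it
 *       }
 *       // soft limit - may only be exceeded when oversubscription is allowed
 *       return (node->slots_inuse >= node->slots) && !allow_oversubscribe;
 *   }
 */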

typedef struct {
    /** Base object so this can be put on a list */
    opal_list_item_t super;
    /* personality for this job */
    char *personality;
    /* jobid for this job */
    orte_jobid_t jobid;
    /* offset to the total number of procs so shared memory
     * components can potentially connect to any spawned jobs */
    orte_vpid_t offset;
    /* app_context array for this job */
    opal_pointer_array_t *apps;
    /* number of app_contexts in the array */
    orte_app_idx_t num_apps;
    /* rank desiring stdin - for now, either one rank, all ranks
     * (wildcard), or none (invalid)
     */
    orte_vpid_t stdin_target;
    /* total slots allocated to this job */
    orte_std_cntr_t total_slots_alloc;
    /* number of procs in this job */
    orte_vpid_t num_procs;
    /* array of pointers to procs in this job */
    opal_pointer_array_t *procs;
    /* map of the job */
    struct orte_job_map_t *map;
    /* bookmark for where we are in mapping - this
     * indicates the node where we stopped
     */
    orte_node_t *bookmark;
    /* state of the overall job */
    orte_job_state_t state;
    /* number of procs mapped */
    orte_vpid_t num_mapped;
    /* number of procs launched */
    orte_vpid_t num_launched;
    /* number of procs reporting contact info */
    orte_vpid_t num_reported;
    /* number of procs terminated */
    orte_vpid_t num_terminated;
    /* number of daemons reported launched so we can track progress */
    orte_vpid_t num_daemons_reported;
    /* originator of a dynamic spawn */
    orte_process_name_t originator;
    /* number of local procs */
    orte_vpid_t num_local_procs;
    /* flags */
    orte_job_flags_t flags;
    /* attributes */
    opal_list_t attributes;
} orte_job_t;
ORTE_DECLSPEC OBJ_CLASS_DECLARATION(orte_job_t);

struct orte_proc_t {
    /** Base object so this can be put on a list */
    opal_list_item_t super;
    /* process name */
    orte_process_name_t name;
    /* the vpid of my parent - the daemon vpid for an app
     * or the vpid of the parent in the routing tree of
     * a daemon */
    orte_vpid_t parent;
    /* pid */
    pid_t pid;
    /* local rank amongst my peers on the node
     * where this is running - this value is
     * needed by MPI procs so that the lowest
     * rank on a node can perform certain fns -
     * e.g., open an sm backing file
     */
    orte_local_rank_t local_rank;
    /* local rank on the node across all procs
     * and jobs known to this HNP - this is
     * needed so that procs can do things like
     * know which static IP port to use
     */
    orte_node_rank_t node_rank;
    /* rank of this proc within its app context - this
     * will just equal its vpid for single app_context
     * applications
     */
    int32_t app_rank;
    /* Last state used to trigger the errmgr for this proc */
    orte_proc_state_t last_errmgr_state;
    /* process state */
    orte_proc_state_t state;
    /* exit code */
    orte_exit_code_t exit_code;
    /* the app_context that generated this proc */
    orte_app_idx_t app_idx;
    /* pointer to the node where this proc is executing */
    orte_node_t *node;
    /* RML contact info */
    char *rml_uri;
    /* some boolean flags */
    orte_proc_flags_t flags;
    /* list of opal_value_t attributes */
    opal_list_t attributes;
};
typedef struct orte_proc_t orte_proc_t;
ORTE_DECLSPEC OBJ_CLASS_DECLARATION(orte_proc_t);

#if OPAL_HAVE_HWLOC
/* define an object for storing node topologies */
typedef struct {
    opal_object_t super;
    hwloc_topology_t topo;
    char *sig;
} orte_topology_t;
ORTE_DECLSPEC OBJ_CLASS_DECLARATION(orte_topology_t);
#endif

/**
 * Get a job data object
 * We cannot just reference a job data object with its jobid as
 * the jobid is no longer an index into the array. This change
 * was necessitated by modification of the jobid to include
 * an mpirun-unique qualifier to eliminate any global name
 * service
 */
ORTE_DECLSPEC orte_job_t* orte_get_job_data_object(orte_jobid_t job);
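
/*
 * Illustrative usage sketch: look up a job by jobid and walk its procs. The
 * accessor returns NULL for an unknown jobid, and the procs pointer array
 * can contain NULL slots, so both are checked. This is a hedged example, not
 * code from orte_globals.c; "jobid" is assumed to be supplied by the caller.
 *
 *   orte_job_t *jdata = orte_get_job_data_object(jobid);
 *   if (NULL != jdata) {
 *       for (int i = 0; i < jdata->procs->size; i++) {
 *           orte_proc_t *p = (orte_proc_t*)opal_pointer_array_get_item(jdata->procs, i);
 *           if (NULL == p) {
 *               continue;
 *           }
 *           // inspect p->state, p->node, etc.
 *       }
 *   }
 */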

/**
 * Get a proc data object
 */
ORTE_DECLSPEC orte_proc_t* orte_get_proc_object(orte_process_name_t *proc);

/**
 * Get the daemon vpid hosting a given proc
 */
ORTE_DECLSPEC orte_vpid_t orte_get_proc_daemon_vpid(orte_process_name_t *proc);

/* Get the hostname of a proc */
ORTE_DECLSPEC char* orte_get_proc_hostname(orte_process_name_t *proc);

/* get the node rank of a proc */
ORTE_DECLSPEC orte_node_rank_t orte_get_proc_node_rank(orte_process_name_t *proc);

/* Find the lowest vpid alive in a given job */
ORTE_DECLSPEC orte_vpid_t orte_get_lowest_vpid_alive(orte_jobid_t job);

/* global variables used by RTE - instanced in orte_globals.c */
ORTE_DECLSPEC extern bool orte_debug_daemons_flag;
ORTE_DECLSPEC extern bool orte_debug_daemons_file_flag;
ORTE_DECLSPEC extern bool orte_leave_session_attached;
ORTE_DECLSPEC extern bool orte_do_not_launch;
ORTE_DECLSPEC extern bool orted_spin_flag;
ORTE_DECLSPEC extern char *orte_local_cpu_type;
ORTE_DECLSPEC extern char *orte_local_cpu_model;
ORTE_DECLSPEC extern char *orte_basename;
ORTE_DECLSPEC extern bool orte_coprocessors_detected;
ORTE_DECLSPEC extern opal_hash_table_t *orte_coprocessors;
ORTE_DECLSPEC extern char *orte_topo_signature;

/* ORTE OOB port flags */
ORTE_DECLSPEC extern bool orte_static_ports;
ORTE_DECLSPEC extern char *orte_oob_static_ports;
ORTE_DECLSPEC extern bool orte_standalone_operation;

/* nodename flags */
ORTE_DECLSPEC extern bool orte_keep_fqdn_hostnames;
ORTE_DECLSPEC extern bool orte_have_fqdn_allocation;
ORTE_DECLSPEC extern bool orte_show_resolved_nodenames;
ORTE_DECLSPEC extern bool orte_retain_aliases;
ORTE_DECLSPEC extern int orte_use_hostname_alias;

/* debug flags */
ORTE_DECLSPEC extern int orted_debug_failure;
ORTE_DECLSPEC extern int orted_debug_failure_delay;

/* homogeneity flags */
ORTE_DECLSPEC extern bool orte_hetero_apps;
ORTE_DECLSPEC extern bool orte_hetero_nodes;

ORTE_DECLSPEC extern bool orte_never_launched;
ORTE_DECLSPEC extern bool orte_devel_level_output;
ORTE_DECLSPEC extern bool orte_display_topo_with_map;
ORTE_DECLSPEC extern bool orte_display_diffable_output;

ORTE_DECLSPEC extern char **orte_launch_environ;

ORTE_DECLSPEC extern bool orte_hnp_is_allocated;
ORTE_DECLSPEC extern bool orte_allocation_required;
ORTE_DECLSPEC extern bool orte_managed_allocation;
ORTE_DECLSPEC extern char *orte_set_slots;
ORTE_DECLSPEC extern bool orte_display_allocation;
ORTE_DECLSPEC extern bool orte_display_devel_allocation;
ORTE_DECLSPEC extern bool orte_soft_locations;
ORTE_DECLSPEC extern bool orte_hnp_connected;

/* launch agents */
ORTE_DECLSPEC extern char *orte_launch_agent;
ORTE_DECLSPEC extern char **orted_cmd_line;
ORTE_DECLSPEC extern char **orte_fork_agent;

/* debugger job */
ORTE_DECLSPEC extern bool orte_debugger_dump_proctable;
ORTE_DECLSPEC extern char *orte_debugger_test_daemon;
ORTE_DECLSPEC extern bool orte_debugger_test_attach;
ORTE_DECLSPEC extern int orte_debugger_check_rate;

/* exit flags */
ORTE_DECLSPEC extern bool orte_abnormal_term_ordered;
ORTE_DECLSPEC extern bool orte_routing_is_enabled;
ORTE_DECLSPEC extern bool orte_job_term_ordered;
ORTE_DECLSPEC extern bool orte_orteds_term_ordered;
ORTE_DECLSPEC extern bool orte_allowed_exit_without_sync;
ORTE_DECLSPEC extern int orte_startup_timeout;

ORTE_DECLSPEC extern int orte_timeout_usec_per_proc;
ORTE_DECLSPEC extern float orte_max_timeout;
ORTE_DECLSPEC extern orte_timer_t *orte_mpiexec_timeout;
ORTE_DECLSPEC extern opal_buffer_t *orte_tree_launch_cmd;

/* global arrays for data storage */
ORTE_DECLSPEC extern opal_pointer_array_t *orte_job_data;
ORTE_DECLSPEC extern opal_pointer_array_t *orte_node_pool;
ORTE_DECLSPEC extern opal_pointer_array_t *orte_node_topologies;
ORTE_DECLSPEC extern opal_pointer_array_t *orte_local_children;
ORTE_DECLSPEC extern orte_vpid_t orte_total_procs;

/* whether or not to forward SIGTSTP and SIGCONT signals */
ORTE_DECLSPEC extern bool orte_forward_job_control;

/* IOF controls */
ORTE_DECLSPEC extern bool orte_tag_output;
ORTE_DECLSPEC extern bool orte_timestamp_output;
ORTE_DECLSPEC extern char *orte_output_filename;
/* generate new xterm windows to display output from specified ranks */
ORTE_DECLSPEC extern char *orte_xterm;

/* whether or not to report launch progress */
ORTE_DECLSPEC extern bool orte_report_launch_progress;

/* allocation specification */
ORTE_DECLSPEC extern char *orte_default_hostfile;
ORTE_DECLSPEC extern bool orte_default_hostfile_given;
ORTE_DECLSPEC extern char *orte_rankfile;
ORTE_DECLSPEC extern int orte_num_allocated_nodes;
ORTE_DECLSPEC extern char *orte_node_regex;

/* PMI version control */
ORTE_DECLSPEC extern int orted_pmi_version;

/* tool communication controls */
ORTE_DECLSPEC extern bool orte_report_events;
ORTE_DECLSPEC extern char *orte_report_events_uri;

/* process recovery */
ORTE_DECLSPEC extern bool orte_enable_recovery;
ORTE_DECLSPEC extern int32_t orte_max_restarts;
/* barrier control */
ORTE_DECLSPEC extern bool orte_do_not_barrier;

/* exit status reporting */
ORTE_DECLSPEC extern bool orte_report_child_jobs_separately;
ORTE_DECLSPEC extern struct timeval orte_child_time_to_exit;
ORTE_DECLSPEC extern bool orte_abort_non_zero_exit;

/* length of stat history to keep */
ORTE_DECLSPEC extern int orte_stat_history_size;

/* envars to forward */
ORTE_DECLSPEC extern char **orte_forwarded_envars;

/* map-reduce mode */
ORTE_DECLSPEC extern bool orte_map_reduce;
ORTE_DECLSPEC extern bool orte_staged_execution;

/* map stddiag output to stderr so it isn't forwarded to mpirun */
ORTE_DECLSPEC extern bool orte_map_stddiag_to_stderr;

/* maximum size of virtual machine - used to subdivide allocation */
ORTE_DECLSPEC extern int orte_max_vm_size;

/* user debugger */
ORTE_DECLSPEC extern char *orte_base_user_debugger;

/* binding directives for daemons to restrict them
 * to certain cores
 */
ORTE_DECLSPEC extern char *orte_daemon_cores;

/* cutoff for collective modex */
ORTE_DECLSPEC extern uint32_t orte_direct_modex_cutoff;

END_C_DECLS

#endif /* ORTE_RUNTIME_ORTE_GLOBALS_H */