/*
 * Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation.  All rights reserved.
 * Copyright (c) 2004-2005 The University of Tennessee and The University
 *                         of Tennessee Research Foundation.  All rights
 *                         reserved.
 * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart.  All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * Copyright (c) 2011-2012 Los Alamos National Security, LLC.  All rights
 *                         reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

#include "orte_config.h"
|
2009-03-13 02:10:32 +00:00
|
|
|
|
2009-07-16 18:27:33 +00:00
|
|
|
#ifdef HAVE_STRING_H
|
2009-03-13 02:10:32 +00:00
|
|
|
#include <string.h>
|
|
|
|
#endif
|
|
|
|
|
2008-02-28 01:57:57 +00:00
|
|
|
#include "orte/constants.h"
|
|
|
|
#include "orte/types.h"
|
2005-10-07 22:24:52 +00:00
|
|
|
|
|
|
|
#include "opal/mca/mca.h"
|
|
|
|
#include "opal/mca/base/base.h"
|
2006-10-17 16:06:17 +00:00
|
|
|
#include "opal/class/opal_list.h"
|
2009-02-14 02:26:12 +00:00
|
|
|
#include "opal/util/output.h"
|
At long last, the fabled revision to the affinity system has arrived. A more detailed explanation of how this all works will be presented here:
https://svn.open-mpi.org/trac/ompi/wiki/ProcessPlacement
The wiki page is incomplete at the moment, but I hope to complete it over the next few days. I will provide updates on the devel list. As the wiki page states, the default and most commonly used options remain unchanged (except as noted below). New, esoteric and complex options have been added, but unless you are a true masochist, you are unlikely to use many of them beyond perhaps an initial curiosity-motivated experimentation.
In a nutshell, this commit revamps the map/rank/bind procedure to take into account topology info on the compute nodes. I have, for the most part, preserved the default behaviors, with three notable exceptions:
1. I have at long last bowed my head in submission to the system admin's of managed clusters. For years, they have complained about our default of allowing users to oversubscribe nodes - i.e., to run more processes on a node than allocated slots. Accordingly, I have modified the default behavior: if you are running off of hostfile/dash-host allocated nodes, then the default is to allow oversubscription. If you are running off of RM-allocated nodes, then the default is to NOT allow oversubscription. Flags to override these behaviors are provided, so this only affects the default behavior.
2. both cpus/rank and stride have been removed. The latter was demanded by those who didn't understand the purpose behind it - and I agreed as the users who requested it are no longer using it. The former was removed temporarily pending implementation.
3. vm launch is now the sole method for starting OMPI. It was just too darned hard to maintain multiple launch procedures - maybe someday, provided someone can demonstrate a reason to do so.
As Jeff stated, it is impossible to fully test a change of this size. I have tested it on Linux and Mac, covering all the default and simple options, singletons, and comm_spawn. That said, I'm sure others will find problems, so I'll be watching MTT results until this stabilizes.
This commit was SVN r25476.
2011-11-15 03:40:11 +00:00
|
|
|
#include "opal/dss/dss.h"
|
2006-10-17 16:06:17 +00:00
|
|
|
|
2008-06-09 14:53:58 +00:00
|
|
|
#include "orte/util/show_help.h"
|
2005-10-07 22:24:52 +00:00
|
|
|
#include "orte/mca/errmgr/errmgr.h"
|
At long last, the fabled revision to the affinity system has arrived. A more detailed explanation of how this all works will be presented here:
https://svn.open-mpi.org/trac/ompi/wiki/ProcessPlacement
The wiki page is incomplete at the moment, but I hope to complete it over the next few days. I will provide updates on the devel list. As the wiki page states, the default and most commonly used options remain unchanged (except as noted below). New, esoteric and complex options have been added, but unless you are a true masochist, you are unlikely to use many of them beyond perhaps an initial curiosity-motivated experimentation.
In a nutshell, this commit revamps the map/rank/bind procedure to take into account topology info on the compute nodes. I have, for the most part, preserved the default behaviors, with three notable exceptions:
1. I have at long last bowed my head in submission to the system admin's of managed clusters. For years, they have complained about our default of allowing users to oversubscribe nodes - i.e., to run more processes on a node than allocated slots. Accordingly, I have modified the default behavior: if you are running off of hostfile/dash-host allocated nodes, then the default is to allow oversubscription. If you are running off of RM-allocated nodes, then the default is to NOT allow oversubscription. Flags to override these behaviors are provided, so this only affects the default behavior.
2. both cpus/rank and stride have been removed. The latter was demanded by those who didn't understand the purpose behind it - and I agreed as the users who requested it are no longer using it. The former was removed temporarily pending implementation.
3. vm launch is now the sole method for starting OMPI. It was just too darned hard to maintain multiple launch procedures - maybe someday, provided someone can demonstrate a reason to do so.
As Jeff stated, it is impossible to fully test a change of this size. I have tested it on Linux and Mac, covering all the default and simple options, singletons, and comm_spawn. That said, I'm sure others will find problems, so I'll be watching MTT results until this stabilizes.
This commit was SVN r25476.
2011-11-15 03:40:11 +00:00
|
|
|
#include "orte/mca/rmaps/base/base.h"
|
2008-02-28 01:57:57 +00:00
|
|
|
#include "orte/util/name_fns.h"
|
|
|
|
#include "orte/runtime/orte_globals.h"
|
2008-08-05 15:09:29 +00:00
|
|
|
#include "orte/runtime/orte_wait.h"
|
2008-02-28 01:57:57 +00:00
|
|
|
#include "orte/util/hostfile/hostfile.h"
|
|
|
|
#include "orte/util/dash_host/dash_host.h"
|
2008-03-23 23:10:15 +00:00
|
|
|
#include "orte/util/proc_info.h"
|
2009-09-09 17:47:58 +00:00
|
|
|
#include "orte/util/comm/comm.h"
|
2012-04-06 14:23:13 +00:00
|
|
|
#include "orte/mca/state/state.h"
|
2010-07-17 21:03:27 +00:00
|
|
|
#include "orte/runtime/orte_quit.h"
|
2005-10-07 22:24:52 +00:00
|
|
|
|
2006-09-14 21:29:51 +00:00
|
|
|
#include "orte/mca/ras/base/ras_private.h"
|
2005-10-07 22:24:52 +00:00
|
|
|
|
2009-07-14 14:34:11 +00:00
|
|
|
/* static function to display allocation */
static void display_alloc(void)
{
    char *tmp=NULL, *tmp2, *tmp3, *pfx=NULL;
    int i;
    orte_node_t *alloc;

    if (orte_xml_output) {
        asprintf(&tmp, "<allocation>\n");
        pfx = "\t";
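        /* this prefix is handed to opal_dss.print below so each node entry
         * is indented inside the <allocation> element */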
    } else {
        asprintf(&tmp, "\n====================== ALLOCATED NODES ======================\n");
    }
    for (i=0; i < orte_node_pool->size; i++) {
        if (NULL == (alloc = (orte_node_t*)opal_pointer_array_get_item(orte_node_pool, i))) {
            continue;
        }
        opal_dss.print(&tmp2, pfx, alloc, ORTE_NODE);
        if (NULL == tmp) {
            tmp = tmp2;
        } else {
            asprintf(&tmp3, "%s%s", tmp, tmp2);
            free(tmp);
            free(tmp2);
            tmp = tmp3;
        }
    }
    if (orte_xml_output) {
        fprintf(orte_xml_fp, "%s</allocation>\n", tmp);
        fflush(orte_xml_fp);
    } else {
        opal_output(orte_clean_output, "%s\n\n=================================================================\n", tmp);
    }
    free(tmp);
}

/*
 * Perform the allocation procedure for this HNP: discover the global
 * pool of nodes (from an active RAS module, hostfiles, -host, rankfile,
 * or just the local node) and record it for use by all subsequent jobs.
 * This is invoked as a state-machine event callback - fd and args are
 * unused, and cbdata is the state caddy carrying the job that triggered
 * the allocation.
 */
void orte_ras_base_allocate(int fd, short args, void *cbdata)
{
    int rc;
    orte_job_t *jdata;
    opal_list_t nodes;
    orte_node_t *node;
    orte_std_cntr_t i;
    orte_app_context_t *app;
    orte_state_caddy_t *caddy = (orte_state_caddy_t*)cbdata;
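    /* note: the caddy carries the target job and must be released on every
     * exit path from this callback */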

    OPAL_OUTPUT_VERBOSE((5, orte_ras_base.ras_output,
                         "%s ras:base:allocate",
                         ORTE_NAME_PRINT(ORTE_PROC_MY_NAME)));

    /* convenience */
    jdata = caddy->jdata;

    /* if we already did this, don't do it again - the pool of
     * global resources is set.
     */
    if (orte_ras_base.allocation_read) {
        OPAL_OUTPUT_VERBOSE((5, orte_ras_base.ras_output,
                             "%s ras:base:allocate allocation already read",
                             ORTE_NAME_PRINT(ORTE_PROC_MY_NAME)));
        goto next_state;
    }
    orte_ras_base.allocation_read = true;

    /* Otherwise, we have to create
     * the initial set of resources that will delineate all
     * further operations serviced by this HNP. This list will
     * contain ALL nodes that can be used by any subsequent job.
     *
     * In other words, if a node isn't found in this step, then
     * no job launched by this HNP will be able to utilize it.
     */

    /* construct a list to hold the results */
    OBJ_CONSTRUCT(&nodes, opal_list_t);
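    /* nodes found by any of the methods below are appended to this local
     * list, then transferred into the global pool by orte_ras_base_node_insert */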

    /* if a component was selected, then we know we are in a managed
     * environment. - the active module will return a list of what it found
     */
    if (NULL != orte_ras_base.active_module) {
        /* read the allocation */
        if (ORTE_SUCCESS != (rc = orte_ras_base.active_module->allocate(&nodes))) {
            if (ORTE_ERR_SYSTEM_WILL_BOOTSTRAP == rc) {
                /* this module indicates that nodes will be discovered
                 * on a bootstrap basis, so all we do here is add our
                 * own node to the list
                 */
                goto addlocal;
            }
            ORTE_ERROR_LOG(rc);
            OBJ_DESTRUCT(&nodes);
            ORTE_TERMINATE(ORTE_ERROR_DEFAULT_EXIT_CODE);
            OBJ_RELEASE(caddy);
            return;
        }
    }

    /* If something came back, save it and we are done */
    if (!opal_list_is_empty(&nodes)) {
        /* store the results in the global resource pool - this removes the
         * list items
         */
        if (ORTE_SUCCESS != (rc = orte_ras_base_node_insert(&nodes, jdata))) {
            ORTE_ERROR_LOG(rc);
            OBJ_DESTRUCT(&nodes);
            ORTE_TERMINATE(ORTE_ERROR_DEFAULT_EXIT_CODE);
            OBJ_RELEASE(caddy);
            return;
        }
        OBJ_DESTRUCT(&nodes);
        /* default to no-oversubscribe-allowed for managed systems */
        if (!(ORTE_MAPPING_SUBSCRIBE_GIVEN & ORTE_GET_MAPPING_DIRECTIVE(orte_rmaps_base.mapping))) {
            ORTE_SET_MAPPING_DIRECTIVE(orte_rmaps_base.mapping, ORTE_MAPPING_NO_OVERSUBSCRIBE);
        }
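        /* hostfile and -host based allocations, by contrast, leave the
         * default of allowing oversubscription in place unless the user
         * directs otherwise */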
        goto DISPLAY;
    } else if (orte_allocation_required) {
        /* if nothing was found, and an allocation is
         * required, then error out
         */
        OBJ_DESTRUCT(&nodes);
        orte_show_help("help-ras-base.txt", "ras-base:no-allocation", true);
        ORTE_TERMINATE(ORTE_ERROR_DEFAULT_EXIT_CODE);
        OBJ_RELEASE(caddy);
        return;
    }

    OPAL_OUTPUT_VERBOSE((5, orte_ras_base.ras_output,
                         "%s ras:base:allocate nothing found in module - proceeding to hostfile",
                         ORTE_NAME_PRINT(ORTE_PROC_MY_NAME)));

    /* nothing was found, or no active module was alive. Our next
     * option is to look for a hostfile and assign our global
     * pool from there. First, we check for a default hostfile
     * as set by an mca param.
     *
     * Note that any relative node syntax found in the hostfile will
     * generate an error in this scenario, so only non-relative syntax
     * can be present
     */
    if (NULL != orte_default_hostfile) {
        OPAL_OUTPUT_VERBOSE((5, orte_ras_base.ras_output,
                             "%s ras:base:allocate parsing default hostfile %s",
                             ORTE_NAME_PRINT(ORTE_PROC_MY_NAME),
                             orte_default_hostfile));

        /* a default hostfile was provided - parse it */
        if (ORTE_SUCCESS != (rc = orte_util_add_hostfile_nodes(&nodes,
                                                               orte_default_hostfile))) {
            ORTE_ERROR_LOG(rc);
            OBJ_DESTRUCT(&nodes);
            ORTE_TERMINATE(ORTE_ERROR_DEFAULT_EXIT_CODE);
            OBJ_RELEASE(caddy);
            return;
        }
    }
    /* if something was found in the default hostfile, we use that as our global
     * pool - set it and we are done
     */
    if (!opal_list_is_empty(&nodes)) {
        /* store the results in the global resource pool - this removes the
         * list items
         */
        if (ORTE_SUCCESS != (rc = orte_ras_base_node_insert(&nodes, jdata))) {
            ORTE_ERROR_LOG(rc);
            ORTE_TERMINATE(ORTE_ERROR_DEFAULT_EXIT_CODE);
            OBJ_RELEASE(caddy);
            return;
        }
        /* cleanup */
        OBJ_DESTRUCT(&nodes);
        goto DISPLAY;
    }

    /* Individual hostfile names, if given, are included
     * in the app_contexts for this job. We therefore need to
     * retrieve the app_contexts for the job, and then cycle
     * through them to see if anything is there. The parser will
     * add the nodes found in each hostfile to our list - i.e.,
     * the resulting list contains the UNION of all nodes specified
     * in hostfiles from across all app_contexts
     *
     * Note that any relative node syntax found in the hostfiles will
     * generate an error in this scenario, so only non-relative syntax
     * can be present
     */
    for (i=0; i < jdata->apps->size; i++) {
        if (NULL == (app = (orte_app_context_t*)opal_pointer_array_get_item(jdata->apps, i))) {
            continue;
        }
        if (NULL != app->hostfile) {
            OPAL_OUTPUT_VERBOSE((5, orte_ras_base.ras_output,
                                 "%s ras:base:allocate checking hostfile %s",
                                 ORTE_NAME_PRINT(ORTE_PROC_MY_NAME),
                                 app->hostfile));

            /* hostfile was specified - parse it and add it to the list */
            if (ORTE_SUCCESS != (rc = orte_util_add_hostfile_nodes(&nodes,
                                                                   app->hostfile))) {
                ORTE_ERROR_LOG(rc);
                OBJ_DESTRUCT(&nodes);
                /* set an error event */
                ORTE_TERMINATE(ORTE_ERROR_DEFAULT_EXIT_CODE);
                OBJ_RELEASE(caddy);
                return;
            }
        }
    }

    /* if something was found in the hostfile(s), we use that as our global
     * pool - set it and we are done
     */
    if (!opal_list_is_empty(&nodes)) {
        /* store the results in the global resource pool - this removes the
         * list items
         */
        if (ORTE_SUCCESS != (rc = orte_ras_base_node_insert(&nodes, jdata))) {
            ORTE_ERROR_LOG(rc);
            ORTE_TERMINATE(ORTE_ERROR_DEFAULT_EXIT_CODE);
            OBJ_RELEASE(caddy);
            return;
        }
        /* cleanup */
        OBJ_DESTRUCT(&nodes);
        goto DISPLAY;
    }

    OPAL_OUTPUT_VERBOSE((5, orte_ras_base.ras_output,
                         "%s ras:base:allocate nothing found in hostfiles - checking dash-host options",
                         ORTE_NAME_PRINT(ORTE_PROC_MY_NAME)));

    /* Our next option is to look for hosts provided via the -host
     * command line option. If they are present, we declare this
     * to represent not just a mapping, but to define the global
     * resource pool in the absence of any other info.
     *
     * -host lists are provided as part of the app_contexts for
     * this job. We therefore need to retrieve the app_contexts
     * for the job, and then cycle through them to see if anything
     * is there. The parser will add the -host nodes to our list - i.e.,
     * the resulting list contains the UNION of all nodes specified
     * by -host across all app_contexts
     *
     * Note that any relative node syntax found in the -host lists will
     * generate an error in this scenario, so only non-relative syntax
     * can be present
     */
    for (i=0; i < jdata->apps->size; i++) {
        if (NULL == (app = (orte_app_context_t*)opal_pointer_array_get_item(jdata->apps, i))) {
            continue;
        }
        if (NULL != app->dash_host) {
            if (ORTE_SUCCESS != (rc = orte_util_add_dash_host_nodes(&nodes,
                                                                    app->dash_host))) {
                ORTE_ERROR_LOG(rc);
                OBJ_DESTRUCT(&nodes);
                ORTE_TERMINATE(ORTE_ERROR_DEFAULT_EXIT_CODE);
                OBJ_RELEASE(caddy);
                return;
            }
        }
    }

    /* if something was found in -host, we use that as our global
     * pool - set it and we are done
     */
    if (!opal_list_is_empty(&nodes)) {
        /* store the results in the global resource pool - this removes the
         * list items
         */
        if (ORTE_SUCCESS != (rc = orte_ras_base_node_insert(&nodes, jdata))) {
            ORTE_ERROR_LOG(rc);
            ORTE_TERMINATE(ORTE_ERROR_DEFAULT_EXIT_CODE);
            OBJ_RELEASE(caddy);
            return;
        }
        /* cleanup */
        OBJ_DESTRUCT(&nodes);
        goto DISPLAY;
    }

    OPAL_OUTPUT_VERBOSE((5, orte_ras_base.ras_output,
                         "%s ras:base:allocate nothing found in dash-host - checking for rankfile",
                         ORTE_NAME_PRINT(ORTE_PROC_MY_NAME)));

    /* Our next option is to look for a rankfile - if one was provided, we
     * will use its nodes to create a default allocation pool
     */
    if (NULL != orte_rankfile) {
        /* check the rankfile for node information */
        if (ORTE_SUCCESS != (rc = orte_util_add_hostfile_nodes(&nodes,
                                                               orte_rankfile))) {
            ORTE_ERROR_LOG(rc);
            OBJ_DESTRUCT(&nodes);
            ORTE_TERMINATE(ORTE_ERROR_DEFAULT_EXIT_CODE);
            OBJ_RELEASE(caddy);
            return;
        }
    }
    /* if something was found in rankfile, we use that as our global
     * pool - set it and we are done
     */
    if (!opal_list_is_empty(&nodes)) {
        /* store the results in the global resource pool - this removes the
         * list items
         */
        if (ORTE_SUCCESS != (rc = orte_ras_base_node_insert(&nodes, jdata))) {
            ORTE_ERROR_LOG(rc);
            ORTE_TERMINATE(ORTE_ERROR_DEFAULT_EXIT_CODE);
            OBJ_RELEASE(caddy);
            return;
        }
        /* rankfile is considered equivalent to an RM allocation */
        if (!(ORTE_MAPPING_SUBSCRIBE_GIVEN & ORTE_GET_MAPPING_DIRECTIVE(orte_rmaps_base.mapping))) {
            ORTE_SET_MAPPING_DIRECTIVE(orte_rmaps_base.mapping, ORTE_MAPPING_NO_OVERSUBSCRIBE);
        }
        /* cleanup */
        OBJ_DESTRUCT(&nodes);
        goto DISPLAY;
    }

    OPAL_OUTPUT_VERBOSE((5, orte_ras_base.ras_output,
                         "%s ras:base:allocate nothing found in rankfile - inserting current node",
                         ORTE_NAME_PRINT(ORTE_PROC_MY_NAME)));

 addlocal:
    /* if nothing was found by any of the above methods, then we have no
     * earthly idea what to do - so just add the local host
     */
    node = OBJ_NEW(orte_node_t);
    if (NULL == node) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        OBJ_DESTRUCT(&nodes);
        ORTE_TERMINATE(ORTE_ERROR_DEFAULT_EXIT_CODE);
        OBJ_RELEASE(caddy);
        return;
    }
    /* use the same name we got in orte_process_info so we avoid confusion in
     * the session directories
     */
    node->name = strdup(orte_process_info.nodename);
    node->state = ORTE_NODE_STATE_UP;
    node->slots_alloc = 1;
    node->slots_inuse = 0;
    node->slots_max = 0;
    node->slots = 1;
    opal_list_append(&nodes, &node->super);

    /* store the results in the global resource pool - this removes the
     * list items
     */
    if (ORTE_SUCCESS != (rc = orte_ras_base_node_insert(&nodes, jdata))) {
        ORTE_ERROR_LOG(rc);
        OBJ_DESTRUCT(&nodes);
        ORTE_TERMINATE(ORTE_ERROR_DEFAULT_EXIT_CODE);
        OBJ_RELEASE(caddy);
        return;
    }
    OBJ_DESTRUCT(&nodes);

 DISPLAY:
    /* shall we display the results? */
    if (4 < opal_output_get_verbosity(orte_ras_base.ras_output) || orte_ras_base.display_alloc) {
        display_alloc();
    }

 next_state:
    /* are we to report this event? */
    if (orte_report_events) {
        if (ORTE_SUCCESS != (rc = orte_util_comm_report_event(ORTE_COMM_EVENT_ALLOCATE))) {
            ORTE_ERROR_LOG(rc);
            ORTE_TERMINATE(ORTE_ERROR_DEFAULT_EXIT_CODE);
            OBJ_RELEASE(caddy);
            return;
        }
    }

    /* set total slots alloc */
    jdata->total_slots_alloc = orte_ras_base.total_slots_alloc;

    /* set the job state to the next position */
    ORTE_ACTIVATE_JOB_STATE(jdata, ORTE_JOB_STATE_ALLOCATION_COMPLETE);
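    /* allocation is complete - the callback registered for that state
     * carries the launch forward from here */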

    /* cleanup */
    OBJ_RELEASE(caddy);
}
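
/* Add any nodes given via add-hostfile or add-host in the job's app_contexts
 * (typically in support of a comm_spawn request) to the existing global pool.
 * Unlike the initial allocation, errors here are returned to the caller.
 */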
int orte_ras_base_add_hosts(orte_job_t *jdata)
{
    int rc;
    opal_list_t nodes;
    int i;
    orte_app_context_t *app;

    /* construct a list to hold the results */
    OBJ_CONSTRUCT(&nodes, opal_list_t);

    /* Individual add-hostfile names, if given, are included
     * in the app_contexts for this job. We therefore need to
     * retrieve the app_contexts for the job, and then cycle
     * through them to see if anything is there. The parser will
     * add the nodes found in each add-hostfile to our list - i.e.,
     * the resulting list contains the UNION of all nodes specified
     * in add-hostfiles from across all app_contexts
     *
     * Note that any relative node syntax found in the add-hostfiles will
     * generate an error in this scenario, so only non-relative syntax
     * can be present
     */
    for (i=0; i < jdata->apps->size; i++) {
        if (NULL == (app = (orte_app_context_t*)opal_pointer_array_get_item(jdata->apps, i))) {
            continue;
        }
        if (NULL != app->add_hostfile) {
            OPAL_OUTPUT_VERBOSE((5, orte_ras_base.ras_output,
                                 "%s ras:base:add_hosts checking add-hostfile %s",
                                 ORTE_NAME_PRINT(ORTE_PROC_MY_NAME),
                                 app->add_hostfile));

            /* hostfile was specified - parse it and add it to the list */
            if (ORTE_SUCCESS != (rc = orte_util_add_hostfile_nodes(&nodes,
                                                                   app->add_hostfile))) {
                ORTE_ERROR_LOG(rc);
                OBJ_DESTRUCT(&nodes);
                return rc;
            }
        }
    }

    /* We next check for and add any add-host options. Note this is
     * a -little- different than dash-host in that (a) we add these
     * nodes to the global pool regardless of what may already be there,
     * and (b) as a result, any job and/or app_context can access them.
     *
     * Note that any relative node syntax found in the add-host lists will
     * generate an error in this scenario, so only non-relative syntax
     * can be present
     */
    for (i=0; i < jdata->apps->size; i++) {
        if (NULL == (app = (orte_app_context_t*)opal_pointer_array_get_item(jdata->apps, i))) {
            continue;
        }
        if (NULL != app->add_host) {
            if (ORTE_SUCCESS != (rc = orte_util_add_dash_host_nodes(&nodes,
                                                                    app->add_host))) {
                ORTE_ERROR_LOG(rc);
                OBJ_DESTRUCT(&nodes);
                return rc;
            }
        }
    }

    /* if something was found, we add that to our global pool */
    if (!opal_list_is_empty(&nodes)) {
        /* store the results in the global resource pool - this removes the
         * list items
         */
        if (ORTE_SUCCESS != (rc = orte_ras_base_node_insert(&nodes, jdata))) {
            ORTE_ERROR_LOG(rc);
        }
        /* cleanup */
        OBJ_DESTRUCT(&nodes);
    }

    /* shall we display the results? */
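    /* note: unlike the initial allocation, this is shown at any non-zero
     * verbosity level */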
    if (0 < opal_output_get_verbosity(orte_ras_base.ras_output) || orte_ras_base.display_alloc) {
        display_alloc();
    }

    return ORTE_SUCCESS;
}