/*
 * Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation. All rights reserved.
 * Copyright (c) 2004-2011 The University of Tennessee and The University
 *                         of Tennessee Research Foundation. All rights
 *                         reserved.
 * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart. All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * Copyright (c) 2012-2013 Los Alamos National Security, LLC.
 *                         All rights reserved.
 * Copyright (c) 2013-2014 Intel, Inc. All rights reserved
 *
 * Copyright (c) 2014      Research Organization for Information Science
 *                         and Technology (RIST). All rights reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

#include "orte_config.h"
|
|
|
|
#include "orte/types.h"
|
|
|
|
#include "orte/constants.h"
|
|
|
|
|
|
|
|
#include <stdio.h>
|
|
|
|
#include <stdlib.h>
|
|
|
|
#include <string.h>
|
|
|
|
#include <ctype.h>
|
2009-01-07 17:58:38 +03:00
|
|
|
#include <fcntl.h>
|
|
|
|
#ifdef HAVE_UNISTD_H
|
|
|
|
#include <unistd.h>
|
|
|
|
#endif
|
2009-05-16 08:15:55 +04:00
|
|
|
#ifdef HAVE_SYS_SOCKET_H
|
|
|
|
#include <sys/socket.h>
|
|
|
|
#endif
|
|
|
|
#ifdef HAVE_NETINET_IN_H
|
|
|
|
#include <netinet/in.h>
|
|
|
|
#endif
|
|
|
|
#ifdef HAVE_ARPA_INET_H
|
|
|
|
#include <arpa/inet.h>
|
|
|
|
#endif
|
|
|
|
#ifdef HAVE_NETDB_H
|
|
|
|
#include <netdb.h>
|
|
|
|
#endif
|
|
|
|
#ifdef HAVE_IFADDRS_H
|
|
|
|
#include <ifaddrs.h>
|
|
|
|
#endif
|
2008-04-30 23:49:53 +04:00
|
|
|
|
|
|
|
#include "opal/dss/dss.h"
|
2009-01-07 17:58:38 +03:00
|
|
|
#include "opal/runtime/opal.h"
|
2009-02-06 18:28:32 +03:00
|
|
|
#include "opal/class/opal_pointer_array.h"
|
2014-04-30 01:49:23 +04:00
|
|
|
#include "opal/mca/dstore/dstore.h"
|
At long last, the fabled revision to the affinity system has arrived. A more detailed explanation of how this all works will be presented here:
https://svn.open-mpi.org/trac/ompi/wiki/ProcessPlacement
The wiki page is incomplete at the moment, but I hope to complete it over the next few days. I will provide updates on the devel list. As the wiki page states, the default and most commonly used options remain unchanged (except as noted below). New, esoteric and complex options have been added, but unless you are a true masochist, you are unlikely to use many of them beyond perhaps an initial curiosity-motivated experimentation.
In a nutshell, this commit revamps the map/rank/bind procedure to take into account topology info on the compute nodes. I have, for the most part, preserved the default behaviors, with three notable exceptions:
1. I have at long last bowed my head in submission to the system admin's of managed clusters. For years, they have complained about our default of allowing users to oversubscribe nodes - i.e., to run more processes on a node than allocated slots. Accordingly, I have modified the default behavior: if you are running off of hostfile/dash-host allocated nodes, then the default is to allow oversubscription. If you are running off of RM-allocated nodes, then the default is to NOT allow oversubscription. Flags to override these behaviors are provided, so this only affects the default behavior.
2. both cpus/rank and stride have been removed. The latter was demanded by those who didn't understand the purpose behind it - and I agreed as the users who requested it are no longer using it. The former was removed temporarily pending implementation.
3. vm launch is now the sole method for starting OMPI. It was just too darned hard to maintain multiple launch procedures - maybe someday, provided someone can demonstrate a reason to do so.
As Jeff stated, it is impossible to fully test a change of this size. I have tested it on Linux and Mac, covering all the default and simple options, singletons, and comm_spawn. That said, I'm sure others will find problems, so I'll be watching MTT results until this stabilizes.
This commit was SVN r25476.
2011-11-15 07:40:11 +04:00
|
|
|
#include "opal/mca/hwloc/base/base.h"
|
2012-11-10 18:09:12 +04:00
|
|
|
#include "opal/util/net.h"
|
2009-02-14 05:26:12 +03:00
|
|
|
#include "opal/util/output.h"
|
2009-05-16 08:15:55 +04:00
|
|
|
#include "opal/util/argv.h"
|
2012-06-27 18:53:55 +04:00
|
|
|
#include "opal/datatype/opal_datatype.h"
|
2008-04-30 23:49:53 +04:00
|
|
|
|
2012-10-30 03:11:30 +04:00
|
|
|
#include "orte/mca/dfs/dfs.h"
|
2008-04-30 23:49:53 +04:00
|
|
|
#include "orte/mca/errmgr/errmgr.h"
|
2010-03-23 23:47:41 +03:00
|
|
|
#include "orte/mca/odls/base/odls_private.h"
|
2008-06-09 18:53:58 +04:00
|
|
|
#include "orte/util/show_help.h"
|
2008-04-30 23:49:53 +04:00
|
|
|
#include "orte/util/proc_info.h"
|
|
|
|
#include "orte/util/name_fns.h"
|
2009-06-24 00:25:38 +04:00
|
|
|
#include "orte/util/regex.h"
|
2008-04-30 23:49:53 +04:00
|
|
|
#include "orte/runtime/orte_globals.h"
|
2009-05-16 08:15:55 +04:00
|
|
|
#include "orte/mca/rml/base/rml_contact.h"
|
2012-04-29 04:10:01 +04:00
|
|
|
#include "orte/mca/state/state.h"
|
2008-04-30 23:49:53 +04:00
|
|
|
|
|
|
|
#include "orte/util/nidmap.h"
|
|
|
|
|
2012-07-04 04:04:16 +04:00
|
|
|
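/* verbosity level and output stream for nidmap debug messages;
 * set from the orte_nidmap_verbose MCA param in orte_util_nidmap_init() */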
static int orte_nidmap_verbose, orte_nidmap_output=-1;
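
/* unpack the nidmap data provided to us at launch: the node topology
 * (when hwloc support is built in), the encoded node map, and the
 * encoded process map. An empty or NULL buffer is simply ignored.
 */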
int orte_util_nidmap_init(opal_buffer_t *buffer)
{
    int32_t cnt;
    int rc;
    opal_byte_object_t *bo;

    orte_nidmap_verbose = 0;
    (void) mca_base_var_register ("orte", "orte", NULL, "nidmap_verbose",
                                  "Verbosity of the nidmap subsystem",
                                  MCA_BASE_VAR_TYPE_INT, NULL, 0,
                                  MCA_BASE_VAR_FLAG_INTERNAL,
                                  OPAL_INFO_LVL_9,
                                  MCA_BASE_VAR_SCOPE_ALL_EQ,
                                  &orte_nidmap_verbose);
    if (0 < orte_nidmap_verbose) {
        orte_nidmap_output = opal_output_open(NULL);
        opal_output_set_verbosity(orte_nidmap_output, orte_nidmap_verbose);
    }

    /* it is okay if the buffer is empty */
    if (NULL == buffer || 0 == buffer->bytes_used) {
        return ORTE_SUCCESS;
    }

#if OPAL_HAVE_HWLOC
    {
        hwloc_topology_t topo;

        /* extract the topology */
        cnt=1;
        if (ORTE_SUCCESS != (rc = opal_dss.unpack(buffer, &topo, &cnt, OPAL_HWLOC_TOPO))) {
            ORTE_ERROR_LOG(rc);
            return rc;
        }
        if (NULL == opal_hwloc_topology) {
            opal_hwloc_topology = topo;
        } else {
            hwloc_topology_destroy(topo);
        }
    }
#endif

    /* extract the byte object holding the daemonmap */
    cnt=1;
    if (ORTE_SUCCESS != (rc = opal_dss.unpack(buffer, &bo, &cnt, OPAL_BYTE_OBJECT))) {
        ORTE_ERROR_LOG(rc);
        return rc;
    }
    /* unpack the node map */
    if (ORTE_SUCCESS != (rc = orte_util_decode_nodemap(bo))) {
        ORTE_ERROR_LOG(rc);
        return rc;
    }
    /* the bytes in the object were free'd by the decode */
    free(bo);

    /* extract the byte object holding the process map */
    cnt=1;
    if (ORTE_SUCCESS != (rc = opal_dss.unpack(buffer, &bo, &cnt, OPAL_BYTE_OBJECT))) {
        ORTE_ERROR_LOG(rc);
        return rc;
    }
    /* unpack the process map */
    if (ORTE_SUCCESS != (rc = orte_util_decode_pidmap(bo))) {
        ORTE_ERROR_LOG(rc);
        return rc;
    }
    /* the bytes in the object were free'd by the decode */
    free(bo);

    return ORTE_SUCCESS;
}
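
/* release the topology data cached for nidmap use */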
void orte_util_nidmap_finalize(void)
{
#if OPAL_HAVE_HWLOC
    /* destroy the topology */
    if (NULL != opal_hwloc_topology) {
        hwloc_obj_t root;
        root = hwloc_get_root_obj(opal_hwloc_topology);
        if (NULL != root->userdata) {
            OBJ_RELEASE(root->userdata);
        }
        hwloc_topology_destroy(opal_hwloc_topology);
        opal_hwloc_topology = NULL;
    }
#endif
}

#if ORTE_ENABLE_STATIC_PORTS
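/* when static ports are enabled, every daemon listens on the same port we do,
 * so we can predict each daemon's contact info from the ordered node list:
 * the HNP is vpid 0, and nodes[i] is served by daemon vpid i+1. Store the
 * hostname/arch for each predicted daemon and pre-load the RML contact info.
 */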
int orte_util_build_daemon_nidmap(char **nodes)
{
    int i, num_nodes;
    int rc;
    struct hostent *h;
    opal_buffer_t buf;
    orte_process_name_t proc;
    char *uri, *addr;
    char *proc_name;
    opal_value_t kv;

    num_nodes = opal_argv_count(nodes);

    OPAL_OUTPUT_VERBOSE((2, orte_nidmap_output,
                         "%s orte:util:build:daemon:nidmap found %d nodes",
                         ORTE_NAME_PRINT(ORTE_PROC_MY_NAME), num_nodes));

    if (0 == num_nodes) {
        /* nothing to do */
        return ORTE_SUCCESS;
    }

    /* install the entry for the HNP */
    proc.jobid = ORTE_PROC_MY_NAME->jobid;
    proc.vpid = 0;
    OBJ_CONSTRUCT(&kv, opal_value_t);
    kv.key = strdup(ORTE_DB_DAEMON_VPID);
    kv.data.uint32 = proc.vpid;
    kv.type = OPAL_UINT32;
    if (OPAL_SUCCESS != (rc = opal_dstore.store(opal_dstore_internal,
                                                (opal_identifier_t*)&proc,
                                                &kv))) {
        ORTE_ERROR_LOG(rc);
        OBJ_DESTRUCT(&kv);
        return rc;
    }
    OBJ_DESTRUCT(&kv);

    OBJ_CONSTRUCT(&kv, opal_value_t);
    kv.key = strdup(ORTE_DB_HOSTNAME);
    kv.data.string = strdup("HNP");
    kv.type = OPAL_STRING;
    if (OPAL_SUCCESS != (rc = opal_dstore.store(opal_dstore_internal,
                                                (opal_identifier_t*)&proc,
                                                &kv))) {
        ORTE_ERROR_LOG(rc);
        OBJ_DESTRUCT(&kv);
        return rc;
    }
    OBJ_DESTRUCT(&kv);

    /* the daemon vpids will be assigned in order,
     * starting with vpid=1 for the first node in
     * the list
     */
    OBJ_CONSTRUCT(&buf, opal_buffer_t);
    for (i=0; i < num_nodes; i++) {
        /* define the vpid for this daemon */
        proc.vpid = i+1;
        /* store the hostname for the proc */
        OBJ_CONSTRUCT(&kv, opal_value_t);
        kv.key = strdup(ORTE_DB_HOSTNAME);
        kv.data.string = strdup(nodes[i]);
        kv.type = OPAL_STRING;
        if (OPAL_SUCCESS != (rc = opal_dstore.store(opal_dstore_internal,
                                                    (opal_identifier_t*)&proc,
                                                    &kv))) {
            ORTE_ERROR_LOG(rc);
            OBJ_DESTRUCT(&kv);
            return rc;
        }
        OBJ_DESTRUCT(&kv);

        /* the arch defaults to our arch so that non-hetero
         * case will yield correct behavior
         */
        OBJ_CONSTRUCT(&kv, opal_value_t);
        kv.key = strdup(ORTE_DB_ARCH);
        kv.data.uint32 = opal_local_arch;
        kv.type = OPAL_UINT32;
        if (OPAL_SUCCESS != (rc = opal_dstore.store(opal_dstore_internal,
                                                    (opal_identifier_t*)&proc,
                                                    &kv))) {
            ORTE_ERROR_LOG(rc);
            OBJ_DESTRUCT(&kv);
            return rc;
        }
        OBJ_DESTRUCT(&kv);

        /* lookup the address of this node */
        if (NULL == (h = gethostbyname(nodes[i]))) {
            ORTE_ERROR_LOG(ORTE_ERR_NOT_FOUND);
            return ORTE_ERR_NOT_FOUND;
        }
        addr = inet_ntoa(*(struct in_addr*)h->h_addr_list[0]);

        /* since we are using static ports, all my fellow daemons will be on my
         * port. Setup the contact info for each daemon in my hash tables. Note
         * that this will -not- open a port to those daemons, but will only
         * define the info necessary for opening such a port if/when I communicate
         * to them
         */

        /* construct the URI */
        orte_util_convert_process_name_to_string(&proc_name, &proc);
        asprintf(&uri, "%s;tcp://%s:%d", proc_name, addr, (int)orte_process_info.my_port);
        OPAL_OUTPUT_VERBOSE((2, orte_nidmap_output,
                             "%s orte:util:build:daemon:nidmap node %s daemon %d addr %s uri %s",
                             ORTE_NAME_PRINT(ORTE_PROC_MY_NAME),
                             nodes[i], i+1, addr, uri));
        opal_dss.pack(&buf, &uri, 1, OPAL_STRING);
        free(proc_name);
        free(uri);
    }

    /* load the hash tables */
    if (ORTE_SUCCESS != (rc = orte_rml_base_update_contact_info(&buf))) {
        ORTE_ERROR_LOG(rc);
    }
    OBJ_DESTRUCT(&buf);

    return rc;
}
#endif
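
/* encode the node map into a byte object. The packed format is:
 * number of daemons (ORTE_VPID), coprocessor-detected flag (OPAL_UINT8),
 * then for each (updated) daemon: its vpid, the node name, optionally the
 * list of aliases, the oversubscribed flag, and (if coprocessors were
 * detected) the hostid of its node.
 */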
int orte_util_encode_nodemap(opal_byte_object_t *boptr, bool update)
{
    orte_node_t *node;
    int32_t i;
    int rc;
    opal_buffer_t buf;
    char *ptr, *nodename;
    orte_job_t *daemons;
    orte_proc_t *dmn;
    uint8_t flag;

    OPAL_OUTPUT_VERBOSE((2, orte_nidmap_output,
                         "%s orte:util:encode_nidmap",
                         ORTE_NAME_PRINT(ORTE_PROC_MY_NAME)));

    /* if the daemon job has not been updated, then there is
     * nothing to send
     */
    daemons = orte_get_job_data_object(ORTE_PROC_MY_NAME->jobid);
    if (update && !daemons->updated) {
        boptr->bytes = NULL;
        boptr->size = 0;
        return ORTE_SUCCESS;
    }

    /* setup a buffer for tmp use */
    OBJ_CONSTRUCT(&buf, opal_buffer_t);

    /* send the number of nodes */
    if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &daemons->num_procs, 1, ORTE_VPID))) {
        ORTE_ERROR_LOG(rc);
        return rc;
    }

    /* flag if coprocessors were detected */
    if (orte_coprocessors_detected) {
        flag = 1;
    } else {
        flag = 0;
    }
    if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &flag, 1, OPAL_UINT8))) {
        ORTE_ERROR_LOG(rc);
        return rc;
    }

    /* only send info on nodes that have daemons on them, and
     * only regarding daemons that have changed - i.e., new
     * daemons since the last time we sent the info - so we
     * minimize the size of the nidmap message. The daemon
     * will maintain a global picture of the overall nidmap
     * as it receives updates, and pass that down to the procs
     */
    for (i=0; i < daemons->procs->size; i++) {
        if (NULL == (dmn = (orte_proc_t*)opal_pointer_array_get_item(daemons->procs, i))) {
            continue;
        }
        /* if we want an update nidmap and this daemon hasn't
         * been updated, then skip it
         */
        if (update && !dmn->updated) {
            continue;
        }
        /* if the daemon doesn't have a node, that's an error */
        if (NULL == (node = dmn->node)) {
            opal_output(0, "DAEMON %s HAS NO NODE", ORTE_NAME_PRINT(&dmn->name));
            ORTE_ERROR_LOG(ORTE_ERR_NOT_FOUND);
            return ORTE_ERR_NOT_FOUND;
        }
        if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &dmn->name.vpid, 1, ORTE_VPID))) {
            ORTE_ERROR_LOG(rc);
            return rc;
        }
        /* pack the name of the node */
        if (!orte_keep_fqdn_hostnames) {
            nodename = strdup(node->name);
            /* if the nodename is an IP address, do not mess with it! */
            if (!opal_net_isaddr(nodename)) {
                /* not an IP address */
                if (NULL != (ptr = strchr(nodename, '.'))) {
                    *ptr = '\0';
                }
            }
            if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &nodename, 1, OPAL_STRING))) {
                ORTE_ERROR_LOG(rc);
                return rc;
            }
            free(nodename);
        } else {
            if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &node->name, 1, OPAL_STRING))) {
                ORTE_ERROR_LOG(rc);
                return rc;
            }
        }
        /* if requested, pack any aliases */
        if (orte_retain_aliases) {
            uint8_t naliases, ni;
            naliases = opal_argv_count(node->alias);
            if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &naliases, 1, OPAL_UINT8))) {
                ORTE_ERROR_LOG(rc);
                return rc;
            }
            for (ni=0; ni < naliases; ni++) {
                if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &node->alias[ni], 1, OPAL_STRING))) {
                    ORTE_ERROR_LOG(rc);
                    return rc;
                }
            }
        }

        /* pack the oversubscribed flag */
        if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &node->oversubscribed, 1, OPAL_UINT8))) {
            ORTE_ERROR_LOG(rc);
            return rc;
        }

        /* if coprocessors were detected, send the hostid for this node */
        if (orte_coprocessors_detected) {
            if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &node->hostid, 1, ORTE_VPID))) {
                ORTE_ERROR_LOG(rc);
                return rc;
            }
        }
    }

    /* transfer the payload to the byte object */
    opal_dss.unload(&buf, (void**)&boptr->bytes, &boptr->size);
    OBJ_DESTRUCT(&buf);

    OPAL_OUTPUT_VERBOSE((2, orte_nidmap_output,
                         "%s orte:util:build:daemon:nidmap packed %d bytes",
                         ORTE_NAME_PRINT(ORTE_PROC_MY_NAME), boptr->size));

    return ORTE_SUCCESS;
}

/* decode a nodemap for an application process */
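/* note: the unpack sequence below must mirror the pack sequence
 * used in orte_util_encode_nodemap() above */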
int orte_util_decode_nodemap(opal_byte_object_t *bo)
{
    int n;
    orte_vpid_t num_daemons;
    orte_process_name_t daemon;
    opal_buffer_t buf;
    int rc=ORTE_SUCCESS;
    uint8_t oversub;
    char *nodename;
    orte_vpid_t hostid;
    opal_value_t kv;

    OPAL_OUTPUT_VERBOSE((1, orte_nidmap_output,
                         "%s decode:nidmap decoding nodemap",
                         ORTE_NAME_PRINT(ORTE_PROC_MY_NAME)));

    /* should never happen, but... */
    if (NULL == bo->bytes || 0 == bo->size) {
        return ORTE_SUCCESS;
    }

    /* xfer the byte object to a buffer for unpacking */
    OBJ_CONSTRUCT(&buf, opal_buffer_t);
    opal_dss.load(&buf, bo->bytes, bo->size);

    /* unpack the number of daemons */
    n=1;
    if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &num_daemons, &n, ORTE_VPID))) {
        ORTE_ERROR_LOG(rc);
        return rc;
    }

    /* see if coprocessors were detected */
    n=1;
    if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &oversub, &n, OPAL_UINT8))) {
        ORTE_ERROR_LOG(rc);
        return rc;
    }
    if (0 == oversub) {
        orte_coprocessors_detected = false;
    } else {
        orte_coprocessors_detected = true;
    }

    /* set the daemon jobid */
    daemon.jobid = ORTE_DAEMON_JOBID(ORTE_PROC_MY_NAME->jobid);

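    /* cycle through the per-daemon entries until the buffer is exhausted */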
n=1;
|
|
|
|
while (OPAL_SUCCESS == (rc = opal_dss.unpack(&buf, &daemon.vpid, &n, ORTE_VPID))) {
|
2013-08-22 07:40:26 +04:00
|
|
|
/* unpack and store the node's name */
|
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &nodename, &n, OPAL_STRING))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
return rc;
|
|
|
|
}
|
2013-09-27 04:37:49 +04:00
|
|
|
/* we only need the hostname for our own error messages, so mark it as internal */
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_CONSTRUCT(&kv, opal_value_t);
|
|
|
|
kv.key = strdup(ORTE_DB_HOSTNAME);
|
|
|
|
kv.type = OPAL_STRING;
|
|
|
|
kv.data.string = strdup(nodename);
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_internal,
|
|
|
|
(opal_identifier_t*)&daemon, &kv))) {
|
2013-08-22 07:40:26 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2013-08-22 07:40:26 +04:00
|
|
|
return rc;
|
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2013-08-22 07:40:26 +04:00
|
|
|
/* now store a direct reference so we can quickly lookup the daemon from a hostname */
|
|
|
|
opal_output_verbose(2, orte_nidmap_output,
|
|
|
|
"%s storing nodename %s for daemon %s",
|
|
|
|
ORTE_NAME_PRINT(ORTE_PROC_MY_NAME),
|
|
|
|
nodename, ORTE_VPID_PRINT(daemon.vpid));
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_CONSTRUCT(&kv, opal_value_t);
|
|
|
|
kv.key = strdup(nodename);
|
|
|
|
kv.type = OPAL_UINT32;
|
|
|
|
kv.data.uint32 = daemon.vpid;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_internal,
|
|
|
|
(opal_identifier_t*)ORTE_NAME_WILDCARD, &kv))) {
|
2013-08-22 07:40:26 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2013-08-22 07:40:26 +04:00
|
|
|
return rc;
|
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2013-08-22 07:40:26 +04:00
|
|
|
|
|
|
|
OPAL_OUTPUT_VERBOSE((2, orte_nidmap_output,
|
|
|
|
"%s orte:util:decode:nidmap daemon %s node %s",
|
|
|
|
ORTE_NAME_PRINT(ORTE_PROC_MY_NAME),
|
|
|
|
ORTE_VPID_PRINT(daemon.vpid), nodename));
|
|
|
|
|
|
|
|
/* if this is my daemon, then store the data for me too */
|
|
|
|
if (daemon.vpid == ORTE_PROC_MY_DAEMON->vpid) {
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_CONSTRUCT(&kv, opal_value_t);
|
|
|
|
kv.key = strdup(ORTE_DB_HOSTNAME);
|
|
|
|
kv.type = OPAL_STRING;
|
|
|
|
kv.data.string = strdup(nodename);
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_nonpeer,
|
|
|
|
(opal_identifier_t*)ORTE_PROC_MY_NAME, &kv))) {
|
2012-06-27 18:53:55 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2012-06-27 18:53:55 +04:00
|
|
|
return rc;
|
2010-12-01 15:51:39 +03:00
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2013-09-27 04:37:49 +04:00
|
|
|
/* we may need our daemon vpid to be shared with non-peers */
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_CONSTRUCT(&kv, opal_value_t);
|
|
|
|
kv.key = strdup(ORTE_DB_DAEMON_VPID);
|
|
|
|
kv.type = OPAL_UINT32;
|
|
|
|
kv.data.uint32 = daemon.vpid;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_nonpeer,
|
|
|
|
(opal_identifier_t*)ORTE_PROC_MY_NAME, &kv))) {
|
2012-06-27 18:53:55 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2012-06-27 18:53:55 +04:00
|
|
|
return rc;
|
2009-06-24 06:47:45 +04:00
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2013-08-22 07:40:26 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
/* if requested, unpack any aliases */
|
|
|
|
if (orte_retain_aliases) {
|
|
|
|
char *alias;
|
|
|
|
uint8_t naliases, ni;
|
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &naliases, &n, OPAL_UINT8))) {
|
2012-11-16 08:04:29 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
return rc;
|
|
|
|
}
|
2013-08-22 07:40:26 +04:00
|
|
|
for (ni=0; ni < naliases; ni++) {
|
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &alias, &n, OPAL_STRING))) {
|
2012-11-16 08:04:29 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
return rc;
|
|
|
|
}
|
2013-08-22 07:40:26 +04:00
|
|
|
/* store a cross-reference to the daemon for this nodename */
|
|
|
|
opal_output_verbose(2, orte_nidmap_output,
|
|
|
|
"%s storing alias %s for daemon %s",
|
|
|
|
ORTE_NAME_PRINT(ORTE_PROC_MY_NAME),
|
|
|
|
alias, ORTE_VPID_PRINT(daemon.vpid));
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_CONSTRUCT(&kv, opal_value_t);
|
|
|
|
kv.key = strdup(alias);
|
|
|
|
kv.type = OPAL_UINT32;
|
|
|
|
kv.data.uint32 = daemon.vpid;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_internal,
|
|
|
|
(opal_identifier_t*)ORTE_NAME_WILDCARD, &kv))) {
|
As per the email discussion, revise the sparse handling of hostnames so that we avoid potential infinite loops while allowing large-scale users to improve their startup time:
* add a new MCA param orte_hostname_cutoff to specify the number of nodes at which we stop including hostnames. This defaults to INT_MAX => always include hostnames. If a value is given, then we will include hostnames for any allocation smaller than the given limit.
* remove ompi_proc_get_hostname. Replace all occurrences with a direct link to ompi_proc_t's proc_hostname, protected by appropriate "if NULL"
* modify the OMPI-ORTE integration component so that any call to modex_recv automatically loads the ompi_proc_t->proc_hostname field as well as returning the requested info. Thus, any process whose modex info you retrieve will automatically receive the hostname. Note that on-demand retrieval is still enabled - i.e., if we are running under direct launch with PMI, the hostname will be fetched upon first call to modex_recv, and then the ompi_proc_t->proc_hostname field will be loaded
* removed a stale MCA param "mpi_keep_peer_hostnames" that was no longer used anywhere in the code base
* added an envar lookup in ess/pmi for the number of nodes in the allocation. Sadly, PMI itself doesn't provide that info, so we have to get it a different way. Currently, we support PBS-based systems and SLURM - for any other, rank0 will emit a warning and we assume max number of daemons so we will always retain hostnames
This commit was SVN r29052.
2013-08-20 22:59:36 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
As per the email discussion, revise the sparse handling of hostnames so that we avoid potential infinite loops while allowing large-scale users to improve their startup time:
* add a new MCA param orte_hostname_cutoff to specify the number of nodes at which we stop including hostnames. This defaults to INT_MAX => always include hostnames. If a value is given, then we will include hostnames for any allocation smaller than the given limit.
* remove ompi_proc_get_hostname. Replace all occurrences with a direct link to ompi_proc_t's proc_hostname, protected by appropriate "if NULL"
* modify the OMPI-ORTE integration component so that any call to modex_recv automatically loads the ompi_proc_t->proc_hostname field as well as returning the requested info. Thus, any process whose modex info you retrieve will automatically receive the hostname. Note that on-demand retrieval is still enabled - i.e., if we are running under direct launch with PMI, the hostname will be fetched upon first call to modex_recv, and then the ompi_proc_t->proc_hostname field will be loaded
* removed a stale MCA param "mpi_keep_peer_hostnames" that was no longer used anywhere in the code base
* added an envar lookup in ess/pmi for the number of nodes in the allocation. Sadly, PMI itself doesn't provide that info, so we have to get it a different way. Currently, we support PBS-based systems and SLURM - for any other, rank0 will emit a warning and we assume max number of daemons so we will always retain hostnames
This commit was SVN r29052.
2013-08-20 22:59:36 +04:00
|
|
|
return rc;
|
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2013-08-22 07:40:26 +04:00
|
|
|
free(alias);
|
2012-11-16 08:04:29 +04:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2012-06-27 18:53:55 +04:00
|
|
|
/* unpack and discard the oversubscribed flag - procs don't need it */
|
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &oversub, &n, OPAL_UINT8))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
return rc;
|
2011-06-24 00:38:02 +04:00
|
|
|
}
|
****************************************************************
This change contains a non-mandatory modification
of the MPI-RTE interface. Anyone wishing to support
coprocessors such as the Xeon Phi may wish to add
the required definition and underlying support
****************************************************************
Add locality support for coprocessors such as the Intel Xeon Phi.
Detecting that we are on a coprocessor inside of a host node isn't straightforward. There are no good "hooks" provided for programmatically detecting that "we are on a coprocessor running its own OS", and the ORTE daemon just thinks it is on another node. However, in order to properly use the Phi's public interface for MPI transport, it is necessary that the daemon detect that it is colocated with procs on the host.
So we have to split the locality to separately record "on the same host" vs "on the same board". We already have the board-level locality flag, but not quite enough flexibility to handle this use-case. Thus, do the following:
1. add OPAL_PROC_ON_HOST flag to indicate we share a host, but not necessarily the same board
2. modify OPAL_PROC_ON_NODE to indicate we share both a host AND the same board. Note that we have to modify the OPAL_PROC_ON_LOCAL_NODE macro to explicitly check both conditions
3. add support in opal/mca/hwloc/base/hwloc_base_util.c for the host to check for coprocessors, and for daemons to check to see if they are on a coprocessor. The former is done via hwloc, but support for the latter is not yet provided by hwloc. So the code for detecting we are on a coprocessor currently is Xeon Phi specific - hopefully, we will find more generic methods in the future.
4. modify the orted and the hnp startup so they check for coprocessors and to see if they are on a coprocessor, and have the orteds pass that info back in their callback message. Automatically detect that coprocessors have been found and identify which coprocessors are on which hosts. Note that this algo isn't scalable at the moment - this will hopefully be improved over time.
5. modify the ompi proc locality detection function to look for coprocessor host info IF the OMPI_RTE_HOST_ID database key has been defined. RTEs that choose not to provide this support do not have to do anything - the associated code will simply be ignored.
6. include some cleanup of the hwloc open/close code so it conforms to how we did things in other frameworks (e.g., having a single "frame" file instead of open/close). Also, fix the locality flags - e.g., being on the same node means you must also be on the same cluster/cu, so ensure those flags are also set.
cmr:v1.7.4:reviewer=hjelmn
This commit was SVN r29435.
2013-10-14 20:52:58 +04:00
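For reference, an illustrative sketch of the locality relationship this commit establishes: ON_NODE means host AND board, and the LOCAL_NODE test checks both bits, mirroring OPAL_PROC_ON_HOST, OPAL_PROC_ON_NODE, and OPAL_PROC_ON_LOCAL_NODE. The EX_ names and numeric values below are placeholders; only the relationships follow the description above:

/* placeholder bit values - not the real OPAL definitions */
#define EX_PROC_ON_HOST   0x0100
#define EX_PROC_ON_BOARD  0x0200
#define EX_PROC_ON_NODE   (EX_PROC_ON_HOST | EX_PROC_ON_BOARD)
/* "on the same node" requires both the host bit and the board bit to be set */
#define EX_ON_LOCAL_NODE(loc) (((loc) & EX_PROC_ON_NODE) == EX_PROC_ON_NODE)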
|
|
|
|
|
|
|
/* if coprocessors were detected, unpack the hostid for the node - this
|
|
|
|
* value is associated with this daemon, not with any application process
|
|
|
|
*/
|
|
|
|
if (orte_coprocessors_detected) {
|
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &hostid, &n, ORTE_VPID))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
return rc;
|
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_CONSTRUCT(&kv, opal_value_t);
|
|
|
|
kv.key = strdup(ORTE_DB_HOSTID);
|
|
|
|
kv.type = OPAL_UINT32;
|
|
|
|
kv.data.uint32 = hostid;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_nonpeer,
|
|
|
|
(opal_identifier_t*)&daemon, &kv))) {
|
2013-10-14 20:52:58 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2013-10-14 20:52:58 +04:00
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
/* if this is my daemon, then store it as my hostid as well */
|
|
|
|
if (daemon.vpid == ORTE_PROC_MY_DAEMON->vpid) {
|
2014-04-30 01:49:23 +04:00
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_nonpeer,
|
|
|
|
(opal_identifier_t*)ORTE_PROC_MY_NAME, &kv))) {
|
2013-10-14 20:52:58 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2013-10-14 20:52:58 +04:00
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
/* and record it */
|
|
|
|
orte_process_info.my_hostid = hostid;
|
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2013-10-14 20:52:58 +04:00
|
|
|
}
|
2008-05-28 22:38:47 +04:00
|
|
|
}
|
2012-08-29 01:20:17 +04:00
|
|
|
if (ORTE_ERR_UNPACK_READ_PAST_END_OF_BUFFER != rc) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
} else {
|
|
|
|
rc = ORTE_SUCCESS;
|
|
|
|
}
|
|
|
|
|
2010-05-04 06:40:09 +04:00
|
|
|
/* update num_daemons */
|
|
|
|
orte_process_info.num_daemons = num_daemons;
|
2008-05-28 22:38:47 +04:00
|
|
|
|
2008-04-30 23:49:53 +04:00
|
|
|
OBJ_DESTRUCT(&buf);
|
2012-08-29 01:20:17 +04:00
|
|
|
return rc;
|
2008-04-30 23:49:53 +04:00
|
|
|
}
|
|
|
|
|
2012-06-27 18:53:55 +04:00
|
|
|
/* decode a nodemap for a daemon */
|
2012-04-29 04:10:01 +04:00
|
|
|
int orte_util_decode_daemon_nodemap(opal_byte_object_t *bo)
|
|
|
|
{
|
|
|
|
int n;
|
2012-06-27 18:53:55 +04:00
|
|
|
orte_vpid_t vpid;
|
2012-04-29 04:10:01 +04:00
|
|
|
orte_node_t *node;
|
|
|
|
opal_buffer_t buf;
|
2012-08-29 01:20:17 +04:00
|
|
|
int rc=ORTE_SUCCESS;
|
2013-10-14 20:52:58 +04:00
|
|
|
uint8_t oversub;
|
2012-04-29 04:10:01 +04:00
|
|
|
char *name;
|
|
|
|
orte_job_t *daemons;
|
|
|
|
orte_proc_t *dptr;
|
2013-08-20 22:59:36 +04:00
|
|
|
orte_vpid_t num_daemons;
|
2012-04-29 04:10:01 +04:00
|
|
|
|
2012-07-04 04:04:16 +04:00
|
|
|
OPAL_OUTPUT_VERBOSE((1, orte_nidmap_output,
|
2012-04-29 04:10:01 +04:00
|
|
|
"%s decode:nidmap decoding daemon nodemap",
|
|
|
|
ORTE_NAME_PRINT(ORTE_PROC_MY_NAME)));
|
|
|
|
|
2012-08-30 16:17:29 +04:00
|
|
|
if (NULL == bo->bytes || 0 == bo->size) {
|
|
|
|
/* nothing to unpack */
|
|
|
|
return ORTE_SUCCESS;
|
|
|
|
}
|
|
|
|
|
2012-04-29 04:10:01 +04:00
|
|
|
/* xfer the byte object to a buffer for unpacking */
|
|
|
|
OBJ_CONSTRUCT(&buf, opal_buffer_t);
|
|
|
|
opal_dss.load(&buf, bo->bytes, bo->size);
|
2014-05-15 12:28:53 +04:00
|
|
|
|
2013-08-20 22:59:36 +04:00
|
|
|
/* unpack the number of procs */
|
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &num_daemons, &n, ORTE_VPID))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
|
2013-10-14 20:52:58 +04:00
|
|
|
/* see if coprocessors were detected - the flag arrives as a uint8 and reuses the oversub variable */
|
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &oversub, &n, OPAL_UINT8))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
if (0 == oversub) {
|
|
|
|
orte_coprocessors_detected = false;
|
|
|
|
} else {
|
|
|
|
orte_coprocessors_detected = true;
|
|
|
|
}
|
|
|
|
|
2013-08-20 22:59:36 +04:00
|
|
|
/* transfer the data to the nodes */
|
2012-06-27 18:53:55 +04:00
|
|
|
daemons = orte_get_job_data_object(ORTE_PROC_MY_NAME->jobid);
|
2013-08-20 22:59:36 +04:00
|
|
|
daemons->num_procs = num_daemons;
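/* Record layout consumed by the loop below, inferred from its unpack calls:
 *   daemon vpid          (ORTE_VPID)
 *   node name            (OPAL_STRING)
 *   alias count + aliases (OPAL_UINT8 + OPAL_STRINGs) - only if orte_retain_aliases
 *   oversubscribed flag  (OPAL_UINT8)
 *   hostid               (ORTE_VPID) - only if coprocessors were detected
 * The loop ends when the buffer is exhausted (UNPACK_READ_PAST_END_OF_BUFFER). */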
|
2012-08-29 01:20:17 +04:00
|
|
|
n=1;
|
|
|
|
while (OPAL_SUCCESS == (rc = opal_dss.unpack(&buf, &vpid, &n, ORTE_VPID))) {
|
2013-08-22 07:40:26 +04:00
|
|
|
/* unpack and store the node's name */
|
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &name, &n, OPAL_STRING))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
/* do we already have this node? */
|
|
|
|
if (NULL == (node = (orte_node_t*)opal_pointer_array_get_item(orte_node_pool, vpid))) {
|
|
|
|
node = OBJ_NEW(orte_node_t);
|
|
|
|
node->name = name;
|
|
|
|
opal_pointer_array_set_item(orte_node_pool, vpid, node);
|
|
|
|
} else {
|
|
|
|
free(name);
|
|
|
|
}
|
|
|
|
/* if requested, unpack any aliases */
|
|
|
|
if (orte_retain_aliases) {
|
|
|
|
char *alias;
|
|
|
|
uint8_t naliases, ni;
|
2012-11-16 08:04:29 +04:00
|
|
|
n=1;
|
2013-08-22 07:40:26 +04:00
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &naliases, &n, OPAL_UINT8))) {
|
2012-11-16 08:04:29 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
return rc;
|
|
|
|
}
|
2013-08-22 07:40:26 +04:00
|
|
|
for (ni=0; ni < naliases; ni++) {
|
2012-11-16 08:04:29 +04:00
|
|
|
n=1;
|
2013-08-22 07:40:26 +04:00
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &alias, &n, OPAL_STRING))) {
|
2012-11-16 08:04:29 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
return rc;
|
|
|
|
}
|
2013-08-22 07:40:26 +04:00
|
|
|
opal_argv_append_nosize(&node->alias, alias);
|
|
|
|
free(alias);
|
2012-11-16 08:04:29 +04:00
|
|
|
}
|
|
|
|
}
|
2013-08-22 20:05:58 +04:00
|
|
|
/* unpack the oversubscribed flag */
|
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &oversub, &n, OPAL_UINT8))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
if (NULL == (dptr = (orte_proc_t*)opal_pointer_array_get_item(daemons->procs, vpid))) {
|
|
|
|
dptr = OBJ_NEW(orte_proc_t);
|
|
|
|
dptr->name.jobid = ORTE_PROC_MY_NAME->jobid;
|
|
|
|
dptr->name.vpid = vpid;
|
|
|
|
opal_pointer_array_set_item(daemons->procs, vpid, dptr);
|
|
|
|
}
|
|
|
|
if (NULL != node->daemon) {
|
|
|
|
OBJ_RELEASE(node->daemon);
|
|
|
|
}
|
|
|
|
OBJ_RETAIN(dptr);
|
|
|
|
node->daemon = dptr;
|
|
|
|
if (NULL != dptr->node) {
|
|
|
|
OBJ_RELEASE(dptr->node);
|
|
|
|
}
|
|
|
|
OBJ_RETAIN(node);
|
|
|
|
dptr->node = node;
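/* The retain/release pairs above swap the node<->daemon cross links without
 * leaking references: any previously stored pointer is released, and the new
 * object is retained before being stored on the other side. */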
|
|
|
|
if (0 == oversub) {
|
|
|
|
node->oversubscribed = false;
|
|
|
|
} else {
|
|
|
|
node->oversubscribed = true;
|
|
|
|
}
|
2013-10-14 20:52:58 +04:00
|
|
|
|
|
|
|
/* if coprocessors were detected, unpack the hostid */
|
|
|
|
if (orte_coprocessors_detected) {
|
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &node->hostid, &n, ORTE_VPID))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2012-04-29 04:10:01 +04:00
|
|
|
}
|
2012-08-29 01:20:17 +04:00
|
|
|
if (ORTE_ERR_UNPACK_READ_PAST_END_OF_BUFFER != rc) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
OBJ_DESTRUCT(&buf);
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
rc = ORTE_SUCCESS;
|
|
|
|
|
2012-05-27 20:48:19 +04:00
|
|
|
orte_process_info.num_procs = daemons->num_procs;
|
2012-04-29 04:10:01 +04:00
|
|
|
|
|
|
|
if (orte_process_info.max_procs < orte_process_info.num_procs) {
|
|
|
|
orte_process_info.max_procs = orte_process_info.num_procs;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* update num_daemons */
|
2012-05-27 20:48:19 +04:00
|
|
|
orte_process_info.num_daemons = daemons->num_procs;
|
2012-04-29 04:10:01 +04:00
|
|
|
|
2012-08-29 01:20:17 +04:00
|
|
|
/* update the global nidmap object for sending to
|
|
|
|
* application procs
|
|
|
|
*/
|
|
|
|
if (NULL != orte_nidmap.bytes) {
|
|
|
|
free(orte_nidmap.bytes);
|
2012-08-31 05:07:36 +04:00
|
|
|
orte_nidmap.bytes = NULL;
|
2012-08-29 01:20:17 +04:00
|
|
|
}
|
|
|
|
if (ORTE_SUCCESS != (rc = orte_util_encode_nodemap(&orte_nidmap, false))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
}
|
|
|
|
|
2012-07-04 04:04:16 +04:00
|
|
|
if (0 < opal_output_get_verbosity(orte_nidmap_output)) {
|
2012-08-29 01:20:17 +04:00
|
|
|
int i;
|
|
|
|
for (i=0; i < orte_node_pool->size; i++) {
|
2012-04-29 04:10:01 +04:00
|
|
|
if (NULL == (node = (orte_node_t*)opal_pointer_array_get_item(orte_node_pool, i))) {
|
|
|
|
continue;
|
|
|
|
}
|
2012-07-04 04:04:16 +04:00
|
|
|
opal_output(0, "%s node[%d].name %s daemon %s",
|
2012-04-29 04:10:01 +04:00
|
|
|
ORTE_NAME_PRINT(ORTE_PROC_MY_NAME), i,
|
|
|
|
(NULL == node->name) ? "NULL" : node->name,
|
2012-05-27 20:48:19 +04:00
|
|
|
(NULL == node->daemon) ? "NONE" : ORTE_VPID_PRINT(node->daemon->name.vpid));
|
2012-04-29 04:10:01 +04:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
OBJ_DESTRUCT(&buf);
|
2012-08-29 01:20:17 +04:00
|
|
|
return rc;
|
2012-04-29 04:10:01 +04:00
|
|
|
}
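A brief caller-side sketch for the decoder above. The function name handle_nodemap_update and the idea that the byte object arrives in an incoming buffer are assumptions; only orte_util_decode_daemon_nodemap itself is taken from this file:

/* sketch: unpack a nodemap byte object from an incoming buffer and decode it */
static int handle_nodemap_update(opal_buffer_t *buffer)
{
    opal_byte_object_t *bo = NULL;
    int32_t cnt = 1;
    int rc;

    if (ORTE_SUCCESS != (rc = opal_dss.unpack(buffer, &bo, &cnt, OPAL_BYTE_OBJECT))) {
        ORTE_ERROR_LOG(rc);
        return rc;
    }
    rc = orte_util_decode_daemon_nodemap(bo);
    if (ORTE_SUCCESS != rc) {
        ORTE_ERROR_LOG(rc);
    }
    /* the unpacked byte object and its payload belong to the caller */
    free(bo->bytes);
    free(bo);
    return rc;
}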
|
|
|
|
|
2012-08-29 01:20:17 +04:00
|
|
|
int orte_util_encode_pidmap(opal_byte_object_t *boptr, bool update)
|
2008-04-30 23:49:53 +04:00
|
|
|
{
|
2009-05-11 07:24:49 +04:00
|
|
|
orte_proc_t *proc;
|
2008-04-30 23:49:53 +04:00
|
|
|
opal_buffer_t buf;
|
2012-08-29 01:20:17 +04:00
|
|
|
int i, j, rc = ORTE_SUCCESS;
|
|
|
|
orte_job_t *jdata;
|
2012-08-29 07:11:37 +04:00
|
|
|
bool include_all;
|
2012-10-30 03:11:30 +04:00
|
|
|
uint8_t flag;
|
2008-04-30 23:49:53 +04:00
|
|
|
|
|
|
|
/* setup the working buffer */
|
|
|
|
OBJ_CONSTRUCT(&buf, opal_buffer_t);
|
|
|
|
|
2012-08-29 07:11:37 +04:00
|
|
|
/* check the daemon job to see if it has changed - perhaps
|
|
|
|
* new daemons were added as the result of a comm_spawn
|
|
|
|
*/
|
|
|
|
jdata = orte_get_job_data_object(ORTE_PROC_MY_NAME->jobid);
|
|
|
|
/* if it did change, then the pidmap will be going
|
|
|
|
* to new daemons - so we need to include everything.
|
|
|
|
* also include everything if we were asked to do so
|
|
|
|
*/
|
|
|
|
if (jdata->updated || !update) {
|
|
|
|
include_all = true;
|
|
|
|
} else {
|
|
|
|
include_all = false;
|
|
|
|
}
|
|
|
|
|
2009-03-03 19:39:13 +03:00
|
|
|
for (j=1; j < orte_job_data->size; j++) {
|
|
|
|
/* the job array is no longer left-justified and may
|
|
|
|
* have holes in it as we recover resources at job
|
|
|
|
* completion
|
|
|
|
*/
|
2009-04-13 23:06:54 +04:00
|
|
|
if (NULL == (jdata = (orte_job_t*)opal_pointer_array_get_item(orte_job_data, j))) {
|
2009-03-03 19:39:13 +03:00
|
|
|
continue;
|
2010-03-26 01:54:57 +03:00
|
|
|
}
|
|
|
|
/* if this job doesn't have a map, then it is a tool
|
|
|
|
* and doesn't need to be included
|
|
|
|
*/
|
|
|
|
if (NULL == jdata->map) {
|
|
|
|
continue;
|
2010-05-14 22:44:49 +04:00
|
|
|
}
|
2012-08-29 01:20:17 +04:00
|
|
|
/* if this job has already terminated, then ignore it */
|
|
|
|
if (ORTE_JOB_STATE_TERMINATED < jdata->state) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
/* if we want an update version and there is nothing to update, ignore it */
|
2012-08-29 07:11:37 +04:00
|
|
|
if (!include_all && !jdata->updated) {
|
2012-08-29 01:20:17 +04:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
/* flag that we included it so we don't do so again */
|
|
|
|
jdata->updated = false;
|
2008-11-18 18:35:50 +03:00
|
|
|
/* pack the jobid */
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &jdata->jobid, 1, ORTE_JOBID))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
2009-07-15 23:36:53 +04:00
|
|
|
goto cleanup_and_return;
|
2008-11-18 18:35:50 +03:00
|
|
|
}
|
|
|
|
/* pack the number of procs */
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &jdata->num_procs, 1, ORTE_VPID))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
2009-07-15 23:36:53 +04:00
|
|
|
goto cleanup_and_return;
|
2008-11-18 18:35:50 +03:00
|
|
|
}
|
2013-11-14 21:01:43 +04:00
|
|
|
/* pack the offset */
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &jdata->offset, 1, ORTE_VPID))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup_and_return;
|
|
|
|
}
|
2012-08-29 01:20:17 +04:00
|
|
|
/* cycle thru the job's procs, including only those that have
|
|
|
|
* been updated so we minimize the amount of info being sent
|
|
|
|
*/
|
|
|
|
for (i=0; i < jdata->procs->size; i++) {
|
2009-05-12 13:46:52 +04:00
|
|
|
if (NULL == (proc = (orte_proc_t *) opal_pointer_array_get_item(jdata->procs, i))) {
|
2009-05-11 07:24:49 +04:00
|
|
|
continue;
|
|
|
|
}
|
2012-08-29 01:20:17 +04:00
|
|
|
if (!proc->updated) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &proc->name.vpid, 1, ORTE_VPID))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup_and_return;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &(proc->node->daemon->name.vpid), 1, ORTE_VPID))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup_and_return;
|
|
|
|
}
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &proc->local_rank, 1, ORTE_LOCAL_RANK))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup_and_return;
|
|
|
|
}
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &proc->node_rank, 1, ORTE_NODE_RANK))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup_and_return;
|
2009-07-15 23:36:53 +04:00
|
|
|
}
|
At long last, the fabled revision to the affinity system has arrived. A more detailed explanation of how this all works will be presented here:
https://svn.open-mpi.org/trac/ompi/wiki/ProcessPlacement
The wiki page is incomplete at the moment, but I hope to complete it over the next few days. I will provide updates on the devel list. As the wiki page states, the default and most commonly used options remain unchanged (except as noted below). New, esoteric and complex options have been added, but unless you are a true masochist, you are unlikely to use many of them beyond perhaps an initial curiosity-motivated experimentation.
In a nutshell, this commit revamps the map/rank/bind procedure to take into account topology info on the compute nodes. I have, for the most part, preserved the default behaviors, with three notable exceptions:
1. I have at long last bowed my head in submission to the system admins of managed clusters. For years, they have complained about our default of allowing users to oversubscribe nodes - i.e., to run more processes on a node than allocated slots. Accordingly, I have modified the default behavior: if you are running off of hostfile/dash-host allocated nodes, then the default is to allow oversubscription. If you are running off of RM-allocated nodes, then the default is to NOT allow oversubscription. Flags to override these behaviors are provided, so this only affects the default behavior.
2. both cpus/rank and stride have been removed. The latter was demanded by those who didn't understand the purpose behind it - and I agreed as the users who requested it are no longer using it. The former was removed temporarily pending implementation.
3. vm launch is now the sole method for starting OMPI. It was just too darned hard to maintain multiple launch procedures - maybe someday, provided someone can demonstrate a reason to do so.
As Jeff stated, it is impossible to fully test a change of this size. I have tested it on Linux and Mac, covering all the default and simple options, singletons, and comm_spawn. That said, I'm sure others will find problems, so I'll be watching MTT results until this stabilizes.
This commit was SVN r25476.
2011-11-15 07:40:11 +04:00
|
|
|
#if OPAL_HAVE_HWLOC
|
2012-08-30 00:35:52 +04:00
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &proc->cpu_bitmap, 1, OPAL_STRING))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup_and_return;
|
|
|
|
}
|
2011-11-15 07:40:11 +04:00
|
|
|
#endif
|
2012-08-29 01:20:17 +04:00
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &proc->state, 1, ORTE_PROC_STATE))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup_and_return;
|
|
|
|
}
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &proc->app_idx, 1, ORTE_APP_IDX))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup_and_return;
|
2012-10-30 03:11:30 +04:00
|
|
|
}
|
2013-11-14 21:01:43 +04:00
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &proc->do_not_barrier, 1, OPAL_BOOL))) {
|
2012-10-30 03:11:30 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup_and_return;
|
2012-08-29 01:20:17 +04:00
|
|
|
}
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &proc->restarts, 1, OPAL_INT32))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup_and_return;
|
|
|
|
}
|
2012-04-29 04:10:01 +04:00
|
|
|
}
|
2012-08-29 01:20:17 +04:00
|
|
|
/* pack an invalid vpid to flag the end of this job data */
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &ORTE_NAME_INVALID->vpid, 1, ORTE_VPID))) {
|
2012-04-29 04:10:01 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup_and_return;
|
|
|
|
}
|
2012-10-30 03:11:30 +04:00
|
|
|
/* if there is a file map, then include it */
|
2012-11-10 18:09:12 +04:00
|
|
|
if (NULL != jdata->file_maps) {
|
2012-10-30 03:11:30 +04:00
|
|
|
flag = 1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &flag, 1, OPAL_UINT8))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup_and_return;
|
|
|
|
}
|
2012-11-10 18:09:12 +04:00
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &jdata->file_maps, 1, OPAL_BUFFER))) {
|
2012-10-30 03:11:30 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup_and_return;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
flag = 0;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.pack(&buf, &flag, 1, OPAL_UINT8))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup_and_return;
|
|
|
|
}
|
|
|
|
}
|
2008-09-25 17:39:08 +04:00
|
|
|
}
|
2008-04-30 23:49:53 +04:00
|
|
|
|
|
|
|
/* transfer the payload to the byte object */
|
|
|
|
opal_dss.unload(&buf, (void**)&boptr->bytes, &boptr->size);
|
2009-07-15 23:36:53 +04:00
|
|
|
|
|
|
|
cleanup_and_return:
|
2008-04-30 23:49:53 +04:00
|
|
|
OBJ_DESTRUCT(&buf);
|
|
|
|
|
2009-07-15 23:36:53 +04:00
|
|
|
return rc;
|
2008-04-30 23:49:53 +04:00
|
|
|
}
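For orientation, a minimal sketch of driving the encoder above; the only assumption is that the caller ships the resulting bytes somewhere (e.g., packs them into a message) before freeing them:

opal_byte_object_t bo;
int rc;
bo.bytes = NULL;
bo.size = 0;
/* 'false' asks for the full pidmap rather than an update-only version */
if (ORTE_SUCCESS != (rc = orte_util_encode_pidmap(&bo, false))) {
    ORTE_ERROR_LOG(rc);
}
/* ... transmit bo.bytes / bo.size to the application procs ... */
if (NULL != bo.bytes) {
    free(bo.bytes);
}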
|
|
|
|
|
2012-05-27 20:21:38 +04:00
|
|
|
/* only APPS call this function - daemons have their own */
|
2009-01-07 17:58:38 +03:00
|
|
|
int orte_util_decode_pidmap(opal_byte_object_t *bo)
|
2008-04-30 23:49:53 +04:00
|
|
|
{
|
2014-04-30 01:49:23 +04:00
|
|
|
orte_vpid_t num_procs, offset;
|
2012-08-29 01:20:17 +04:00
|
|
|
orte_local_rank_t local_rank;
|
|
|
|
orte_node_rank_t node_rank;
|
2011-11-15 07:40:11 +04:00
|
|
|
#if OPAL_HAVE_HWLOC
|
2012-08-30 00:35:52 +04:00
|
|
|
char *cpu_bitmap;
|
2011-11-15 07:40:11 +04:00
|
|
|
#endif
|
2008-04-30 23:49:53 +04:00
|
|
|
orte_std_cntr_t n;
|
|
|
|
opal_buffer_t buf;
|
2008-09-25 17:39:08 +04:00
|
|
|
int rc;
|
2012-08-29 01:20:17 +04:00
|
|
|
orte_proc_state_t state;
|
|
|
|
orte_app_idx_t app_idx;
|
|
|
|
int32_t restarts;
|
2012-06-27 18:53:55 +04:00
|
|
|
orte_process_name_t proc, dmn;
|
2012-10-30 03:11:30 +04:00
|
|
|
uint8_t flag;
|
2012-11-10 18:09:12 +04:00
|
|
|
opal_buffer_t *bptr;
|
2012-10-30 03:11:30 +04:00
|
|
|
bool barrier;
|
2014-04-30 01:49:23 +04:00
|
|
|
opal_list_t myvals;
|
|
|
|
opal_value_t kv, *kvp;
|
2012-06-27 18:53:55 +04:00
|
|
|
|
2008-04-30 23:49:53 +04:00
|
|
|
/* xfer the byte object to a buffer for unpacking */
|
|
|
|
OBJ_CONSTRUCT(&buf, opal_buffer_t);
|
2008-09-25 17:39:08 +04:00
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.load(&buf, bo->bytes, bo->size))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
2008-11-24 20:57:55 +03:00
|
|
|
goto cleanup;
|
2008-09-25 17:39:08 +04:00
|
|
|
}
|
2014-05-15 12:28:53 +04:00
|
|
|
|
2012-08-29 01:20:17 +04:00
|
|
|
/* set the daemon jobid */
|
|
|
|
dmn.jobid = ORTE_DAEMON_JOBID(ORTE_PROC_MY_NAME->jobid);
|
|
|
|
|
2008-11-18 18:35:50 +03:00
|
|
|
n = 1;
|
2008-11-24 20:57:55 +03:00
|
|
|
/* cycle through the buffer */
|
2013-10-02 05:46:09 +04:00
|
|
|
orte_process_info.num_local_peers = 0;
|
2012-06-27 18:53:55 +04:00
|
|
|
while (ORTE_SUCCESS == (rc = opal_dss.unpack(&buf, &proc.jobid, &n, ORTE_JOBID))) {
|
2012-07-04 04:04:16 +04:00
|
|
|
OPAL_OUTPUT_VERBOSE((2, orte_nidmap_output,
|
|
|
|
"%s orte:util:decode:pidmap working job %s",
|
|
|
|
ORTE_NAME_PRINT(ORTE_PROC_MY_NAME),
|
|
|
|
ORTE_JOBID_PRINT(proc.jobid)));
|
2012-06-27 18:53:55 +04:00
|
|
|
|
|
|
|
/* unpack and store the number of procs */
|
2008-11-18 18:35:50 +03:00
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &num_procs, &n, ORTE_VPID))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
2008-11-24 20:57:55 +03:00
|
|
|
goto cleanup;
|
2008-11-18 18:35:50 +03:00
|
|
|
}
|
2012-06-27 18:53:55 +04:00
|
|
|
proc.vpid = ORTE_VPID_INVALID;
|
2013-09-27 04:37:49 +04:00
|
|
|
/* only useful to ourselves */
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_CONSTRUCT(&kv, opal_value_t);
|
|
|
|
kv.key = strdup(ORTE_DB_NPROCS);
|
|
|
|
kv.type = OPAL_UINT32;
|
|
|
|
kv.data.uint32 = num_procs;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_internal,
|
|
|
|
(opal_identifier_t*)&proc, &kv))) {
|
2012-06-27 18:53:55 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2012-06-27 18:53:55 +04:00
|
|
|
goto cleanup;
|
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2013-11-14 21:01:43 +04:00
|
|
|
/* unpack and store the offset */
|
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &offset, &n, ORTE_VPID))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
/* only of possible use to ourselves */
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_CONSTRUCT(&kv, opal_value_t);
|
|
|
|
kv.key = strdup(ORTE_DB_NPROC_OFFSET);  /* distinct key for the offset (key name assumed) - reusing ORTE_DB_NPROCS here would overwrite the nprocs entry */
|
|
|
|
kv.type = OPAL_UINT32;
|
|
|
|
kv.data.uint32 = offset;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_internal,
|
|
|
|
(opal_identifier_t*)&proc, &kv))) {
|
2013-11-14 21:01:43 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2013-11-14 21:01:43 +04:00
|
|
|
goto cleanup;
|
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2012-08-29 01:20:17 +04:00
|
|
|
/* cycle thru the data until we hit an INVALID vpid indicating
|
|
|
|
* all data for this job has been read
|
|
|
|
*/
|
|
|
|
n=1;
|
|
|
|
while (OPAL_SUCCESS == (rc = opal_dss.unpack(&buf, &proc.vpid, &n, ORTE_VPID))) {
|
|
|
|
if (ORTE_VPID_INVALID == proc.vpid) {
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &dmn.vpid, &n, ORTE_VPID))) {
|
2012-06-27 18:53:55 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
2012-08-29 01:20:17 +04:00
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &local_rank, &n, ORTE_LOCAL_RANK))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &node_rank, &n, ORTE_NODE_RANK))) {
|
2012-06-27 18:53:55 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
2011-11-15 07:40:11 +04:00
|
|
|
#if OPAL_HAVE_HWLOC
|
2012-08-30 00:35:52 +04:00
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &cpu_bitmap, &n, OPAL_STRING))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
2011-11-15 07:40:11 +04:00
|
|
|
#endif
|
2012-06-27 18:53:55 +04:00
|
|
|
if (proc.jobid == ORTE_PROC_MY_NAME->jobid &&
|
2012-08-29 01:20:17 +04:00
|
|
|
proc.vpid == ORTE_PROC_MY_NAME->vpid) {
|
|
|
|
/* set mine */
|
|
|
|
orte_process_info.my_local_rank = local_rank;
|
|
|
|
orte_process_info.my_node_rank = node_rank;
|
2014-02-18 04:32:58 +04:00
|
|
|
/* if we are the local leader (i.e., local_rank=0), then record it */
|
|
|
|
if (0 == local_rank) {
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_CONSTRUCT(&kv, opal_value_t);
|
|
|
|
kv.key = strdup(OPAL_DSTORE_LOCALLDR);
|
|
|
|
kv.type = OPAL_UINT64;
|
|
|
|
kv.data.uint64 = *(opal_identifier_t*)&proc;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_internal,
|
|
|
|
(opal_identifier_t*)ORTE_PROC_MY_NAME, &kv))) {
|
2014-02-18 04:32:58 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2014-02-18 04:32:58 +04:00
|
|
|
goto cleanup;
|
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2014-02-18 04:32:58 +04:00
|
|
|
}
|
2013-03-26 23:14:23 +04:00
|
|
|
#if OPAL_HAVE_HWLOC
|
|
|
|
if (NULL != cpu_bitmap) {
|
|
|
|
orte_process_info.cpuset = strdup(cpu_bitmap);
|
|
|
|
}
|
|
|
|
#endif
|
2013-10-02 05:46:09 +04:00
|
|
|
} else if (proc.jobid == ORTE_PROC_MY_NAME->jobid &&
|
|
|
|
dmn.vpid == ORTE_PROC_MY_DAEMON->vpid) {
|
|
|
|
/* if we share a daemon, then add to my local peers */
|
|
|
|
orte_process_info.num_local_peers++;
|
2014-02-18 04:32:58 +04:00
|
|
|
/* if this is the local leader (i.e., local_rank=0), then record it */
|
|
|
|
if (0 == local_rank) {
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_CONSTRUCT(&kv, opal_value_t);
|
|
|
|
kv.key = strdup(OPAL_DSTORE_LOCALLDR);
|
|
|
|
kv.type = OPAL_UINT64;
|
|
|
|
kv.data.uint64 = *(opal_identifier_t*)&proc;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_internal,
|
|
|
|
(opal_identifier_t*)ORTE_PROC_MY_NAME, &kv))) {
|
2014-02-18 04:32:58 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2014-02-18 04:32:58 +04:00
|
|
|
goto cleanup;
|
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2014-02-18 04:32:58 +04:00
|
|
|
}
|
2012-08-29 01:20:17 +04:00
|
|
|
}
|
|
|
|
/* apps don't need the rest of the data in the buffer for this proc,
|
|
|
|
* but we have to unpack it anyway to stay in sync
|
|
|
|
*/
|
|
|
|
n=1;
|
|
|
|
if (OPAL_SUCCESS != (rc = opal_dss.unpack(&buf, &state, &n, ORTE_PROC_STATE))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
n=1;
|
|
|
|
if (OPAL_SUCCESS != (rc = opal_dss.unpack(&buf, &app_idx, &n, ORTE_APP_IDX))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
n=1;
|
2012-10-30 03:11:30 +04:00
|
|
|
if (OPAL_SUCCESS != (rc = opal_dss.unpack(&buf, &barrier, &n, OPAL_BOOL))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
n=1;
|
2012-08-29 01:20:17 +04:00
|
|
|
if (OPAL_SUCCESS != (rc = opal_dss.unpack(&buf, &restarts, &n, OPAL_INT32))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
2013-09-27 04:37:49 +04:00
|
|
|
/* store the values in the database - again, these are for our own internal use */
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_CONSTRUCT(&kv, opal_value_t);
|
|
|
|
kv.key = strdup(OPAL_DSTORE_LOCALRANK);
|
|
|
|
kv.type = OPAL_UINT16;
|
|
|
|
kv.data.uint16 = local_rank;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_internal,
|
|
|
|
(opal_identifier_t*)&proc, &kv))) {
|
2012-06-27 18:53:55 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2012-06-27 18:53:55 +04:00
|
|
|
goto cleanup;
|
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
|
|
|
OBJ_CONSTRUCT(&kv, opal_value_t);
|
|
|
|
kv.key = strdup(ORTE_DB_NODERANK);
|
|
|
|
kv.type = OPAL_UINT16;
|
|
|
|
kv.data.uint16 = node_rank;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_internal,
|
|
|
|
(opal_identifier_t*)&proc, &kv))) {
|
2012-06-27 18:53:55 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2008-11-24 20:57:55 +03:00
|
|
|
goto cleanup;
|
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2011-11-18 14:22:58 +04:00
|
|
|
#if OPAL_HAVE_HWLOC
|
2013-03-26 23:14:23 +04:00
|
|
|
if (NULL != cpu_bitmap) {
|
2014-04-30 22:12:48 +04:00
|
|
|
OBJ_CONSTRUCT(&kv, opal_value_t);
|
|
|
|
kv.key = strdup(OPAL_DSTORE_CPUSET);
|
|
|
|
kv.type = OPAL_STRING;
|
|
|
|
kv.data.string = strdup(cpu_bitmap);
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_internal,
|
|
|
|
(opal_identifier_t*)&proc, &kv))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
OBJ_DESTRUCT(&kv);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
2014-04-30 23:29:00 +04:00
|
|
|
/* also need a copy in nonpeer to support dynamic spawns */
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_nonpeer,
|
|
|
|
(opal_identifier_t*)&proc, &kv))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
OBJ_DESTRUCT(&kv);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
2014-04-30 22:12:48 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2013-03-26 23:14:23 +04:00
|
|
|
free(cpu_bitmap);
|
|
|
|
}
|
2012-06-27 18:53:55 +04:00
|
|
|
#endif
|
2012-08-30 00:35:52 +04:00
|
|
|
/* we don't need to store the rest of the values
|
|
|
|
* for ourselves in the database
|
|
|
|
* as we already did so during startup
|
|
|
|
*/
|
|
|
|
if (proc.jobid != ORTE_PROC_MY_NAME->jobid ||
|
|
|
|
proc.vpid != ORTE_PROC_MY_NAME->vpid) {
|
2013-09-27 04:37:49 +04:00
|
|
|
/* store the data for this proc - the location of a proc is something
|
|
|
|
* we would potentially need to share with a non-peer
|
|
|
|
*/
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_CONSTRUCT(&kv, opal_value_t);
|
|
|
|
kv.key = strdup(ORTE_DB_DAEMON_VPID);
|
|
|
|
kv.type = OPAL_UINT32;
|
|
|
|
kv.data.uint32 = dmn.vpid;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_nonpeer,
|
|
|
|
(opal_identifier_t*)&proc, &kv))) {
|
2012-08-30 00:35:52 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2012-08-30 00:35:52 +04:00
|
|
|
goto cleanup;
|
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
****************************************************************
This change contains a non-mandatory modification
of the MPI-RTE interface. Anyone wishing to support
coprocessors such as the Xeon Phi may wish to add
the required definition and underlying support
****************************************************************
Add locality support for coprocessors such as the Intel Xeon Phi.
Detecting that we are on a coprocessor inside of a host node isn't straightforward. There are no good "hooks" for programmatically detecting that "we are on a coprocessor running its own OS", and the ORTE daemon just thinks it is on another node. However, in order to properly use the Phi's public interface for MPI transport, the daemon must detect that it is colocated with procs on the host.
So we have to split the locality to separately record "on the same host" vs "on the same board". We already have the board-level locality flag, but not quite enough flexibility to handle this use case. Thus, do the following:
1. add an OPAL_PROC_ON_HOST flag to indicate we share a host, but not necessarily the same board
2. modify OPAL_PROC_ON_NODE to indicate we share both a host AND the same board. Note that we have to modify the OPAL_PROC_ON_LOCAL_NODE macro to explicitly check both conditions (see the sketch just after this message)
3. add support in opal/mca/hwloc/base/hwloc_base_util.c for the host to check for coprocessors, and for daemons to check whether they are on a coprocessor. The former is done via hwloc, but support for the latter is not yet provided by hwloc, so the code for detecting that we are on a coprocessor is currently Xeon Phi specific - hopefully, more generic methods will be found in the future.
4. modify the orted and HNP startup so they check for coprocessors, determine whether they are themselves on a coprocessor, and have the orteds pass that info back in their callback message. Automatically detect that coprocessors have been found and identify which coprocessors are on which hosts. Note that this algorithm isn't scalable at the moment - it will hopefully be improved over time.
5. modify the ompi proc locality detection function to look for coprocessor host info IF the OMPI_RTE_HOST_ID database key has been defined. RTEs that choose not to provide this support do not have to do anything - the associated code will simply be ignored.
6. include some cleanup of the hwloc open/close code so it conforms to how we did things in other frameworks (e.g., having a single "frame" file instead of open/close). Also, fix the locality flags - e.g., being on the same node means you must also be on the same cluster/cu, so ensure those flags are also set.
cmr:v1.7.4:reviewer=hjelmn
This commit was SVN r29435.
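For illustration only, here is a minimal sketch of how the split locality bits described in point 2 can compose - the names and values below are placeholders, not the actual OPAL definitions:
/* illustrative placeholders, not the real OPAL locality flags or values */
#include <stdint.h>
typedef uint16_t locality_t;
#define LOC_ON_HOST    0x0001   /* same physical host, possibly a different board */
#define LOC_ON_BOARD   0x0002   /* same board within that host */
#define LOC_ON_NODE    (LOC_ON_HOST | LOC_ON_BOARD)   /* "same node" now requires both */
/* the local-node test must explicitly check both bits, per point 2 above */
#define ON_LOCAL_NODE(loc)   (((loc) & LOC_ON_NODE) == LOC_ON_NODE)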
2013-10-14 20:52:58 +04:00
|
|
|
/* if coprocessors were detected, lookup and store the hostid for this proc */
|
|
|
|
if (orte_coprocessors_detected) {
|
|
|
|
/* lookup the hostid for this daemon */
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_CONSTRUCT(&myvals, opal_list_t);
|
2014-05-07 23:29:12 +04:00
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.fetch(opal_dstore_nonpeer,
|
2014-04-30 01:49:23 +04:00
|
|
|
(opal_identifier_t*)&dmn,
|
|
|
|
ORTE_DB_HOSTID, &myvals))) {
|
2013-10-14 20:52:58 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OPAL_LIST_DESTRUCT(&myvals);
|
2013-10-14 20:52:58 +04:00
|
|
|
goto cleanup;
|
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
kvp = (opal_value_t*)opal_list_get_first(&myvals);
|
2013-10-14 20:52:58 +04:00
|
|
|
OPAL_OUTPUT_VERBOSE((2, orte_nidmap_output,
|
|
|
|
"%s FOUND HOSTID %s FOR DAEMON %s",
|
|
|
|
ORTE_NAME_PRINT(ORTE_PROC_MY_NAME),
|
2014-04-30 01:49:23 +04:00
|
|
|
ORTE_VPID_PRINT(kvp->data.uint32), ORTE_VPID_PRINT(dmn.vpid)));
|
2013-10-14 20:52:58 +04:00
|
|
|
/* store it as hostid for this proc */
|
2014-04-30 01:49:23 +04:00
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_nonpeer,
|
|
|
|
(opal_identifier_t*)&proc, kvp))) {
|
2013-10-14 20:52:58 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OPAL_LIST_DESTRUCT(&myvals);
|
2013-10-14 20:52:58 +04:00
|
|
|
goto cleanup;
|
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
OPAL_LIST_DESTRUCT(&myvals);
|
2013-10-14 20:52:58 +04:00
|
|
|
}
|
2013-08-22 07:40:26 +04:00
|
|
|
/* lookup and store the hostname for this proc */
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_CONSTRUCT(&myvals, opal_list_t);
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.fetch(opal_dstore_internal,
|
|
|
|
(opal_identifier_t*)&dmn,
|
|
|
|
ORTE_DB_HOSTNAME, &myvals))) {
|
2013-08-22 07:40:26 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OPAL_LIST_DESTRUCT(&myvals);
|
2013-08-22 07:40:26 +04:00
|
|
|
goto cleanup;
|
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
kvp = (opal_value_t*)opal_list_get_first(&myvals);
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_nonpeer,
|
|
|
|
(opal_identifier_t*)&proc, kvp))) {
|
2013-08-22 07:40:26 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OPAL_LIST_DESTRUCT(&myvals);
|
2013-08-22 07:40:26 +04:00
|
|
|
goto cleanup;
|
2012-08-30 00:35:52 +04:00
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
OPAL_LIST_DESTRUCT(&myvals);
|
2013-11-14 21:01:43 +04:00
|
|
|
/* store this proc's global rank - only used by us */
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_CONSTRUCT(&kv, opal_value_t);
|
|
|
|
kv.key = strdup(ORTE_DB_GLOBAL_RANK);
|
|
|
|
kv.type = OPAL_UINT32;
|
|
|
|
kv.data.uint32 = proc.vpid + offset;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_internal,
|
|
|
|
(opal_identifier_t*)&proc, &kv))) {
|
2013-11-14 21:01:43 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2013-11-14 21:01:43 +04:00
|
|
|
goto cleanup;
|
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2013-11-14 21:01:43 +04:00
|
|
|
} else {
|
|
|
|
/* update our own global rank - this is something we will need
|
|
|
|
* to share with non-peers
|
|
|
|
*/
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_CONSTRUCT(&kv, opal_value_t);
|
|
|
|
kv.key = strdup(ORTE_DB_GLOBAL_RANK);
|
|
|
|
kv.type = OPAL_UINT32;
|
|
|
|
kv.data.uint32 = proc.vpid + offset;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dstore.store(opal_dstore_nonpeer,
|
|
|
|
(opal_identifier_t*)&proc, &kv))) {
|
2013-11-14 21:01:43 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2013-11-14 21:01:43 +04:00
|
|
|
goto cleanup;
|
|
|
|
}
|
2014-04-30 01:49:23 +04:00
|
|
|
OBJ_DESTRUCT(&kv);
|
2012-08-30 00:35:52 +04:00
|
|
|
}
|
2008-11-18 18:35:50 +03:00
|
|
|
}
|
2012-11-10 18:09:12 +04:00
|
|
|
/* see if there is a file map */
|
2012-10-30 03:11:30 +04:00
|
|
|
n=1;
|
|
|
|
if (OPAL_SUCCESS != (rc = opal_dss.unpack(&buf, &flag, &n, OPAL_UINT8))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
if (0 != flag) {
|
2012-11-10 18:09:12 +04:00
|
|
|
/* unpack it and discard */
|
2012-10-30 03:11:30 +04:00
|
|
|
n=1;
|
2012-11-10 18:09:12 +04:00
|
|
|
if (OPAL_SUCCESS != (rc = opal_dss.unpack(&buf, &bptr, &n, OPAL_BUFFER))) {
|
2012-10-30 03:11:30 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
2012-11-10 18:09:12 +04:00
|
|
|
OBJ_RELEASE(bptr);
|
2012-10-30 03:11:30 +04:00
|
|
|
}
|
2008-11-24 20:57:55 +03:00
|
|
|
/* setup for next cycle */
|
|
|
|
n = 1;
|
|
|
|
}
|
2012-05-27 22:37:57 +04:00
|
|
|
if (ORTE_ERR_UNPACK_READ_PAST_END_OF_BUFFER != rc) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
2008-04-30 23:49:53 +04:00
|
|
|
}
|
2013-09-27 04:37:49 +04:00
|
|
|
rc = ORTE_SUCCESS;
|
2013-10-02 05:46:09 +04:00
|
|
|
|
2011-11-15 07:40:11 +04:00
|
|
|
cleanup:
|
2008-04-30 23:49:53 +04:00
|
|
|
OBJ_DESTRUCT(&buf);
|
2008-11-24 20:57:55 +03:00
|
|
|
return rc;
|
2008-04-30 23:49:53 +04:00
|
|
|
}
|
|
|
|
|
2012-10-30 03:11:30 +04:00
|
|
|
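/* callback used when handing the unpacked file map to orte_dfs.load_file_maps: releases the buffer once the DFS framework is done with it */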
static void fm_release(void *cbdata)
|
|
|
|
{
|
2012-11-10 18:09:12 +04:00
|
|
|
opal_buffer_t *bptr = (opal_buffer_t*)cbdata;
|
2012-10-30 03:11:30 +04:00
|
|
|
|
2012-11-10 18:09:12 +04:00
|
|
|
OBJ_RELEASE(bptr);
|
2012-10-30 03:11:30 +04:00
|
|
|
}
|
|
|
|
|
2012-04-29 04:10:01 +04:00
|
|
|
int orte_util_decode_daemon_pidmap(opal_byte_object_t *bo)
|
|
|
|
{
|
|
|
|
orte_jobid_t jobid;
|
2012-08-29 01:20:17 +04:00
|
|
|
orte_vpid_t vpid, num_procs, dmn;
|
|
|
|
orte_local_rank_t local_rank;
|
|
|
|
orte_node_rank_t node_rank;
|
2012-04-29 04:10:01 +04:00
|
|
|
#if OPAL_HAVE_HWLOC
|
2012-08-30 00:35:52 +04:00
|
|
|
char *cpu_bitmap;
|
2012-04-29 04:10:01 +04:00
|
|
|
#endif
|
|
|
|
orte_std_cntr_t n;
|
|
|
|
opal_buffer_t buf;
|
2012-05-03 01:00:22 +04:00
|
|
|
int rc, j, k;
|
2012-08-14 22:17:59 +04:00
|
|
|
orte_job_t *jdata, *daemons;
|
2012-05-01 20:41:35 +04:00
|
|
|
orte_proc_t *proc, *pptr;
|
2012-05-03 01:00:22 +04:00
|
|
|
orte_node_t *node, *nptr;
|
2012-08-29 01:20:17 +04:00
|
|
|
orte_proc_state_t state;
|
|
|
|
orte_app_idx_t app_idx;
|
|
|
|
int32_t restarts;
|
2012-05-03 01:00:22 +04:00
|
|
|
orte_job_map_t *map;
|
|
|
|
bool found;
|
2012-10-30 03:11:30 +04:00
|
|
|
uint8_t flag;
|
2012-11-10 18:09:12 +04:00
|
|
|
opal_buffer_t *bptr;
|
2012-10-30 03:11:30 +04:00
|
|
|
bool barrier;
|
2012-04-29 04:10:01 +04:00
|
|
|
|
|
|
|
/* xfer the byte object to a buffer for unpacking */
|
|
|
|
OBJ_CONSTRUCT(&buf, opal_buffer_t);
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.load(&buf, bo->bytes, bo->size))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
|
2012-08-14 22:17:59 +04:00
|
|
|
daemons = orte_get_job_data_object(ORTE_PROC_MY_NAME->jobid);
|
|
|
|
|
2012-04-29 04:10:01 +04:00
|
|
|
n = 1;
|
|
|
|
/* cycle through the buffer */
|
|
|
|
while (ORTE_SUCCESS == (rc = opal_dss.unpack(&buf, &jobid, &n, ORTE_JOBID))) {
|
|
|
|
/* see if we have this job object - could be a restart scenario */
|
|
|
|
if (NULL == (jdata = orte_get_job_data_object(jobid))) {
|
|
|
|
/* need to create this job */
|
|
|
|
jdata = OBJ_NEW(orte_job_t);
|
|
|
|
jdata->jobid = jobid;
|
|
|
|
opal_pointer_array_set_item(orte_job_data, ORTE_LOCAL_JOBID(jobid), jdata);
|
|
|
|
}
|
|
|
|
|
2012-09-07 00:50:07 +04:00
|
|
|
/* setup the map */
|
|
|
|
map = jdata->map;
|
|
|
|
if (NULL == map) {
|
|
|
|
jdata->map = OBJ_NEW(orte_job_map_t);
|
|
|
|
map = jdata->map;
|
|
|
|
}
|
|
|
|
|
2012-04-29 04:10:01 +04:00
|
|
|
/* unpack the number of procs */
|
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &num_procs, &n, ORTE_VPID))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
jdata->num_procs = num_procs;
|
|
|
|
|
2013-11-14 21:01:43 +04:00
|
|
|
/* unpack the offset */
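/* note: num_procs is reused below as scratch space to hold the unpacked offset before it is copied into jdata->offset */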
|
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &num_procs, &n, ORTE_VPID))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
jdata->offset = num_procs;
|
|
|
|
|
2012-08-29 01:20:17 +04:00
|
|
|
/* cycle thru the data until we hit an INVALID vpid indicating
|
|
|
|
* all data for this job has been read
|
|
|
|
*/
|
|
|
|
n=1;
|
|
|
|
while (OPAL_SUCCESS == (rc = opal_dss.unpack(&buf, &vpid, &n, ORTE_VPID))) {
|
|
|
|
if (ORTE_VPID_INVALID == vpid) {
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &dmn, &n, ORTE_VPID))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &local_rank, &n, ORTE_LOCAL_RANK))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &node_rank, &n, ORTE_NODE_RANK))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
#if OPAL_HAVE_HWLOC
|
2012-08-30 00:35:52 +04:00
|
|
|
n=1;
|
|
|
|
if (ORTE_SUCCESS != (rc = opal_dss.unpack(&buf, &cpu_bitmap, &n, OPAL_STRING))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
2012-08-29 01:20:17 +04:00
|
|
|
#endif
|
|
|
|
n=1;
|
|
|
|
if (OPAL_SUCCESS != (rc = opal_dss.unpack(&buf, &state, &n, ORTE_PROC_STATE))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
n=1;
|
|
|
|
if (OPAL_SUCCESS != (rc = opal_dss.unpack(&buf, &app_idx, &n, ORTE_APP_IDX))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
n=1;
|
2012-10-30 03:11:30 +04:00
|
|
|
if (OPAL_SUCCESS != (rc = opal_dss.unpack(&buf, &barrier, &n, OPAL_BOOL))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
n=1;
|
2012-08-29 01:20:17 +04:00
|
|
|
if (OPAL_SUCCESS != (rc = opal_dss.unpack(&buf, &restarts, &n, OPAL_INT32))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
/* store the data for this proc */
|
|
|
|
if (NULL == (proc = (orte_proc_t*)opal_pointer_array_get_item(jdata->procs, vpid))) {
|
2012-04-29 04:10:01 +04:00
|
|
|
proc = OBJ_NEW(orte_proc_t);
|
|
|
|
proc->name.jobid = jdata->jobid;
|
2012-08-29 01:20:17 +04:00
|
|
|
proc->name.vpid = vpid;
|
|
|
|
opal_pointer_array_set_item(jdata->procs, vpid, proc);
|
2012-04-29 04:10:01 +04:00
|
|
|
}
|
2012-08-15 02:26:40 +04:00
|
|
|
/* lookup the node - should always be present */
|
2012-08-29 01:20:17 +04:00
|
|
|
if (NULL == (node = (orte_node_t*)opal_pointer_array_get_item(orte_node_pool, dmn))) {
|
2012-05-30 00:11:51 +04:00
|
|
|
/* this should never happen, but protect ourselves anyway */
|
|
|
|
node = OBJ_NEW(orte_node_t);
|
2012-08-15 02:26:40 +04:00
|
|
|
/* get the daemon */
|
2012-08-29 01:20:17 +04:00
|
|
|
if (NULL == (pptr = (orte_proc_t*)opal_pointer_array_get_item(daemons->procs, dmn))) {
|
2012-08-14 22:17:59 +04:00
|
|
|
pptr = OBJ_NEW(orte_proc_t);
|
|
|
|
pptr->name.jobid = ORTE_PROC_MY_NAME->jobid;
|
2012-08-29 01:20:17 +04:00
|
|
|
pptr->name.vpid = dmn;
|
|
|
|
opal_pointer_array_set_item(daemons->procs, dmn, pptr);
|
2012-08-14 22:17:59 +04:00
|
|
|
}
|
|
|
|
node->daemon = pptr;
|
2012-08-29 01:20:17 +04:00
|
|
|
opal_pointer_array_set_item(orte_node_pool, dmn, node);
|
2012-05-30 00:11:51 +04:00
|
|
|
}
|
2012-04-29 04:10:01 +04:00
|
|
|
if (NULL != proc->node) {
|
2012-05-01 20:41:35 +04:00
|
|
|
if (node != proc->node) {
|
|
|
|
/* proc has moved - clean up the prior node's proc array */
|
|
|
|
for (j=0; j < proc->node->procs->size; j++) {
|
|
|
|
if (NULL == (pptr = (orte_proc_t*)opal_pointer_array_get_item(proc->node->procs, j))) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if (pptr == proc) {
|
|
|
|
/* maintain accounting */
|
|
|
|
OBJ_RELEASE(pptr);
|
|
|
|
opal_pointer_array_set_item(proc->node->procs, j, NULL);
|
|
|
|
proc->node->num_procs--;
|
2012-05-03 01:00:22 +04:00
|
|
|
if (0 == proc->node->num_procs) {
|
|
|
|
/* remove node from the map */
|
|
|
|
for (k=0; k < map->nodes->size; k++) {
|
|
|
|
if (NULL == (nptr = (orte_node_t*)opal_pointer_array_get_item(map->nodes, k))) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if (nptr == proc->node) {
|
|
|
|
/* maintain accounting */
|
|
|
|
OBJ_RELEASE(nptr);
|
|
|
|
opal_pointer_array_set_item(map->nodes, k, NULL);
|
|
|
|
map->num_nodes--;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
2012-05-01 20:41:35 +04:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
2012-04-29 04:10:01 +04:00
|
|
|
OBJ_RELEASE(proc->node);
|
|
|
|
}
|
2012-05-03 01:00:22 +04:00
|
|
|
/* see if this node is already in the map */
|
|
|
|
found = false;
|
|
|
|
for (j=0; j < map->nodes->size; j++) {
|
|
|
|
if (NULL == (nptr = (orte_node_t*)opal_pointer_array_get_item(map->nodes, j))) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if (nptr == node) {
|
|
|
|
found = true;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (!found) {
|
|
|
|
opal_pointer_array_add(map->nodes, node);
|
|
|
|
map->num_nodes++;
|
|
|
|
}
|
2012-05-01 20:41:35 +04:00
|
|
|
/* add the node to the proc */
|
2012-04-29 04:10:01 +04:00
|
|
|
OBJ_RETAIN(node);
|
|
|
|
proc->node = node;
|
2012-05-01 20:41:35 +04:00
|
|
|
/* add the proc to the node */
|
|
|
|
OBJ_RETAIN(proc);
|
|
|
|
opal_pointer_array_add(node->procs, proc);
|
|
|
|
/* update proc values */
|
2012-08-29 01:20:17 +04:00
|
|
|
proc->local_rank = local_rank;
|
|
|
|
proc->node_rank = node_rank;
|
|
|
|
proc->app_idx = app_idx;
|
2012-10-30 03:11:30 +04:00
|
|
|
proc->do_not_barrier = barrier;
|
2012-08-29 01:20:17 +04:00
|
|
|
proc->restarts = restarts;
|
|
|
|
proc->state = state;
|
2012-08-30 00:35:52 +04:00
|
|
|
#if OPAL_HAVE_HWLOC
|
|
|
|
proc->cpu_bitmap = cpu_bitmap;
|
|
|
|
#endif
|
2012-04-29 04:10:01 +04:00
|
|
|
}
|
2012-10-30 03:11:30 +04:00
|
|
|
/* see if we have a file map for this job */
|
|
|
|
n=1;
|
|
|
|
if (OPAL_SUCCESS != (rc = opal_dss.unpack(&buf, &flag, &n, OPAL_UINT8))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
if (0 != flag) {
|
|
|
|
/* yep - retrieve and load it */
|
|
|
|
n=1;
|
2012-11-10 18:09:12 +04:00
|
|
|
if (OPAL_SUCCESS != (rc = opal_dss.unpack(&buf, &bptr, &n, OPAL_BUFFER))) {
|
2012-10-30 03:11:30 +04:00
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
if (NULL != orte_dfs.load_file_maps) {
|
2012-11-10 18:09:12 +04:00
|
|
|
orte_dfs.load_file_maps(jdata->jobid, bptr, fm_release, bptr);
|
2012-10-30 03:11:30 +04:00
|
|
|
}
|
|
|
|
}
|
2012-04-29 04:10:01 +04:00
|
|
|
/* setup for next cycle */
|
|
|
|
n = 1;
|
|
|
|
}
|
2012-08-29 01:20:17 +04:00
|
|
|
if (ORTE_ERR_UNPACK_READ_PAST_END_OF_BUFFER != rc) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
|
|
|
goto cleanup;
|
2012-04-29 04:10:01 +04:00
|
|
|
}
|
2012-08-29 01:20:17 +04:00
|
|
|
rc = ORTE_SUCCESS;
|
2012-04-29 04:10:01 +04:00
|
|
|
|
2012-08-29 01:20:17 +04:00
|
|
|
/* update our global pidmap object for sending
|
|
|
|
* to procs
|
|
|
|
*/
|
|
|
|
if (NULL != orte_pidmap.bytes) {
|
|
|
|
free(orte_pidmap.bytes);
|
2012-04-29 04:10:01 +04:00
|
|
|
}
|
2012-08-29 01:20:17 +04:00
|
|
|
if (ORTE_SUCCESS != (rc = orte_util_encode_pidmap(&orte_pidmap, false))) {
|
|
|
|
ORTE_ERROR_LOG(rc);
|
2012-04-29 04:10:01 +04:00
|
|
|
}
|
2012-08-29 01:20:17 +04:00
|
|
|
|
|
|
|
cleanup:
|
2012-04-29 04:10:01 +04:00
|
|
|
OBJ_DESTRUCT(&buf);
|
|
|
|
return rc;
|
|
|
|
}
|