/*
 * Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation.  All rights reserved.
 * Copyright (c) 2004-2005 The University of Tennessee and The University
 *                         of Tennessee Research Foundation.  All rights
 *                         reserved.
 * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart.  All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

#include "orte_config.h"

#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#include <stdlib.h>
#include <errno.h>

#include "opal/util/opal_environ.h"
#include "opal/util/output.h"
#include "opal/mca/base/mca_base_param.h"

#include "orte/orte_constants.h"
#include "orte/mca/sds/base/base.h"
#include "orte/mca/ns/base/base.h"
#include "orte/mca/ns/ns.h"
#include "orte/mca/errmgr/base/base.h"

/*
 * Squeeeeeeze the launch message.  This is the message sent to the daemons
 * that provides all the data required for launching their local procs.  In
 * reorganizing the ODLS framework, I discovered that we were sending a
 * significant amount of unnecessary and repeated data.  This commit resolves
 * that by:
 *
 * 1. Taking advantage of the fact that we no longer create the launch
 *    message via a GPR trigger.  In earlier times, we had the GPR create the
 *    launch message based on a subscription.  In that mode of operation, we
 *    could not guarantee the order in which the data was stored in the
 *    message - hence, we had no choice but to parse the message in a loop
 *    that checked each value against a list of possible "keys" until the
 *    corresponding value was found.  Now, however, we construct the message
 *    "by hand", so we know precisely what data is in each location in the
 *    message.  Thus, we no longer need to send the character-string "keys"
 *    for each data value.  This represents a rather large savings in the
 *    message size - to give one example, we typically used a 30-char "key"
 *    for a 2-byte data value, so the overhead can become very large.
 *
 * 2. Sending node-specific data only once.  Because we used to construct the
 *    message via subscriptions done on a per-proc basis, the data for each
 *    node (e.g., the daemon's name, whether or not the node was
 *    oversubscribed) was included in the data for each proc - that is, the
 *    node-specific data was repeated for every proc.  Now that we construct
 *    the message "by hand", we can insert the data for a specific node only
 *    once, followed by the per-proc data for that node.  We therefore not
 *    only save all that extra data in the message, but we also only need to
 *    parse the per-node data once.
 *
 * The savings become significant at scale.  Here is a comparison between the
 * revised trunk and the trunk prior to this commit (all data taken on odin,
 * using openib, 64 nodes, unity message routing, tested with an application
 * consisting of mpi_init/mpi_barrier/mpi_finalize; execution times in
 * seconds, launch message sizes in bytes):
 *
 * Per-node scaling, taken at 1 ppn:
 *
 *   #nodes    original trunk       revised trunk
 *             time     size        time     size
 *      1      0.10       819       0.09       564
 *      2      0.14      1070       0.14       677
 *      3      0.15      1321       0.14       790
 *      4      0.15      1572       0.15       903
 *      8      0.17      2576       0.20      1355
 *     16      0.25      4584       0.21      2259
 *     32      0.28      8600       0.27      4067
 *     64      0.50     16632       0.39      7683
 *
 * Per-proc scaling, taken at 64 nodes:
 *
 *   ppn       original trunk       revised trunk
 *             time     size        time     size
 *      1      0.50     16669       0.40      7720
 *      2      0.55     32733       0.54     11048
 *      3      0.87     48797       0.81     14376
 *      4      1.0      64861       0.85     17704
 *
 * Condensing those numbers, it appears we gained:
 *
 *   per-node message size:  251 bytes/node -> 113 bytes/node
 *   per-proc message size:  251 bytes/proc ->  52 bytes/proc
 *   per-job message size:   568 bytes/job  -> 399 bytes/job
 *     (job-specific data such as the jobid, the override-oversubscribe flag,
 *      the total #procs in the job, and the total slots allocated)
 *
 * The fact that the two pre-commit trunk numbers are the same confirms that
 * each proc was carrying the node data as well.  It isn't quite the 10x
 * message reduction I had hoped for, but it is significant and gives much
 * better scaling.
 *
 * Note that the timing info was, as usual, pretty chaotic - the numbers
 * cited here were typical across several runs taken after the initial one,
 * to avoid NFS file-positioning influences.
 *
 * Also note that this commit removes the orte_process_info.vpid_start field
 * and the handful of places that passed that useless value.  By definition,
 * all jobs start at vpid=0, so all we were doing was passing "0" around; in
 * fact, many places simply hardwired it to "0" anyway rather than deal with
 * it.
 *
 * This commit was SVN r16428.
 */
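
/*
 * Illustrative sketch (hypothetical, not part of the original file and not
 * compiled here): a toy comparison of the keyed layout described above
 * versus a positional layout for a single 2-byte value.  The key string is
 * a made-up example, not an actual ORTE key, and this is not the real ORTE
 * buffer format - it only shows where the per-value savings come from.
 */
#if 0
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    const char *key = "example-node-oversubscribed-flag";  /* ~30-char key */
    uint16_t value = 1;                                     /* 2-byte value */

    /* keyed layout: the key string (plus its terminator) travels with
     * every value so the receiver can match it in a lookup loop */
    size_t keyed = strlen(key) + 1 + sizeof(value);

    /* positional layout: sender and receiver agree on the packing order,
     * so only the value itself is sent */
    size_t positional = sizeof(value);

    printf("keyed: %zu bytes, positional: %zu bytes\n", keyed, positional);
    return 0;
}
#endif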

/*
 * Compute and pass the local_rank and the local number of procs (in that
 * proc's job) on the node.
 *
 * To be precise, given this hypothetical launching pattern:
 *
 *     host1: vpids 0, 2, 4, 6
 *     host2: vpids 1, 3, 5, 7
 *
 * the local_rank for these procs would be:
 *
 *     host1: vpid 0 -> local_rank 0, vpid 2 -> 1, vpid 4 -> 2, vpid 6 -> 3
 *     host2: vpid 1 -> local_rank 0, vpid 3 -> 1, vpid 5 -> 2, vpid 7 -> 3
 *
 * and the number of local procs on each node would be four.  If vpid 0 then
 * performs a comm_spawn of one process on host1, the values for the parent
 * job remain unchanged; the local_rank of the child process would be 0 and
 * its num_local_procs would be 1, since it is in a separate jobid.
 *
 * I have verified this functionality for the rsh case - the slurm and other
 * cases still need to be verified to ensure they also get the right values.
 * Some consolidation of common code will probably occur in the SDS
 * components to make this simpler and more maintainable in the future.
 *
 * This commit was SVN r14706.
 */
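
/*
 * Illustrative sketch (hypothetical, not part of the original file and not
 * compiled here): one way a launcher could derive local_rank and
 * num_local_procs for the mapping described above, assuming it already
 * knows the ordered list of vpids mapped to a given node.
 */
#if 0
#include <stdio.h>
#include <stddef.h>

int main(void)
{
    /* vpids mapped to "host1" in the example above */
    unsigned long host1_vpids[] = { 0, 2, 4, 6 };
    size_t num_local_procs = sizeof(host1_vpids) / sizeof(host1_vpids[0]);

    /* local_rank is simply the proc's position in the node's vpid list */
    for (size_t local_rank = 0; local_rank < num_local_procs; local_rank++) {
        printf("vpid %lu -> local_rank %zu (of %zu local procs)\n",
               host1_vpids[local_rank], local_rank, num_local_procs);
    }
    return 0;
}
#endif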

int orte_ns_nds_env_put(orte_std_cntr_t num_procs,
                        orte_std_cntr_t num_local_procs,
                        char ***env)
{
    char* param;
    char* value;

    /* set the mode to env */
    if(NULL == (param = mca_base_param_environ_variable("ns","nds",NULL))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_setenv(param, "env", true, env);
    free(param);

    /* not a seed */
    if(NULL == (param = mca_base_param_environ_variable("seed",NULL,NULL))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_unsetenv(param, env);
    free(param);

    /* since we want to pass the name as separate components, make sure
     * that the "name" environmental variable is cleared!
     */
    if(NULL == (param = mca_base_param_environ_variable("ns","nds","name"))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_unsetenv(param, env);
    free(param);

    asprintf(&value, "%lu", (unsigned long) num_procs);
    if(NULL == (param = mca_base_param_environ_variable("ns","nds","num_procs"))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_setenv(param, value, true, env);
    free(param);
    free(value);

    asprintf(&value, "%lu", (unsigned long) num_local_procs);
    if(NULL == (param = mca_base_param_environ_variable("ns","nds","num_local_procs"))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_setenv(param, value, true, env);
    free(param);
    free(value);

    return ORTE_SUCCESS;
}
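
/*
 * Illustrative sketch (hypothetical, not part of the original file and not
 * compiled here): roughly what the "env" mode looks like from the launched
 * process's point of view.  The variable names shown assume the usual
 * "OMPI_MCA_<framework>_<component>_<param>" translation performed by
 * mca_base_param_environ_variable(); the real decoding is done by the
 * corresponding sds component, not by code like this.
 */
#if 0
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *mode  = getenv("OMPI_MCA_ns_nds");            /* "env" */
    const char *nproc = getenv("OMPI_MCA_ns_nds_num_procs");
    const char *local = getenv("OMPI_MCA_ns_nds_num_local_procs");

    if (NULL == mode || NULL == nproc || NULL == local) {
        fprintf(stderr, "name discovery environment not set\n");
        return 1;
    }
    printf("mode=%s num_procs=%lu num_local_procs=%lu\n",
           mode, strtoul(nproc, NULL, 10), strtoul(local, NULL, 10));
    return 0;
}
#endif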

/**
 * sets up the environment so that a process launched with the bproc launcher
 * can figure out its name
 * @param job the job the process belongs to
 * @param vpid_start the starting vpid for the current parallel launch
 * @param global_vpid_start the starting vpid for the job
 * @param num_procs the number of user processes in the job
 * @param local_rank the rank of this process on its node within its job
 * @param num_local_procs the number of procs from this job on the node
 * @param env a pointer to the environment to setup
 * @retval ORTE_SUCCESS
 * @retval error
 */
int orte_ns_nds_bproc_put(orte_jobid_t job,
                          orte_vpid_t vpid_start, orte_vpid_t global_vpid_start,
                          orte_std_cntr_t num_procs,
                          orte_vpid_t local_rank,
                          orte_std_cntr_t num_local_procs,
                          char ***env)
{
    char* param;
    char* value;
    int rc;

    /* set the mode to bproc */
    if(NULL == (param = mca_base_param_environ_variable("ns","nds",NULL))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_setenv(param, "bproc", true, env);
    free(param);

    /* not a seed */
    if(NULL == (param = mca_base_param_environ_variable("seed",NULL,NULL))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_unsetenv(param, env);
    free(param);

    /* since we want to pass the name as separate components, make sure
     * that the "name" environmental variable is cleared!
     */
    if(NULL == (param = mca_base_param_environ_variable("ns","nds","name"))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_unsetenv(param, env);
    free(param);

    /* setup the name */
    if(ORTE_SUCCESS != (rc = orte_ns.convert_jobid_to_string(&value, job))) {
        ORTE_ERROR_LOG(rc);
        return rc;
    }
    if(NULL == (param = mca_base_param_environ_variable("ns","nds","jobid"))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_setenv(param, value, true, env);
    free(param);
    free(value);

    rc = orte_ns.convert_vpid_to_string(&value, vpid_start);
    if (ORTE_SUCCESS != rc) {
        ORTE_ERROR_LOG(rc);
        return(rc);
    }
    if(NULL == (param = mca_base_param_environ_variable("ns","nds","vpid_start"))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_setenv(param, value, true, env);
    free(param);
    free(value);

    rc = orte_ns.convert_vpid_to_string(&value, global_vpid_start);
    if (ORTE_SUCCESS != rc) {
        ORTE_ERROR_LOG(rc);
        return(rc);
    }
    if(NULL == (param = mca_base_param_environ_variable("ns","nds","global_vpid_start"))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_setenv(param, value, true, env);
    free(param);
    free(value);

    asprintf(&value, "%d", (int)num_procs);
    if(NULL == (param = mca_base_param_environ_variable("ns","nds","num_procs"))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_setenv(param, value, true, env);
    free(param);
    free(value);

    asprintf(&value, "%lu", (unsigned long) local_rank);
    if(NULL == (param = mca_base_param_environ_variable("ns","nds","local_rank"))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_setenv(param, value, true, env);
    free(param);
    free(value);

    asprintf(&value, "%lu", (unsigned long) num_local_procs);
    if(NULL == (param = mca_base_param_environ_variable("ns","nds","num_local_procs"))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_setenv(param, value, true, env);
    free(param);
    free(value);

    /* we have to set this environmental variable so bproc will give us our rank
     * after the launch */
    putenv("BPROC_RANK=XXXXXXX");
    opal_setenv("BPROC_RANK", "XXXXXXX", true, env);

    return ORTE_SUCCESS;
}

/**
 * sets up the environment so that a process launched with the xcpu launcher
 * can figure out its name
 * @param job the job the process belongs to
 * @param vpid_start the starting vpid for the current parallel launch
 * @param num_procs the number of user processes in the job
 * @param local_rank the rank of this process on its node within its job
 * @param num_local_procs the number of procs from this job on the node
 * @param env a pointer to the environment to setup
 * @retval ORTE_SUCCESS
 * @retval error
 */
int orte_ns_nds_xcpu_put(orte_jobid_t job,
                         orte_vpid_t vpid_start, orte_std_cntr_t num_procs,
                         orte_vpid_t local_rank,
                         orte_std_cntr_t num_local_procs,
                         char ***env)
{
    char* param;
    char* value;
    int rc;

    /* set the mode to xcpu */
    if(NULL == (param = mca_base_param_environ_variable("ns","nds",NULL))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_setenv(param, "xcpu", true, env);
    free(param);

    /* since we want to pass the name as separate components, make sure
     * that the "name" environmental variable is cleared!
     */
    if(NULL == (param = mca_base_param_environ_variable("ns","nds","name"))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_unsetenv(param, env);
    free(param);

    /* setup the name */
    if(ORTE_SUCCESS != (rc = orte_ns.convert_jobid_to_string(&value, job))) {
        ORTE_ERROR_LOG(rc);
        return rc;
    }
    if(NULL == (param = mca_base_param_environ_variable("ns","nds","jobid"))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_setenv(param, value, true, env);
    free(param);
    free(value);

    rc = orte_ns.convert_vpid_to_string(&value, vpid_start);
    if (ORTE_SUCCESS != rc) {
        ORTE_ERROR_LOG(rc);
        return(rc);
    }
    if(NULL == (param = mca_base_param_environ_variable("ns","nds","vpid_start"))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_setenv(param, value, true, env);
    free(param);
    free(value);

    asprintf(&value, "%d", (int)num_procs);
    if(NULL == (param = mca_base_param_environ_variable("ns","nds","num_procs"))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_setenv(param, value, true, env);
    free(param);
    free(value);

    asprintf(&value, "%lu", (unsigned long) local_rank);
    if(NULL == (param = mca_base_param_environ_variable("ns","nds","local_rank"))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_setenv(param, value, true, env);
    free(param);
    free(value);

    asprintf(&value, "%lu", (unsigned long) num_local_procs);
    if(NULL == (param = mca_base_param_environ_variable("ns","nds","num_local_procs"))) {
        ORTE_ERROR_LOG(ORTE_ERR_OUT_OF_RESOURCE);
        return ORTE_ERR_OUT_OF_RESOURCE;
    }
    opal_setenv(param, value, true, env);
    free(param);
    free(value);

    return ORTE_SUCCESS;
}

int orte_ns_nds_pipe_put(const orte_process_name_t* name,
                         orte_std_cntr_t num_procs,
                         orte_vpid_t local_rank,
                         orte_std_cntr_t num_local_procs,
                         int fd)
{
    int rc;

    rc = write(fd,name,sizeof(orte_process_name_t));
    if(rc != sizeof(orte_process_name_t)) {
        ORTE_ERROR_LOG(ORTE_ERR_BAD_PARAM);
        return ORTE_ERR_NOT_FOUND;
    }

    rc = write(fd,&num_procs, sizeof(num_procs));
    if(rc != sizeof(num_procs)) {
        ORTE_ERROR_LOG(ORTE_ERR_BAD_PARAM);
        return ORTE_ERR_NOT_FOUND;
    }

    rc = write(fd,&local_rank, sizeof(local_rank));
    if(rc != sizeof(local_rank)) {
        ORTE_ERROR_LOG(ORTE_ERR_BAD_PARAM);
        return ORTE_ERR_NOT_FOUND;
    }

    rc = write(fd,&num_local_procs, sizeof(num_local_procs));
    if(rc != sizeof(num_local_procs)) {
        ORTE_ERROR_LOG(ORTE_ERR_BAD_PARAM);
        return ORTE_ERR_NOT_FOUND;
    }

    return ORTE_SUCCESS;
}
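
/*
 * Illustrative sketch (hypothetical, not part of the original file and not
 * compiled here): the receiving end of the pipe protocol above.  A launched
 * process would read back the same fixed-size fields, in the same order in
 * which orte_ns_nds_pipe_put() wrote them.  The real decoding lives in the
 * corresponding sds component; the function name and error handling here
 * are made up for illustration only.
 */
#if 0
static int example_pipe_get(int fd, orte_process_name_t *name,
                            orte_std_cntr_t *num_procs,
                            orte_vpid_t *local_rank,
                            orte_std_cntr_t *num_local_procs)
{
    int rc;

    /* read each field in the order it was written */
    rc = read(fd, name, sizeof(*name));
    if (rc != sizeof(*name)) {
        return ORTE_ERR_NOT_FOUND;
    }
    rc = read(fd, num_procs, sizeof(*num_procs));
    if (rc != sizeof(*num_procs)) {
        return ORTE_ERR_NOT_FOUND;
    }
    rc = read(fd, local_rank, sizeof(*local_rank));
    if (rc != sizeof(*local_rank)) {
        return ORTE_ERR_NOT_FOUND;
    }
    rc = read(fd, num_local_procs, sizeof(*num_local_procs));
    if (rc != sizeof(*num_local_procs)) {
        return ORTE_ERR_NOT_FOUND;
    }
    return ORTE_SUCCESS;
}
#endif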