openmpi/orte/runtime/orte_globals.c
/*
* Copyright (c) 2004-2010 The Trustees of Indiana University and Indiana
* University Research and Technology
* Corporation. All rights reserved.
* Copyright (c) 2004-2011 The University of Tennessee and The University
* of Tennessee Research Foundation. All rights
* reserved.
* Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
* University of Stuttgart. All rights reserved.
* Copyright (c) 2004-2005 The Regents of the University of California.
* All rights reserved.
* Copyright (c) 2007-2011 Cisco Systems, Inc. All rights reserved.
* Copyright (c) 2009-2010 Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011-2013 Los Alamos National Security, LLC.
* All rights reserved.
* Copyright (c) 2013-2014 Intel, Inc. All rights reserved
* $COPYRIGHT$
*
* Additional copyrights may follow
*
* $HEADER$
*/
#include "orte_config.h"
#include "orte/constants.h"
#include "orte/types.h"
#ifdef HAVE_SYS_TIME_H
#include <sys/time.h>
#endif
#include "opal/mca/dstore/dstore.h"
#include "opal/mca/hwloc/hwloc.h"
#include "opal/util/argv.h"
#include "opal/util/output.h"
#include "opal/class/opal_hash_table.h"
#include "opal/class/opal_pointer_array.h"
#include "opal/class/opal_value_array.h"
#include "opal/dss/dss.h"
#include "opal/threads/threads.h"
#include "orte/mca/errmgr/errmgr.h"
#include "orte/mca/rml/rml.h"
#include "orte/util/proc_info.h"
#include "orte/util/name_fns.h"
#include "orte/runtime/runtime.h"
#include "orte/runtime/runtime_internals.h"
#include "orte/runtime/orte_globals.h"
/* need the data type support functions here */
#include "orte/runtime/data_type_support/orte_dt_support.h"
/* State Machine */
opal_list_t orte_job_states;
opal_list_t orte_proc_states;
/* a clean output channel without prefix */
int orte_clean_output = -1;
/* globals used by RTE */
bool orte_timing;
FILE *orte_timing_output = NULL;
bool orte_timing_details;
bool orte_debug_daemons_file_flag = false;
bool orte_leave_session_attached;
bool orte_do_not_launch = false;
bool orted_spin_flag = false;
char *orte_local_cpu_type = NULL;
char *orte_local_cpu_model = NULL;
char *orte_basename = NULL;
bool orte_coprocessors_detected = false;
opal_hash_table_t *orte_coprocessors = NULL;
/* ORTE OOB port flags */
bool orte_static_ports = false;
char *orte_oob_static_ports = NULL;
bool orte_standalone_operation = false;
bool orte_keep_fqdn_hostnames = false;
bool orte_have_fqdn_allocation = false;
bool orte_show_resolved_nodenames;
bool orte_retain_aliases;
int orte_use_hostname_alias;
int orted_debug_failure;
int orted_debug_failure_delay;
bool orte_homogeneous_nodes = false;
bool orte_hetero_apps = false;
bool orte_hetero_nodes = false;
bool orte_never_launched = false;
bool orte_devel_level_output = false;
bool orte_display_topo_with_map = false;
bool orte_display_diffable_output = false;
char **orte_launch_environ;
bool orte_hnp_is_allocated = false;
bool orte_allocation_required;
bool orte_managed_allocation = false;
char *orte_set_slots = NULL;
bool orte_display_allocation;
bool orte_display_devel_allocation;
bool orte_soft_locations = false;
/* launch agents */
char *orte_launch_agent = NULL;
char **orted_cmd_line=NULL;
char **orte_fork_agent=NULL;
/* debugger job */
bool orte_debugger_dump_proctable;
char *orte_debugger_test_daemon;
bool orte_debugger_test_attach;
int orte_debugger_check_rate;
/* exit flags */
int orte_exit_status = 0;
bool orte_abnormal_term_ordered = false;
bool orte_routing_is_enabled = true;
bool orte_job_term_ordered = false;
bool orte_orteds_term_ordered = false;
bool orte_allowed_exit_without_sync = false;
int orte_startup_timeout;
int orte_timeout_usec_per_proc;
float orte_max_timeout;
orte_timer_t *orte_mpiexec_timeout = NULL;
opal_buffer_t *orte_tree_launch_cmd = NULL;
/* global arrays for data storage */
opal_pointer_array_t *orte_job_data;
opal_pointer_array_t *orte_node_pool;
opal_pointer_array_t *orte_node_topologies;
opal_pointer_array_t *orte_local_children;
orte_vpid_t orte_total_procs = 0;
/* IOF controls */
bool orte_tag_output;
bool orte_timestamp_output;
char *orte_output_filename;
/* generate new xterm windows to display output from specified ranks */
char *orte_xterm;
/* whether or not to forward SIGTSTP and SIGCONT signals */
bool orte_forward_job_control;
/* report launch progress */
bool orte_report_launch_progress = false;
/* allocation specification */
char *orte_default_hostfile = NULL;
bool orte_default_hostfile_given = false;
char *orte_rankfile = NULL;
int orte_num_allocated_nodes = 0;
char *orte_node_regex = NULL;
/* tool communication controls */
bool orte_report_events = false;
char *orte_report_events_uri = NULL;
/* report bindings */
bool orte_report_bindings = false;
/* barrier control */
bool orte_do_not_barrier = false;
/* process recovery */
bool orte_enable_recovery;
int32_t orte_max_restarts;
/* exit status reporting */
bool orte_report_child_jobs_separately;
struct timeval orte_child_time_to_exit;
bool orte_abort_non_zero_exit;
/* length of stat history to keep */
int orte_stat_history_size;
/* envars to forward */
char *orte_forward_envars = NULL;
char **orte_forwarded_envars = NULL;
/* map-reduce mode */
bool orte_map_reduce = false;
bool orte_staged_execution = false;
/* map stddiag output to stderr so it isn't forwarded to mpirun */
bool orte_map_stddiag_to_stderr = false;
/* maximum size of virtual machine - used to subdivide allocation */
int orte_max_vm_size = -1;
/* progress thread */
opal_thread_t orte_progress_thread;
/* global nidmap/pidmap for daemons to give to apps */
opal_byte_object_t orte_nidmap;
opal_byte_object_t orte_pidmap;
/* user debugger */
char *orte_base_user_debugger = NULL;
int orte_debug_output = -1;
bool orte_debug_daemons_flag = false;
bool orte_xml_output = false;
FILE *orte_xml_fp = NULL;
char *orte_job_ident = NULL;
bool orte_execute_quiet = false;
bool orte_report_silent_errors = false;
/* See comment in orte/tools/orterun/debuggers.c about this MCA param */
bool orte_in_parallel_debugger = false;
char *orte_daemon_cores = NULL;
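/* register the ORTE data types (names, jobs, procs, nodes, states, etc.) with the OPAL DSS */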
int orte_dt_init(void)
{
int rc;
opal_data_type_t tmp;
/* set default output */
orte_debug_output = opal_output_open(NULL);
/* open up the verbose output for ORTE debugging */
if (orte_debug_flag || 0 < orte_debug_verbosity ||
(orte_debug_daemons_flag && (ORTE_PROC_IS_DAEMON || ORTE_PROC_IS_HNP))) {
if (0 < orte_debug_verbosity) {
opal_output_set_verbosity(orte_debug_output, orte_debug_verbosity);
} else {
opal_output_set_verbosity(orte_debug_output, 1);
}
}
/** register the base system types with the DSS */
tmp = ORTE_STD_CNTR;
if (ORTE_SUCCESS != (rc = opal_dss.register_type(orte_dt_pack_std_cntr,
orte_dt_unpack_std_cntr,
(opal_dss_copy_fn_t)orte_dt_copy_std_cntr,
(opal_dss_compare_fn_t)orte_dt_compare_std_cntr,
(opal_dss_print_fn_t)orte_dt_std_print,
OPAL_DSS_UNSTRUCTURED,
"ORTE_STD_CNTR", &tmp))) {
ORTE_ERROR_LOG(rc);
return rc;
}
tmp = ORTE_NAME;
if (ORTE_SUCCESS != (rc = opal_dss.register_type(orte_dt_pack_name,
orte_dt_unpack_name,
(opal_dss_copy_fn_t)orte_dt_copy_name,
(opal_dss_compare_fn_t)orte_dt_compare_name,
(opal_dss_print_fn_t)orte_dt_print_name,
OPAL_DSS_UNSTRUCTURED,
"ORTE_NAME", &tmp))) {
ORTE_ERROR_LOG(rc);
return rc;
}
tmp = ORTE_VPID;
if (ORTE_SUCCESS != (rc = opal_dss.register_type(orte_dt_pack_vpid,
orte_dt_unpack_vpid,
(opal_dss_copy_fn_t)orte_dt_copy_vpid,
(opal_dss_compare_fn_t)orte_dt_compare_vpid,
(opal_dss_print_fn_t)orte_dt_std_print,
OPAL_DSS_UNSTRUCTURED,
"ORTE_VPID", &tmp))) {
ORTE_ERROR_LOG(rc);
return rc;
}
tmp = ORTE_JOBID;
if (ORTE_SUCCESS != (rc = opal_dss.register_type(orte_dt_pack_jobid,
orte_dt_unpack_jobid,
(opal_dss_copy_fn_t)orte_dt_copy_jobid,
(opal_dss_compare_fn_t)orte_dt_compare_jobid,
(opal_dss_print_fn_t)orte_dt_std_print,
OPAL_DSS_UNSTRUCTURED,
"ORTE_JOBID", &tmp))) {
ORTE_ERROR_LOG(rc);
return rc;
}
tmp = ORTE_JOB;
if (ORTE_SUCCESS != (rc = opal_dss.register_type(orte_dt_pack_job,
orte_dt_unpack_job,
(opal_dss_copy_fn_t)orte_dt_copy_job,
(opal_dss_compare_fn_t)orte_dt_compare_job,
(opal_dss_print_fn_t)orte_dt_print_job,
OPAL_DSS_STRUCTURED,
"ORTE_JOB", &tmp))) {
ORTE_ERROR_LOG(rc);
return rc;
}
tmp = ORTE_NODE;
if (ORTE_SUCCESS != (rc = opal_dss.register_type(orte_dt_pack_node,
orte_dt_unpack_node,
(opal_dss_copy_fn_t)orte_dt_copy_node,
(opal_dss_compare_fn_t)orte_dt_compare_node,
(opal_dss_print_fn_t)orte_dt_print_node,
OPAL_DSS_STRUCTURED,
"ORTE_NODE", &tmp))) {
ORTE_ERROR_LOG(rc);
return rc;
}
tmp = ORTE_PROC;
if (ORTE_SUCCESS != (rc = opal_dss.register_type(orte_dt_pack_proc,
orte_dt_unpack_proc,
(opal_dss_copy_fn_t)orte_dt_copy_proc,
(opal_dss_compare_fn_t)orte_dt_compare_proc,
(opal_dss_print_fn_t)orte_dt_print_proc,
OPAL_DSS_STRUCTURED,
"ORTE_PROC", &tmp))) {
ORTE_ERROR_LOG(rc);
return rc;
}
tmp = ORTE_APP_CONTEXT;
if (ORTE_SUCCESS != (rc = opal_dss.register_type(orte_dt_pack_app_context,
orte_dt_unpack_app_context,
(opal_dss_copy_fn_t)orte_dt_copy_app_context,
(opal_dss_compare_fn_t)orte_dt_compare_app_context,
(opal_dss_print_fn_t)orte_dt_print_app_context,
OPAL_DSS_STRUCTURED,
"ORTE_APP_CONTEXT", &tmp))) {
ORTE_ERROR_LOG(rc);
return rc;
}
tmp = ORTE_NODE_STATE;
if (ORTE_SUCCESS != (rc = opal_dss.register_type(orte_dt_pack_node_state,
orte_dt_unpack_node_state,
(opal_dss_copy_fn_t)orte_dt_copy_node_state,
(opal_dss_compare_fn_t)orte_dt_compare_node_state,
(opal_dss_print_fn_t)orte_dt_std_print,
OPAL_DSS_UNSTRUCTURED,
"ORTE_NODE_STATE", &tmp))) {
ORTE_ERROR_LOG(rc);
return rc;
}
tmp = ORTE_PROC_STATE;
if (ORTE_SUCCESS != (rc = opal_dss.register_type(orte_dt_pack_proc_state,
orte_dt_unpack_proc_state,
(opal_dss_copy_fn_t)orte_dt_copy_proc_state,
(opal_dss_compare_fn_t)orte_dt_compare_proc_state,
(opal_dss_print_fn_t)orte_dt_std_print,
OPAL_DSS_UNSTRUCTURED,
"ORTE_PROC_STATE", &tmp))) {
ORTE_ERROR_LOG(rc);
return rc;
}
tmp = ORTE_JOB_STATE;
if (ORTE_SUCCESS != (rc = opal_dss.register_type(orte_dt_pack_job_state,
orte_dt_unpack_job_state,
(opal_dss_copy_fn_t)orte_dt_copy_job_state,
(opal_dss_compare_fn_t)orte_dt_compare_job_state,
(opal_dss_print_fn_t)orte_dt_std_print,
OPAL_DSS_UNSTRUCTURED,
"ORTE_JOB_STATE", &tmp))) {
ORTE_ERROR_LOG(rc);
return rc;
}
tmp = ORTE_EXIT_CODE;
if (ORTE_SUCCESS != (rc = opal_dss.register_type(orte_dt_pack_exit_code,
orte_dt_unpack_exit_code,
(opal_dss_copy_fn_t)orte_dt_copy_exit_code,
(opal_dss_compare_fn_t)orte_dt_compare_exit_code,
(opal_dss_print_fn_t)orte_dt_std_print,
OPAL_DSS_UNSTRUCTURED,
"ORTE_EXIT_CODE", &tmp))) {
ORTE_ERROR_LOG(rc);
return rc;
}
tmp = ORTE_JOB_MAP;
if (ORTE_SUCCESS != (rc = opal_dss.register_type(orte_dt_pack_map,
orte_dt_unpack_map,
(opal_dss_copy_fn_t)orte_dt_copy_map,
(opal_dss_compare_fn_t)orte_dt_compare_map,
(opal_dss_print_fn_t)orte_dt_print_map,
OPAL_DSS_STRUCTURED,
"ORTE_JOB_MAP", &tmp))) {
ORTE_ERROR_LOG(rc);
return rc;
}
tmp = ORTE_RML_TAG;
if (ORTE_SUCCESS != (rc = opal_dss.register_type(orte_dt_pack_tag,
orte_dt_unpack_tag,
(opal_dss_copy_fn_t)orte_dt_copy_tag,
(opal_dss_compare_fn_t)orte_dt_compare_tags,
(opal_dss_print_fn_t)orte_dt_std_print,
OPAL_DSS_UNSTRUCTURED,
"ORTE_RML_TAG", &tmp))) {
ORTE_ERROR_LOG(rc);
return rc;
}
tmp = ORTE_DAEMON_CMD;
if (ORTE_SUCCESS != (rc = opal_dss.register_type(orte_dt_pack_daemon_cmd,
orte_dt_unpack_daemon_cmd,
(opal_dss_copy_fn_t)orte_dt_copy_daemon_cmd,
(opal_dss_compare_fn_t)orte_dt_compare_daemon_cmd,
(opal_dss_print_fn_t)orte_dt_std_print,
OPAL_DSS_UNSTRUCTURED,
"ORTE_DAEMON_CMD", &tmp))) {
ORTE_ERROR_LOG(rc);
return rc;
}
tmp = ORTE_IOF_TAG;
if (ORTE_SUCCESS != (rc = opal_dss.register_type(orte_dt_pack_iof_tag,
orte_dt_unpack_iof_tag,
(opal_dss_copy_fn_t)orte_dt_copy_iof_tag,
(opal_dss_compare_fn_t)orte_dt_compare_iof_tag,
(opal_dss_print_fn_t)orte_dt_std_print,
OPAL_DSS_UNSTRUCTURED,
"ORTE_IOF_TAG", &tmp))) {
ORTE_ERROR_LOG(rc);
return rc;
}
return ORTE_SUCCESS;
}
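/* lookup the orte_job_t object for a jobid in the global orte_job_data array */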
orte_job_t* orte_get_job_data_object(orte_jobid_t job)
{
int32_t ljob;
/* if the job data wasn't setup, we cannot provide the data */
if (NULL == orte_job_data) {
return NULL;
}
/* the job is indexed by its local jobid, so we can
* just look it up here. it is not an error for this
* to not be found - could just be
* a race condition whereby the job has already been
* removed from the array. The get_item function
* will just return NULL in that case.
*/
ljob = ORTE_LOCAL_JOBID(job);
return (orte_job_t*)opal_pointer_array_get_item(orte_job_data, ljob);
}
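/* lookup the orte_proc_t object for a process name via its job data */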
orte_proc_t* orte_get_proc_object(orte_process_name_t *proc)
{
orte_job_t *jdata;
orte_proc_t *proct;
if (NULL == (jdata = orte_get_job_data_object(proc->jobid))) {
return NULL;
}
proct = (orte_proc_t*)opal_pointer_array_get_item(jdata->procs, proc->vpid);
return proct;
}
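/* return the vpid of the daemon hosting the given proc, or ORTE_VPID_INVALID if unknown */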
orte_vpid_t orte_get_proc_daemon_vpid(orte_process_name_t *proc)
{
orte_job_t *jdata;
orte_proc_t *proct;
if (NULL == (jdata = orte_get_job_data_object(proc->jobid))) {
return ORTE_VPID_INVALID;
}
if (NULL == (proct = (orte_proc_t*)opal_pointer_array_get_item(jdata->procs, proc->vpid))) {
return ORTE_VPID_INVALID;
}
if (NULL == proct->node || NULL == proct->node->daemon) {
return ORTE_VPID_INVALID;
}
return proct->node->daemon->name.vpid;
}
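/* return the hostname of the node where the given proc resides - daemons and the HNP
* look it up in their local arrays, apps fetch it from the modex database */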
char* orte_get_proc_hostname(orte_process_name_t *proc)
{
orte_proc_t *proct;
char *hostname;
int rc;
opal_list_t myvals;
opal_value_t *kv;
if (ORTE_PROC_IS_DAEMON || ORTE_PROC_IS_HNP) {
/* look it up on our arrays */
if (NULL == (proct = orte_get_proc_object(proc))) {
ORTE_ERROR_LOG(ORTE_ERR_NOT_FOUND);
return NULL;
}
if (NULL == proct->node || NULL == proct->node->name) {
ORTE_ERROR_LOG(ORTE_ERR_NOT_FOUND);
return NULL;
}
return proct->node->name;
}
/* if we are an app, get the data from the modex db */
OBJ_CONSTRUCT(&myvals, opal_list_t);
if (ORTE_SUCCESS != (rc = opal_dstore.fetch(opal_dstore_internal,
(opal_identifier_t*)proc,
ORTE_DB_HOSTNAME,
&myvals))) {
ORTE_ERROR_LOG(rc);
OPAL_LIST_DESTRUCT(&myvals);
return NULL;
}
kv = (opal_value_t*)opal_list_get_first(&myvals);
hostname = kv->data.string;
/* protect the data */
kv->data.string = NULL;
OPAL_LIST_DESTRUCT(&myvals);
/* user is responsible for releasing the data */
return hostname;
}
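/* return the node rank of the given proc - daemons and the HNP use their local arrays,
* apps fetch it from the modex database */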
orte_node_rank_t orte_get_proc_node_rank(orte_process_name_t *proc)
{
orte_proc_t *proct;
orte_node_rank_t noderank;
int rc;
opal_list_t myvals;
opal_value_t *kv;
if (ORTE_PROC_IS_DAEMON || ORTE_PROC_IS_HNP) {
/* look it up on our arrays */
if (NULL == (proct = orte_get_proc_object(proc))) {
ORTE_ERROR_LOG(ORTE_ERR_NOT_FOUND);
return ORTE_NODE_RANK_INVALID;
}
return proct->node_rank;
}
/* if we are an app, get the value from the modex db */
OBJ_CONSTRUCT(&myvals, opal_list_t);
if (ORTE_SUCCESS != (rc = opal_dstore.fetch(opal_dstore_internal,
(opal_identifier_t*)proc,
ORTE_DB_NODERANK,
&myvals))) {
ORTE_ERROR_LOG(rc);
OPAL_LIST_DESTRUCT(&myvals);
return ORTE_NODE_RANK_INVALID;
}
kv = (opal_value_t*)opal_list_get_first(&myvals);
noderank = kv->data.uint16;
OPAL_LIST_DESTRUCT(&myvals);
return noderank;
}
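/* return the vpid of the lowest-ranked proc in the job that is still alive (RUNNING),
* or ORTE_VPID_INVALID if none found */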
orte_vpid_t orte_get_lowest_vpid_alive(orte_jobid_t job)
{
int i;
orte_job_t *jdata;
orte_proc_t *proc;
if (NULL == (jdata = orte_get_job_data_object(job))) {
return ORTE_VPID_INVALID;
}
if (ORTE_PROC_IS_DAEMON &&
ORTE_PROC_MY_NAME->jobid == job &&
NULL != orte_process_info.my_hnp_uri) {
/* if we were started by an HNP, then the lowest vpid
* is always 1
*/
return 1;
}
for (i=0; i < jdata->procs->size; i++) {
if (NULL == (proc = (orte_proc_t*)opal_pointer_array_get_item(jdata->procs, i))) {
continue;
}
if (proc->state == ORTE_PROC_STATE_RUNNING) {
/* must be lowest one alive */
return proc->name.vpid;
}
}
/* only get here if no live proc found */
return ORTE_VPID_INVALID;
}
/*
* CONSTRUCTORS, DESTRUCTORS, AND CLASS INSTANTIATIONS
* FOR ORTE CLASSES
*/
static void orte_app_context_construct(orte_app_context_t* app_context)
{
app_context->idx=0;
app_context->app=NULL;
app_context->num_procs=0;
OBJ_CONSTRUCT(&app_context->procs, opal_pointer_array_t);
opal_pointer_array_init(&app_context->procs,
1,
ORTE_GLOBAL_ARRAY_MAX_SIZE,
16);
app_context->state = ORTE_APP_STATE_UNDEF;
app_context->first_rank = 0;
app_context->argv=NULL;
app_context->env=NULL;
app_context->cwd=NULL;
app_context->user_specified_cwd=false;
app_context->set_cwd_to_session_dir = false;
app_context->hostfile=NULL;
app_context->add_hostfile=NULL;
app_context->add_host = NULL;
app_context->dash_host = NULL;
OBJ_CONSTRUCT(&app_context->resource_constraints, opal_list_t);
app_context->prefix_dir = NULL;
app_context->preload_binary = false;
app_context->preload_files = NULL;
app_context->used_on_node = false;
#if OPAL_ENABLE_FT_CR == 1
app_context->sstore_load = NULL;
#endif
app_context->recovery_defined = false;
app_context->max_restarts = -1000;
app_context->max_procs_per_node = 0;
app_context->mandatory = false;
app_context->min_number_of_nodes = -1; /* no minimum */
}
static void orte_app_context_destructor(orte_app_context_t* app_context)
{
opal_list_item_t *item;
int i;
orte_proc_t *proc;
if (NULL != app_context->app) {
free (app_context->app);
app_context->app = NULL;
}
for (i=0; i < app_context->procs.size; i++) {
if (NULL != (proc = (orte_proc_t*)opal_pointer_array_get_item(&app_context->procs, i))) {
OBJ_RELEASE(proc);
}
}
OBJ_DESTRUCT(&app_context->procs);
/* argv and env lists created by util/argv copy functions */
if (NULL != app_context->argv) {
opal_argv_free(app_context->argv);
app_context->argv = NULL;
}
if (NULL != app_context->env) {
opal_argv_free(app_context->env);
app_context->env = NULL;
}
if (NULL != app_context->cwd) {
free (app_context->cwd);
app_context->cwd = NULL;
}
if (NULL != app_context->hostfile) {
free(app_context->hostfile);
app_context->hostfile = NULL;
}
if (NULL != app_context->add_hostfile) {
free(app_context->add_hostfile);
app_context->add_hostfile = NULL;
}
if (NULL != app_context->add_host) {
opal_argv_free(app_context->add_host);
app_context->add_host = NULL;
}
if (NULL != app_context->dash_host) {
opal_argv_free(app_context->dash_host);
app_context->dash_host = NULL;
}
while (NULL != (item = opal_list_remove_first(&app_context->resource_constraints))) {
OBJ_RELEASE(item);
}
OBJ_DESTRUCT(&app_context->resource_constraints);
if (NULL != app_context->prefix_dir) {
free(app_context->prefix_dir);
app_context->prefix_dir = NULL;
}
app_context->preload_binary = false;
if(NULL != app_context->preload_files) {
free(app_context->preload_files);
app_context->preload_files = NULL;
}
#if OPAL_ENABLE_FT_CR == 1
if( NULL != app_context->sstore_load ) {
free(app_context->sstore_load);
app_context->sstore_load = NULL;
}
#endif
}
OBJ_CLASS_INSTANCE(orte_app_context_t,
opal_object_t,
orte_app_context_construct,
orte_app_context_destructor);
static void orte_job_construct(orte_job_t* job)
{
job->jobid = ORTE_JOBID_INVALID;
job->offset = 0;
job->updated = true;
job->apps = OBJ_NEW(opal_pointer_array_t);
opal_pointer_array_init(job->apps,
1,
ORTE_GLOBAL_ARRAY_MAX_SIZE,
2);
job->num_apps = 0;
job->controls = ORTE_JOB_CONTROL_FORWARD_OUTPUT;
job->failure_timer = NULL;
job->gang_launched = true;
job->stdin_target = ORTE_VPID_INVALID;
job->stdout_target = ORTE_JOBID_INVALID;
job->total_slots_alloc = 0;
job->num_procs = 0;
job->procs = OBJ_NEW(opal_pointer_array_t);
opal_pointer_array_init(job->procs,
ORTE_GLOBAL_ARRAY_BLOCK_SIZE,
ORTE_GLOBAL_ARRAY_MAX_SIZE,
ORTE_GLOBAL_ARRAY_BLOCK_SIZE);
job->map = NULL;
job->bookmark = NULL;
job->state = ORTE_JOB_STATE_UNDEF;
job->restart = false;
job->num_mapped = 0;
job->num_launched = 0;
job->num_reported = 0;
job->num_terminated = 0;
job->num_daemons_reported = 0;
job->num_non_zero_exit = 0;
job->abort = false;
job->aborted_proc = NULL;
job->originator.jobid = ORTE_JOBID_INVALID;
job->originator.vpid = ORTE_VPID_INVALID;
job->recovery_defined = false;
job->enable_recovery = false;
job->num_local_procs = 0;
job->file_maps = NULL;
#if OPAL_ENABLE_FT_CR == 1
job->ckpt_state = 0;
job->ckpt_snapshot_ref = NULL;
job->ckpt_snapshot_loc = NULL;
#endif
}
static void orte_job_destruct(orte_job_t* job)
{
orte_proc_t *proc;
orte_app_context_t *app;
orte_job_t *jdata;
int n;
if (NULL == job) {
/* probably just a race condition - just return */
return;
}
if (orte_debug_flag) {
opal_output(0, "%s Releasing job data for %s",
ORTE_NAME_PRINT(ORTE_PROC_MY_NAME), ORTE_JOBID_PRINT(job->jobid));
}
for (n=0; n < job->apps->size; n++) {
if (NULL == (app = (orte_app_context_t*)opal_pointer_array_get_item(job->apps, n))) {
continue;
}
OBJ_RELEASE(app);
}
OBJ_RELEASE(job->apps);
if (NULL != job->failure_timer) {
OBJ_RELEASE(job->failure_timer);
}
if (NULL != job->map) {
OBJ_RELEASE(job->map);
job->map = NULL;
}
for (n=0; n < job->procs->size; n++) {
if (NULL == (proc = (orte_proc_t*)opal_pointer_array_get_item(job->procs, n))) {
continue;
}
OBJ_RELEASE(proc);
}
OBJ_RELEASE(job->procs);
if (NULL != job->file_maps) {
OBJ_RELEASE(job->file_maps);
}
#if OPAL_ENABLE_FT_CR == 1
if (NULL != job->ckpt_snapshot_ref) {
free(job->ckpt_snapshot_ref);
job->ckpt_snapshot_ref = NULL;
}
if (NULL != job->ckpt_snapshot_loc) {
free(job->ckpt_snapshot_loc);
job->ckpt_snapshot_loc = NULL;
}
#endif
/* find the job in the global array */
if (NULL != orte_job_data) {
for (n=0; n < orte_job_data->size; n++) {
if (NULL == (jdata = (orte_job_t*)opal_pointer_array_get_item(orte_job_data, n))) {
continue;
}
if (jdata->jobid == job->jobid) {
/* set the entry to NULL */
opal_pointer_array_set_item(orte_job_data, n, NULL);
break;
}
}
}
}
OBJ_CLASS_INSTANCE(orte_job_t,
opal_list_item_t,
orte_job_construct,
orte_job_destruct);
static void orte_node_construct(orte_node_t* node)
{
node->index = -1;
node->name = NULL;
node->alias = NULL;
node->serial_number = NULL;
node->hostid = ORTE_VPID_INVALID;
node->daemon = NULL;
node->daemon_launched = false;
node->location_verified = false;
node->launch_id = -1;
node->mapped = false;
node->num_procs = 0;
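/* procs hosted on this node - kept in a growable pointer array of orte_proc_t pointers */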
node->procs = OBJ_NEW(opal_pointer_array_t);
opal_pointer_array_init(node->procs,
ORTE_GLOBAL_ARRAY_BLOCK_SIZE,
ORTE_GLOBAL_ARRAY_MAX_SIZE,
ORTE_GLOBAL_ARRAY_BLOCK_SIZE);
node->next_node_rank = 0;
node->oversubscribed = false;
node->state = ORTE_NODE_STATE_UNKNOWN;
node->slots = 0;
node->slots_given = false;
node->slots_inuse = 0;
node->slots_max = 0;
node->username = NULL;
#if OPAL_HAVE_HWLOC
node->topology = NULL;
#endif
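/* resource-usage history for this node: a fixed-depth ring buffer holding
 * the last orte_stat_history_size opal_node_stats_t samples */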
OBJ_CONSTRUCT(&node->stats, opal_ring_buffer_t);
opal_ring_buffer_init(&node->stats, orte_stat_history_size);
}
static void orte_node_destruct(orte_node_t* node)
{
int i;
opal_node_stats_t *stats;
orte_proc_t *proc;
if (NULL != node->name) {
free(node->name);
node->name = NULL;
}
if (NULL != node->serial_number) {
free(node->serial_number);
node->serial_number = NULL;
}
if (NULL != node->alias) {
opal_argv_free(node->alias);
node->alias = NULL;
}
if (NULL != node->daemon) {
node->daemon->node = NULL;
OBJ_RELEASE(node->daemon);
node->daemon = NULL;
}
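/* drop this node's reference on every proc still listed in its procs array,
 * clearing each slot as we go, then release the array itself */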
for (i=0; i < node->procs->size; i++) {
if (NULL != (proc = (orte_proc_t*)opal_pointer_array_get_item(node->procs, i))) {
opal_pointer_array_set_item(node->procs, i, NULL);
OBJ_RELEASE(proc);
}
}
OBJ_RELEASE(node->procs);
/* we release the topology elsewhere */
if (NULL != node->username) {
free(node->username);
node->username = NULL;
}
/* do NOT destroy the topology */
while (NULL != (stats = (opal_node_stats_t*)opal_ring_buffer_pop(&node->stats))) {
OBJ_RELEASE(stats);
}
OBJ_DESTRUCT(&node->stats);
}
OBJ_CLASS_INSTANCE(orte_node_t,
opal_list_item_t,
orte_node_construct,
orte_node_destruct);
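/* Usage sketch (illustrative only, not part of this file): instances of this
 * class are managed through the OPAL object system registered above, e.g.
 *
 *     orte_node_t *node = OBJ_NEW(orte_node_t);   // invokes orte_node_construct
 *     ...use the node...
 *     OBJ_RELEASE(node);   // invokes orte_node_destruct once the refcount reaches zero
 */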
static void orte_proc_construct(orte_proc_t* proc)
{
proc->name = *ORTE_NAME_INVALID;
proc->pid = 0;
proc->local_rank = ORTE_LOCAL_RANK_INVALID;
proc->node_rank = ORTE_NODE_RANK_INVALID;
proc->app_rank = -1;
proc->last_errmgr_state = ORTE_PROC_STATE_UNDEF;
proc->state = ORTE_PROC_STATE_UNDEF;
proc->alive = false;
proc->aborted = false;
proc->updated = true;
proc->app_idx = 0;
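/* locality and binding information is only tracked when hwloc support is compiled in */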
#if OPAL_HAVE_HWLOC
proc->locale = NULL;
proc->bind_location = NULL;
proc->cpu_bitmap = NULL;
#endif
proc->node = NULL;
proc->local_proc = false;
proc->do_not_barrier = false;
proc->prior_node = NULL;
proc->nodename = NULL;
proc->exit_code = 0; /* Assume we won't fail unless otherwise notified */
proc->rml_uri = NULL;
proc->restarts = 0;
proc->fast_failures = 0;
proc->last_failure.tv_sec = 0;
proc->last_failure.tv_usec = 0;
proc->reported = false;
proc->beat = 0;
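/* per-proc resource-usage history, kept in the same fixed-depth ring buffer
 * fashion as the node-level stats */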
OBJ_CONSTRUCT(&proc->stats, opal_ring_buffer_t);
opal_ring_buffer_init(&proc->stats, orte_stat_history_size);
proc->registered = false;
proc->mpi_proc = false;
proc->deregistered = false;
proc->iof_complete = false;
proc->waitpid_recvd = false;
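/* checkpoint/restart state is only tracked when fault-tolerance C/R support is enabled */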
#if OPAL_ENABLE_FT_CR == 1
proc->ckpt_state = 0;
proc->ckpt_snapshot_ref = NULL;
proc->ckpt_snapshot_loc = NULL;
#endif
}
static void orte_proc_destruct(orte_proc_t* proc)
{
opal_pstats_t *stats;
/* do NOT free the nodename field as this is
* simply a pointer to a field in the
* associated node object - the node object
* will free it
*/
#if OPAL_HAVE_HWLOC
if (NULL != proc->cpu_bitmap) {
free(proc->cpu_bitmap);
}
#endif
if (NULL != proc->node) {
OBJ_RELEASE(proc->node);
proc->node = NULL;
}
if (NULL != proc->rml_uri) {
free(proc->rml_uri);
proc->rml_uri = NULL;
}
while (NULL != (stats = (opal_pstats_t*)opal_ring_buffer_pop(&proc->stats))) {
OBJ_RELEASE(stats);
}
OBJ_DESTRUCT(&proc->stats);
#if OPAL_ENABLE_FT_CR == 1
if (NULL != proc->ckpt_snapshot_ref) {
free(proc->ckpt_snapshot_ref);
proc->ckpt_snapshot_ref = NULL;
}
if (NULL != proc->ckpt_snapshot_loc) {
free(proc->ckpt_snapshot_loc);
proc->ckpt_snapshot_loc = NULL;
}
#endif
}
OBJ_CLASS_INSTANCE(orte_proc_t,
opal_list_item_t,
orte_proc_construct,
orte_proc_destruct);
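/* Note: a proc object normally keeps its own reference on the node it is placed
 * on (proc->node), which is why the destructor above releases that reference
 * rather than freeing it; the nodename string is owned by the node object. */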
static void orte_job_map_construct(orte_job_map_t* map)
{
map->req_mapper = NULL;
map->last_mapper = NULL;
map->mapping = 0;
map->ranking = 0;
#if OPAL_HAVE_HWLOC
map->binding = 0;
#endif
map->ppr = NULL;
map->cpus_per_rank = 1;
map->display_map = false;
map->num_new_daemons = 0;
map->daemon_vpid_start = ORTE_VPID_INVALID;
map->num_nodes = 0;
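/* nodes participating in this map - kept in a growable pointer array of orte_node_t pointers */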
map->nodes = OBJ_NEW(opal_pointer_array_t);
opal_pointer_array_init(map->nodes,
ORTE_GLOBAL_ARRAY_BLOCK_SIZE,
ORTE_GLOBAL_ARRAY_MAX_SIZE,
ORTE_GLOBAL_ARRAY_BLOCK_SIZE);
}
static void orte_job_map_destruct(orte_job_map_t* map)
{
orte_std_cntr_t i;
orte_node_t *node;
if (NULL != map->req_mapper) {
free(map->req_mapper);
}
if (NULL != map->last_mapper) {
free(map->last_mapper);
}
if (NULL != map->ppr) {
free(map->ppr);
}
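/* release the map's reference on each node it contains, clearing the slot,
 * then release the array itself */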
for (i=0; i < map->nodes->size; i++) {
if (NULL != (node = (orte_node_t*)opal_pointer_array_get_item(map->nodes, i))) {
OBJ_RELEASE(node);
opal_pointer_array_set_item(map->nodes, i, NULL);
}
}
OBJ_RELEASE(map->nodes);
}
OBJ_CLASS_INSTANCE(orte_job_map_t,
opal_object_t,
orte_job_map_construct,
orte_job_map_destruct);
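/* Usage sketch (illustrative only): a job map is normally attached to an
 * orte_job_t and torn down with it, e.g.
 *
 *     jdata->map = OBJ_NEW(orte_job_map_t);   // runs orte_job_map_construct
 *     ...mapping performed by the rmaps framework...
 *     OBJ_RELEASE(jdata->map);                // runs orte_job_map_destruct, dropping node references
 *
 * where jdata is a pointer to an orte_job_t.
 */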