openmpi/ompi/runtime/ompi_cr.c
Josh Hursey e12ca48cd9 A number of C/R enhancements per RFC below:
http://www.open-mpi.org/community/lists/devel/2010/07/8240.php

Documentation:
  http://osl.iu.edu/research/ft/

Major Changes: 
-------------- 
 * Added C/R-enabled Debugging support. 
   Enabled with the --enable-crdebug flag. See the following website for more information: 
   http://osl.iu.edu/research/ft/crdebug/ 
 * Added Stable Storage (SStore) framework for checkpoint storage 
   * 'central' component does a direct to central storage save 
   * 'stage' component stages checkpoints to central storage while the application continues execution. 
     * 'stage' supports offline compression of checkpoints before moving (sstore_stage_compress) 
     * 'stage' supports local caching of checkpoints to improve automatic recovery (sstore_stage_caching) 
 * Added Compression (compress) framework to support the compression of checkpoints before storage
 * Add two new ErrMgr recovery policies 
   * {{{crmig}}} C/R Process Migration 
   * {{{autor}}} C/R Automatic Recovery 
 * Added the {{{ompi-migrate}}} command line tool to support the {{{crmig}}} ErrMgr component 
 * Added CR MPI Ext functions (enable them with {{{--enable-mpi-ext=cr}}} configure option) 
   * {{{OMPI_CR_Checkpoint}}} (Fixes trac:2342) 
   * {{{OMPI_CR_Restart}}} 
   * {{{OMPI_CR_Migrate}}} (may need some more work for mapping rules) 
   * {{{OMPI_CR_INC_register_callback}}} (Fixes trac:2192) 
   * {{{OMPI_CR_Quiesce_start}}} 
   * {{{OMPI_CR_Quiesce_checkpoint}}} 
   * {{{OMPI_CR_Quiesce_end}}} 
   * {{{OMPI_CR_self_register_checkpoint_callback}}} 
   * {{{OMPI_CR_self_register_restart_callback}}} 
   * {{{OMPI_CR_self_register_continue_callback}}} 
 * The ErrMgr predicted_fault() interface has been changed to take an opal_list_t of ErrMgr defined types. This will allow us to better support a wider range of fault prediction services in the future. 
 * Add a progress meter to: 
   * FileM rsh (filem_rsh_process_meter) 
   * SnapC full (snapc_full_progress_meter) 
   * SStore stage (sstore_stage_progress_meter) 
 * Added 2 new command line options to ompi-restart 
   * --showme : Display the full command line that would have been exec'ed. 
   * --mpirun_opts : Command line options to pass directly to mpirun. (Fixes trac:2413) 
 * Deprecated some MCA params: 
   * crs_base_snapshot_dir deprecated, use sstore_stage_local_snapshot_dir 
   * snapc_base_global_snapshot_dir deprecated, use sstore_base_global_snapshot_dir 
   * snapc_base_global_shared deprecated, use sstore_stage_global_is_shared 
   * snapc_base_store_in_place deprecated, replaced with different components of SStore 
   * snapc_base_global_snapshot_ref deprecated, use sstore_base_global_snapshot_ref 
   * snapc_base_establish_global_snapshot_dir deprecated, never well supported 
   * snapc_full_skip_filem deprecated, use sstore_stage_skip_filem 

Minor Changes: 
-------------- 
 * Fixes trac:1924 : {{{ompi-restart}}} now recognizes path prefixed checkpoint handles and does the right thing. 
 * Fixes trac:2097 : {{{ompi-info}}} should now report all available CRS components 
 * Fixes trac:2161 : Manual checkpoint movement. A user can 'mv' a checkpoint directory from the original location to another and still restart from it. 
 * Fixes trac:2208 : Honor various TMPDIR variables instead of forcing {{{/tmp}}} 
 * Move {{{ompi_cr_continue_like_restart}}} to {{{orte_cr_continue_like_restart}}} to be more flexible in where this should be set. 
 * opal_crs_base_metadata_write* functions have been moved to SStore to support a wider range of metadata handling functionality. 
 * Cleanup the CRS framework and components to work with the SStore framework. 
 * Cleanup the SnapC framework and components to work with the SStore framework (cleans up these code paths considerably). 
 * Add 'quiesce' hook to CRCP for a future enhancement. 
 * We now require a BLCR version that supports {{{cr_request_file()}}} or {{{cr_request_checkpoint()}}} in order to make the code more maintainable. Note that {{{cr_request_file}}} has been deprecated since 0.7.0, so we prefer to use {{{cr_request_checkpoint()}}}. 
 * Add optional application level INC callbacks (registered through the CR MPI Ext interface). 
 * Increase the {{{opal_cr_thread_sleep_wait}}} parameter to 1000 microseconds to make the C/R thread less aggressive. 
 * {{{opal-restart}}} now looks for cache directories before falling back on stable storage when asked. 
 * {{{opal-restart}}} also supports local decompression before restarting 
 * {{{orte-checkpoint}}} now uses the SStore framework to work with the metadata 
 * {{{orte-restart}}} now uses the SStore framework to work with the metadata 
 * Remove the {{{orte-restart}}} preload option. This was removed since the user only needs to select the 'stage' component in order to support this functionality. 
 * Since the '-am' parameter is saved in the metadata, {{{ompi-restart}}} no longer hard codes {{{-am ft-enable-cr}}}. 
 * Fix {{{hnp}}} ErrMgr so that if a previous component in the stack has 'fixed' the problem, then it should be skipped. 
 * Make sure to decrement the number of 'num_local_procs' in the orted when one goes away. 
 * odls now checks the SStore framework to see if it needs to load any checkpoint files before launching (to support 'stage'). This separates the SStore logic from the --preload-[binary|files] options. 
 * Add unique IDs to the named pipes established between the orted and the app in SnapC. This is to better support migration and automatic recovery activities. 
 * Improve the checks for 'already checkpointing' error path. 
 * Add a recovery output timer to show how long it takes to restart a job 
 * Do a better job of cleaning up the old session directory on restart. 
 * Add a local module to the autor and crmig ErrMgr components. These small modules prevent the 'orted' component from attempting a local recovery (which does not work for MPI apps at the moment). 
 * Add a fix for bounding the checkpointable region between MPI_Init and MPI_Finalize. 

This commit was SVN r23587.

The following Trac tickets were found above:
  Ticket 1924 --> https://svn.open-mpi.org/trac/ompi/ticket/1924
  Ticket 2097 --> https://svn.open-mpi.org/trac/ompi/ticket/2097
  Ticket 2161 --> https://svn.open-mpi.org/trac/ompi/ticket/2161
  Ticket 2192 --> https://svn.open-mpi.org/trac/ompi/ticket/2192
  Ticket 2208 --> https://svn.open-mpi.org/trac/ompi/ticket/2208
  Ticket 2342 --> https://svn.open-mpi.org/trac/ompi/ticket/2342
  Ticket 2413 --> https://svn.open-mpi.org/trac/ompi/ticket/2413
2010-08-10 20:51:11 +00:00

/* -*- Mode: C; c-basic-offset:4 ; -*- */
/*
 * Copyright (c) 2004-2010 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation.  All rights reserved.
 * Copyright (c) 2004-2007 The University of Tennessee and The University
 *                         of Tennessee Research Foundation.  All rights
 *                         reserved.
 * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart.  All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

/** @file
 *
 * OMPI Layer Checkpoint/Restart Runtime functions
 *
 */
#include "ompi_config.h"
#include <errno.h>
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif /* HAVE_UNISTD_H */
#ifdef HAVE_FCNTL_H
#include <fcntl.h>
#endif /* HAVE_FCNTL_H */
#ifdef HAVE_SYS_TYPES_H
#include <sys/types.h>
#endif /* HAVE_SYS_TYPES_H */
#ifdef HAVE_SYS_STAT_H
#include <sys/stat.h> /* for mkfifo */
#endif /* HAVE_SYS_STAT_H */
#include "opal/event/event.h"
#include "opal/util/output.h"
#include "opal/mca/crs/crs.h"
#include "opal/mca/crs/base/base.h"
#include "opal/mca/installdirs/installdirs.h"
#include "opal/runtime/opal_cr.h"
#include "orte/mca/snapc/snapc.h"
#include "orte/mca/snapc/base/base.h"
#include "ompi/constants.h"
#include "ompi/mca/pml/pml.h"
#include "ompi/mca/pml/base/base.h"
#include "ompi/mca/btl/base/base.h"
#include "ompi/mca/crcp/crcp.h"
#include "ompi/mca/crcp/base/base.h"
#include "ompi/communicator/communicator.h"
#include "ompi/runtime/ompi_cr.h"
#if OPAL_ENABLE_CRDEBUG == 1
#include "orte/runtime/orte_globals.h"
#include "ompi/debuggers/debuggers.h"
#endif
#if OPAL_ENABLE_CRDEBUG == 1
OMPI_DECLSPEC int MPIR_checkpointable = 0;
OMPI_DECLSPEC char * MPIR_controller_hostname = NULL;
OMPI_DECLSPEC char * MPIR_checkpoint_command = NULL;
OMPI_DECLSPEC char * MPIR_restart_command = NULL;
OMPI_DECLSPEC char * MPIR_checkpoint_listing_command = NULL;
#endif
/*************
 * Local functions
 *************/
static int ompi_cr_coord_pre_ckpt(void);
static int ompi_cr_coord_pre_restart(void);
static int ompi_cr_coord_pre_continue(void);

static int ompi_cr_coord_post_ckpt(void);
static int ompi_cr_coord_post_restart(void);
static int ompi_cr_coord_post_continue(void);

/*************
 * Local vars
 *************/
static opal_cr_coord_callback_fn_t prev_coord_callback = NULL;

int ompi_cr_output = -1;

#define NUM_COLLECTIVES 16
#define SIGNAL(comm, modules, highest_module, msg, ret, func)           \
    do {                                                                \
        bool found = false;                                             \
        int k;                                                          \
        mca_coll_base_module_t *my_module =                             \
            comm->c_coll.coll_ ## func ## _module;                      \
        if (NULL != my_module) {                                        \
            for (k = 0 ; k < highest_module ; ++k) {                    \
                if (my_module == modules[k]) found = true;              \
            }                                                           \
            if (!found) {                                               \
                modules[highest_module++] = my_module;                  \
                if (NULL != my_module->ft_event) {                      \
                    ret = my_module->ft_event(msg);                     \
                }                                                       \
            }                                                           \
        }                                                               \
    } while (0)
static int
notify_collectives(int msg)
{
    mca_coll_base_module_t *modules[NUM_COLLECTIVES];
    int i, max, ret, highest_module = 0;

    memset(&modules, 0, sizeof(mca_coll_base_module_t*) * NUM_COLLECTIVES);

    max = opal_pointer_array_get_size(&ompi_mpi_communicators);
    for (i = 0 ; i < max ; ++i) {
        ompi_communicator_t *comm =
            (ompi_communicator_t *)opal_pointer_array_get_item(&ompi_mpi_communicators, i);
        if (NULL == comm) continue;

        SIGNAL(comm, modules, highest_module, msg, ret, allgather);
        SIGNAL(comm, modules, highest_module, msg, ret, allgatherv);
        SIGNAL(comm, modules, highest_module, msg, ret, allreduce);
        SIGNAL(comm, modules, highest_module, msg, ret, alltoall);
        SIGNAL(comm, modules, highest_module, msg, ret, alltoallv);
        SIGNAL(comm, modules, highest_module, msg, ret, alltoallw);
        SIGNAL(comm, modules, highest_module, msg, ret, barrier);
        SIGNAL(comm, modules, highest_module, msg, ret, bcast);
        SIGNAL(comm, modules, highest_module, msg, ret, exscan);
        SIGNAL(comm, modules, highest_module, msg, ret, gather);
        SIGNAL(comm, modules, highest_module, msg, ret, gatherv);
        SIGNAL(comm, modules, highest_module, msg, ret, reduce);
        SIGNAL(comm, modules, highest_module, msg, ret, reduce_scatter);
        SIGNAL(comm, modules, highest_module, msg, ret, scan);
        SIGNAL(comm, modules, highest_module, msg, ret, scatter);
        SIGNAL(comm, modules, highest_module, msg, ret, scatterv);
    }

    return OMPI_SUCCESS;
}
/*
 * CR Init
 */
int ompi_cr_init(void)
{
    int val;

    /*
     * Register some MCA parameters
     */
    mca_base_param_reg_int_name("ompi_cr", "verbose",
                                "Verbose output for the OMPI Checkpoint/Restart functionality",
                                false, false,
                                0,
                                &val);
    if(0 != val) {
        ompi_cr_output = opal_output_open(NULL);
        opal_output_set_verbosity(ompi_cr_output, val);
    } else {
        ompi_cr_output = opal_cr_output;
    }

    opal_output_verbose(10, ompi_cr_output,
                        "ompi_cr: init: ompi_cr_init()");

    /* Register the OMPI interlevel coordination callback */
    opal_cr_reg_coord_callback(ompi_cr_coord, &prev_coord_callback);

#if OPAL_ENABLE_CRDEBUG == 1
    /* Check for C/R enabled debugging */
    if( MPIR_debug_with_checkpoint ) {
        char *uri = NULL;
        char *sep = NULL;
        char *hostname = NULL;

        /* Mark as debuggable with C/R */
        MPIR_checkpointable = 1;

        /* Set the checkpoint and restart commands */
        /* Add the full path to the binary */
        asprintf(&MPIR_checkpoint_command,
                 "%s/ompi-checkpoint --crdebug --hnp-jobid %u",
                 opal_install_dirs.bindir,
                 ORTE_PROC_MY_HNP->jobid);
        asprintf(&MPIR_restart_command,
                 "%s/ompi-restart --crdebug ",
                 opal_install_dirs.bindir);
        asprintf(&MPIR_checkpoint_listing_command,
                 "%s/ompi-checkpoint -l --crdebug ",
                 opal_install_dirs.bindir);

        /* Set contact information for HNP */
        uri = strdup(orte_process_info.my_hnp_uri);
        hostname = strchr(uri, ';') + 1;
        sep = strchr(hostname, ';');
        if (sep) {
            *sep = 0;
        }
        if (strncmp(hostname, "tcp://", 6) == 0) {
            hostname += 6;
            sep = strchr(hostname, ':');
            *sep = 0;
            MPIR_controller_hostname = strdup(hostname);
        } else {
            MPIR_controller_hostname = strdup("localhost");
        }

        /* Cleanup */
        if( NULL != uri ) {
            free(uri);
            uri = NULL;
        }
    }
#endif

    return OMPI_SUCCESS;
}
/*
 * Finalize
 */
int ompi_cr_finalize(void)
{
    opal_output_verbose(10, ompi_cr_output,
                        "ompi_cr: finalize: ompi_cr_finalize()");

    return OMPI_SUCCESS;
}
/*
 * Interlayer coordination callback
 */
int ompi_cr_coord(int state)
{
    int ret, exit_status = OMPI_SUCCESS;

    opal_output_verbose(10, ompi_cr_output,
                        "ompi_cr: coord: ompi_cr_coord(%s)\n",
                        opal_crs_base_state_str((opal_crs_state_type_t)state));

    /*
     * Before calling the previous callback, we have the opportunity to
     * take action given the state.
     */
    if(OPAL_CRS_CHECKPOINT == state) {
        /* Do Checkpoint Phase work */
        ret = ompi_cr_coord_pre_ckpt();
        if( ret == OMPI_EXISTS) {
            return ret;
        }
        else if( ret != OMPI_SUCCESS) {
            return ret;
        }
    }
    else if (OPAL_CRS_CONTINUE == state ) {
        /* Do Continue Phase work */
        ompi_cr_coord_pre_continue();
    }
    else if (OPAL_CRS_RESTART == state ) {
        /* Do Restart Phase work */
        ompi_cr_coord_pre_restart();
    }
    else if (OPAL_CRS_TERM == state ) {
        /* Do Continue Phase work in prep to terminate the application */
    }
    else {
        /* We must have been in an error state from the checkpoint
         * recreate everything, as in the Continue Phase
         */
    }

    /*
     * Call the previous callback, which should be ORTE [which will handle OPAL]
     */
    if(OMPI_SUCCESS != (ret = prev_coord_callback(state)) ) {
        exit_status = ret;
        goto cleanup;
    }

    /*
     * After calling the previous callback, we have the opportunity to
     * take action given the state to tidy up.
     */
    if(OPAL_CRS_CHECKPOINT == state) {
        /* Do Checkpoint Phase work */
        ompi_cr_coord_post_ckpt();
    }
    else if (OPAL_CRS_CONTINUE == state ) {
        /* Do Continue Phase work */
        ompi_cr_coord_post_continue();

#if OPAL_ENABLE_CRDEBUG == 1
        /*
         * If C/R enabled debugging,
         * wait here for debugger to attach
         */
        if( MPIR_debug_with_checkpoint ) {
            MPIR_checkpoint_debugger_breakpoint();
        }
#endif
    }
    else if (OPAL_CRS_RESTART == state ) {
        /* Do Restart Phase work */
        ompi_cr_coord_post_restart();

#if OPAL_ENABLE_CRDEBUG == 1
        /*
         * If C/R enabled debugging,
         * wait here for debugger to attach
         */
        if( MPIR_debug_with_checkpoint ) {
            MPIR_checkpoint_debugger_breakpoint();
        }
#endif
    }
    else if (OPAL_CRS_TERM == state ) {
        /* Do Continue Phase work in prep to terminate the application */
    }
    else {
        /* We must have been in an error state from the checkpoint
         * recreate everything, as in the Continue Phase
         */
    }

 cleanup:
    return exit_status;
}
/*************
 * Pre Lower Layer
 *************/
static int ompi_cr_coord_pre_ckpt(void) {
    int ret, exit_status = OMPI_SUCCESS;

    /*
     * All the checkpoint heavy lifting is in here...
     */
    opal_output_verbose(10, ompi_cr_output,
                        "ompi_cr: coord_pre_ckpt: ompi_cr_coord_pre_ckpt()\n");

    /*
     * Notify Collectives
     * - Need to do this on a per communicator basis
     *   Traverse all communicators...
     */
    if (OMPI_SUCCESS != (ret = notify_collectives(OPAL_CR_CHECKPOINT))) {
        exit_status = ret;
        goto cleanup;
    }

    /*
     * Notify PML
     *  - Will notify BML and BTL's
     */
    if( ORTE_SUCCESS != (ret = mca_pml.pml_ft_event(OPAL_CRS_CHECKPOINT))) {
        exit_status = ret;
        goto cleanup;
    }

 cleanup:
    return exit_status;
}
static int ompi_cr_coord_pre_restart(void) {
    int ret, exit_status = OMPI_SUCCESS;

    opal_output_verbose(10, ompi_cr_output,
                        "ompi_cr: coord_pre_restart: ompi_cr_coord_pre_restart()");

    /*
     * Notify PML
     *  - Will notify BML and BTL's
     *  - The intention here is to have the PML shutdown all the old components
     *    and handles. On the second pass (once ORTE is restarted) we can
     *    reconnect processes.
     */
    if( ORTE_SUCCESS != (ret = mca_pml.pml_ft_event(OPAL_CRS_RESTART_PRE))) {
        exit_status = ret;
        goto cleanup;
    }

 cleanup:
    return exit_status;
}
static int ompi_cr_coord_pre_continue(void) {
    int ret, exit_status = OMPI_SUCCESS;

    /*
     * Can not really do much until ORTE is up and running,
     * so defer action until the post_continue function.
     */
    opal_output_verbose(10, ompi_cr_output,
                        "ompi_cr: coord_pre_continue: ompi_cr_coord_pre_continue()");

    if( orte_cr_continue_like_restart ) {
        /* Mimic ompi_cr_coord_pre_restart(); */
        if( ORTE_SUCCESS != (ret = mca_pml.pml_ft_event(OPAL_CRS_CONTINUE))) {
            exit_status = ret;
            goto cleanup;
        }
    }
    else {
        if( opal_cr_timing_barrier_enabled ) {
            OPAL_CR_SET_TIMER(OPAL_CR_TIMER_P2PBR1);
        }
        OPAL_CR_SET_TIMER(OPAL_CR_TIMER_P2P3);
        if( opal_cr_timing_barrier_enabled ) {
            OPAL_CR_SET_TIMER(OPAL_CR_TIMER_P2PBR2);
        }
        OPAL_CR_SET_TIMER(OPAL_CR_TIMER_CRCP1);
    }

 cleanup:
    return exit_status;
}
/*************
 * Post Lower Layer
 *************/
static int ompi_cr_coord_post_ckpt(void) {
    /*
     * Now that ORTE/OPAL are shutdown, we really can't do much
     * so assume pre_ckpt took care of everything.
     */
    opal_output_verbose(10, ompi_cr_output,
                        "ompi_cr: coord_post_ckpt: ompi_cr_coord_post_ckpt()");

    return OMPI_SUCCESS;
}
static int ompi_cr_coord_post_restart(void) {
    int ret, exit_status = OMPI_SUCCESS;

    opal_output_verbose(10, ompi_cr_output,
                        "ompi_cr: coord_post_restart: ompi_cr_coord_post_restart()");

    /*
     * Notify PML
     *  - Will notify BML and BTL's
     */
    if( ORTE_SUCCESS != (ret = mca_pml.pml_ft_event(OPAL_CRS_RESTART))) {
        exit_status = ret;
        goto cleanup;
    }

    /*
     * Notify Collectives
     * - Need to do this on a per communicator basis
     *   Traverse all communicators...
     */
    if (OMPI_SUCCESS != (ret = notify_collectives(OPAL_CRS_RESTART))) {
        exit_status = ret;
        goto cleanup;
    }

 cleanup:
    return exit_status;
}
static int ompi_cr_coord_post_continue(void) {
    int ret, exit_status = OMPI_SUCCESS;

    opal_output_verbose(10, ompi_cr_output,
                        "ompi_cr: coord_post_continue: ompi_cr_coord_post_continue()");

    /*
     * Notify PML
     *  - Will notify BML and BTL's
     */
    if( ORTE_SUCCESS != (ret = mca_pml.pml_ft_event(OPAL_CRS_CONTINUE))) {
        exit_status = ret;
        goto cleanup;
    }

    /*
     * Notify Collectives
     * - Need to do this on a per communicator basis
     *   Traverse all communicators...
     */
    if (OMPI_SUCCESS != (ret = notify_collectives(OPAL_CRS_CONTINUE))) {
        exit_status = ret;
        goto cleanup;
    }

 cleanup:
    return exit_status;
}