openmpi/ompi/mpiext/cr/c/quiesce_start.c
Josh Hursey e12ca48cd9 A number of C/R enhancements per RFC below:
http://www.open-mpi.org/community/lists/devel/2010/07/8240.php

Documentation:
  http://osl.iu.edu/research/ft/

Major Changes: 
-------------- 
 * Added C/R-enabled Debugging support. 
   Enabled with the --enable-crdebug flag. See the following website for more information: 
   http://osl.iu.edu/research/ft/crdebug/ 
 * Added Stable Storage (SStore) framework for checkpoint storage 
   * 'central' component does a direct to central storage save 
   * 'stage' component stages checkpoints to central storage while the application continues execution. 
     * 'stage' supports offline compression of checkpoints before moving (sstore_stage_compress) 
     * 'stage' supports local caching of checkpoints to improve automatic recovery (sstore_stage_caching) 
 * Added Compression (compress) framework to support compression of checkpoints (used by SStore 'stage' offline compression and by {{{opal-restart}}} decompression) 
 * Added two new ErrMgr recovery policies 
   * {{{crmig}}} C/R Process Migration 
   * {{{autor}}} C/R Automatic Recovery 
 * Added the {{{ompi-migrate}}} command line tool to support the {{{crmig}}} ErrMgr component 
 * Added CR MPI Ext functions (enable them with the {{{--enable-mpi-ext=cr}}} configure option); a usage sketch follows this list 
   * {{{OMPI_CR_Checkpoint}}} (Fixes trac:2342) 
   * {{{OMPI_CR_Restart}}} 
   * {{{OMPI_CR_Migrate}}} (may need some more work for mapping rules) 
   * {{{OMPI_CR_INC_register_callback}}} (Fixes trac:2192) 
   * {{{OMPI_CR_Quiesce_start}}} 
   * {{{OMPI_CR_Quiesce_checkpoint}}} 
   * {{{OMPI_CR_Quiesce_end}}} 
   * {{{OMPI_CR_self_register_checkpoint_callback}}} 
   * {{{OMPI_CR_self_register_restart_callback}}} 
   * {{{OMPI_CR_self_register_continue_callback}}} 
 * The ErrMgr predicted_fault() interface has been changed to take an opal_list_t of ErrMgr defined types. This will allow us to better support a wider range of fault prediction services in the future. 
 * Added a progress meter to: 
   * FileM rsh (filem_rsh_process_meter) 
   * SnapC full (snapc_full_progress_meter) 
   * SStore stage (sstore_stage_progress_meter) 
 * Added 2 new command line options to ompi-restart 
   * --showme : Display the full command line that would have been exec'ed. 
   * --mpirun_opts : Command line options to pass directly to mpirun. (Fixes trac:2413) 
 * Deprecated some MCA params: 
   * crs_base_snapshot_dir deprecated, use sstore_stage_local_snapshot_dir 
   * snapc_base_global_snapshot_dir deprecated, use sstore_base_global_snapshot_dir 
   * snapc_base_global_shared deprecated, use sstore_stage_global_is_shared 
   * snapc_base_store_in_place deprecated, replaced with different components of SStore 
   * snapc_base_global_snapshot_ref deprecated, use sstore_base_global_snapshot_ref 
   * snapc_base_establish_global_snapshot_dir deprecated, never well supported 
   * snapc_full_skip_filem deprecated, use sstore_stage_skip_filem 
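
The CR MPI Ext functions are ordinary C calls available to applications once Open MPI is configured with {{{--enable-mpi-ext=cr}}}. Below is a minimal sketch of bracketing a checkpoint-safe region with the quiesce calls. The {{{OMPI_CR_Quiesce_start(MPI_Comm, MPI_Info *)}}} signature matches the source file shown further down this page; the assumption that {{{OMPI_CR_Quiesce_end}}} takes the same arguments, and the use of the {{{<mpi-ext.h>}}} header, are illustrative and should be verified against {{{mpiext_cr_c.h}}}.

{{{
/* Hedged usage sketch: quiesce the job, do checkpoint-safe work, resume.
 * Assumes OMPI_CR_Quiesce_end mirrors the OMPI_CR_Quiesce_start
 * (MPI_Comm, MPI_Info *) signature shown in quiesce_start.c below. */
#include <stdio.h>
#include <mpi.h>
#include <mpi-ext.h>   /* CR extensions; built with --enable-mpi-ext=cr */

int main(int argc, char *argv[])
{
    MPI_Info info;

    MPI_Init(&argc, &argv);
    MPI_Info_create(&info);

    /* Collective: all ranks must enter the quiesce region */
    if (MPI_SUCCESS != OMPI_CR_Quiesce_start(MPI_COMM_WORLD, &info)) {
        fprintf(stderr, "Quiesce start failed\n");
    }

    /* ... in-flight MPI traffic has been drained; an external or
     *     requested checkpoint is safe here ... */

    if (MPI_SUCCESS != OMPI_CR_Quiesce_end(MPI_COMM_WORLD, &info)) {
        fprintf(stderr, "Quiesce end failed\n");
    }

    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}
}}}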

Minor Changes: 
-------------- 
 * Fixes trac:1924 : {{{ompi-restart}}} now recognizes path prefixed checkpoint handles and does the right thing. 
 * Fixes trac:2097 : {{{ompi-info}}} should now report all available CRS components 
 * Fixes trac:2161 : Manual checkpoint movement. A user can 'mv' a checkpoint directory from the original location to another and still restart from it. 
 * Fixes trac:2208 : Honor various TMPDIR variables instead of forcing {{{/tmp}}} 
 * Move {{{ompi_cr_continue_like_restart}}} to {{{orte_cr_continue_like_restart}}} to be more flexible in where this should be set. 
 * opal_crs_base_metadata_write* functions have been moved to SStore to support a wider range of metadata handling functionality. 
 * Cleanup the CRS framework and components to work with the SStore framework. 
 * Cleanup the SnapC framework and components to work with the SStore framework (cleans up these code paths considerably). 
 * Add 'quiesce' hook to CRCP for a future enhancement. 
 * We now require a BLCR version that supports {{{cr_request_file()}}} or {{{cr_request_checkpoint()}}} in order to make the code more maintainable. Note that {{{cr_request_file}}} has been deprecated since 0.7.0, so we prefer to use {{{cr_request_checkpoint()}}}. 
 * Add optional application-level INC callbacks (registered through the CR MPI Ext interface); a registration sketch follows this list. 
 * Increase the {{{opal_cr_thread_sleep_wait}}} parameter to 1000 microseconds to make the C/R thread less aggressive. 
 * {{{opal-restart}}} now looks for cache directories before falling back on stable storage when asked. 
 * {{{opal-restart}}} also supports local decompression before restarting 
 * {{{orte-checkpoint}}} now uses the SStore framework to work with the metadata 
 * {{{orte-restart}}} now uses the SStore framework to work with the metadata 
 * Remove the {{{orte-restart}}} preload option; the user only needs to select the 'stage' component to get this functionality. 
 * Since the '-am' parameter is saved in the metadata, {{{ompi-restart}}} no longer hard codes {{{-am ft-enable-cr}}}. 
 * Fix {{{hnp}}} ErrMgr so that if a previous component in the stack has 'fixed' the problem, then it should be skipped. 
 * Make sure to decrement 'num_local_procs' in the orted when a local process goes away. 
 * odls now checks the SStore framework to see if it needs to load any checkpoint files before launching (to support 'stage'). This separates the SStore logic from the --preload-[binary|files] options. 
 * Add unique IDs to the named pipes established between the orted and the app in SnapC. This is to better support migration and automatic recovery activities. 
 * Improve the checks for 'already checkpointing' error path. 
 * Add a recovery output timer to show how long it takes to restart a job. 
 * Do a better job of cleaning up the old session directory on restart. 
 * Add a local module to the autor and crmig ErrMgr components. These small modules prevent the 'orted' component from attempting a local recovery (which does not work for MPI apps at the moment). 
 * Add a fix for bounding the checkpointable region between MPI_Init and MPI_Finalize. 
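
As a rough illustration of the application-level INC callbacks added in this commit, the sketch below registers a restart callback through the CR MPI extension. The registration entry points are the {{{OMPI_CR_self_register_*_callback}}} and {{{OMPI_CR_INC_register_callback}}} functions listed under Major Changes; the exact callback prototypes live in {{{mpiext_cr_c.h}}}, and the {{{int (*)(void)}}} prototype with a single-argument registration used here is an assumption for illustration only.

{{{
/* Hedged sketch: register an application-level restart callback via the
 * CR MPI extension (requires --enable-mpi-ext=cr).  The callback prototype
 * (int (*)(void)) and the single-argument registration call are assumptions;
 * check mpiext_cr_c.h for the real typedefs before using this. */
#include <stdio.h>
#include <mpi.h>
#include <mpi-ext.h>

/* Assumed prototype: invoked after the process restarts from a checkpoint */
static int app_restart_cb(void)
{
    fprintf(stderr, "restart noticed: reopen files, re-seed RNGs, etc.\n");
    return 0;
}

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    /* Registration entry point named in this commit; the argument list
     * (just the callback pointer) is assumed for illustration. */
    OMPI_CR_self_register_restart_callback(app_restart_cb);

    /* ... normal application work; the callback fires on restart ... */

    MPI_Finalize();
    return 0;
}
}}}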

This commit was SVN r23587.

The following Trac tickets were found above:
  Ticket 1924 --> https://svn.open-mpi.org/trac/ompi/ticket/1924
  Ticket 2097 --> https://svn.open-mpi.org/trac/ompi/ticket/2097
  Ticket 2161 --> https://svn.open-mpi.org/trac/ompi/ticket/2161
  Ticket 2192 --> https://svn.open-mpi.org/trac/ompi/ticket/2192
  Ticket 2208 --> https://svn.open-mpi.org/trac/ompi/ticket/2208
  Ticket 2342 --> https://svn.open-mpi.org/trac/ompi/ticket/2342
  Ticket 2413 --> https://svn.open-mpi.org/trac/ompi/ticket/2413
2010-08-10 20:51:11 +00:00


/*
 * Copyright (c) 2004-2010 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation.  All rights reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */
#include "ompi_config.h"

#include <stdio.h>

#include "ompi/mpi/c/bindings.h"
#include "ompi/info/info.h"
#include "ompi/runtime/params.h"
#include "ompi/communicator/communicator.h"
#include "orte/mca/snapc/snapc.h"
#include "ompi/mpiext/cr/mpiext_cr_c.h"

static const char FUNC_NAME[] = "OMPI_CR_Quiesce_start";
int OMPI_CR_Quiesce_start(MPI_Comm commP, MPI_Info *info)
{
    int ret = MPI_SUCCESS;
    MPI_Comm comm = MPI_COMM_WORLD; /* Currently ignore provided comm */
    orte_snapc_base_request_op_t *datum = NULL;
    int my_rank;

    /* argument checking */
    if (MPI_PARAM_CHECK) {
        OMPI_ERR_INIT_FINALIZE(FUNC_NAME);
    }

    /*
     * Setup the data structure for the operation
     */
    datum = OBJ_NEW(orte_snapc_base_request_op_t);
    datum->event = ORTE_SNAPC_OP_QUIESCE_START;
    datum->is_active = true;

    MPI_Comm_rank(comm, &my_rank);
    if( 0 == my_rank ) {
        datum->leader = ORTE_PROC_MY_NAME->vpid;
    } else {
        datum->leader = -1; /* Unknown from non-root ranks */
    }

    /*
     * All processes must make this call before it can start
     */
    MPI_Barrier(comm);

    /*
     * Leader sends the request
     */
    OPAL_CR_ENTER_LIBRARY();
    ret = orte_snapc.request_op(datum);
    /*ret = ompi_crcp_base_quiesce_start(info);*/
    if( OMPI_SUCCESS != ret ) {
        /* Clean up and report the error; return here so 'datum' is not
         * released a second time below. */
        OBJ_RELEASE(datum);
        OPAL_CR_EXIT_LIBRARY();
        return OMPI_ERRHANDLER_INVOKE(comm, MPI_ERR_OTHER,
                                      FUNC_NAME);
    }
    OPAL_CR_EXIT_LIBRARY();

    datum->is_active = false;
    OBJ_RELEASE(datum);
    /*
     * (Old) info logic
     */
    /* ompi_info_set((ompi_info_t*)*info, "target", cur_datum.target_dir); */

    return ret;
}
/******************
 * Local Functions
 ******************/
#if 0
/* Info keys:
 *
 * - crs:
 *     none    = (Default) No CRS Service
 *     default = Whatever CRS service MPI chooses
 *     blcr    = BLCR
 *     self    = app level callbacks
 *
 * - cmdline:
 *     Command line to restart the process with.
 *     If empty, the user must manually enter it.
 *
 * - target:
 *     Absolute path to the target directory.
 *
 * - handle:
 *     first          = Earliest checkpoint directory available
 *     last           = Most recent checkpoint directory available
 *     [global:local] = handle provided by the MPI library
 *
 * - restarting:
 *     0 = not restarting
 *     1 = restarting
 *
 * - checkpointing:
 *     0 = No need to prepare for checkpointing
 *     1 = MPI should prepare for checkpointing
 *
 * - inflight:
 *     default = message
 *     message = Drain inflight messages at the message level
 *     network = Drain inflight messages at the network level (if possible)
 *
 * - user_space_mem:
 *     0 = Memory does not need to be managed
 *     1 = Memory must be in user space (i.e., not on the network card)
 */
static int extract_info_into_datum(ompi_info_t *info, orte_snapc_base_quiesce_t *datum)
{
    int info_flag = false;
    int max_crs_len = 32;
    bool info_bool = false;
    char *info_char = NULL;

    info_char = (char *) malloc(sizeof(char) * (OPAL_PATH_MAX+1));

    /*
     * Key: crs
     */
    ompi_info_get(info, "crs", max_crs_len, info_char, &info_flag);
    if( info_flag ) {
        datum->crs_name = strdup(info_char);
    }

    /*
     * Key: cmdline
     */
    ompi_info_get(info, "cmdline", OPAL_PATH_MAX, info_char, &info_flag);
    if( info_flag ) {
        datum->cmdline = strdup(info_char);
    }

    /*
     * Key: handle
     */
    ompi_info_get(info, "handle", OPAL_PATH_MAX, info_char, &info_flag);
    if( info_flag ) {
        datum->handle = strdup(info_char);
    }

    /*
     * Key: target
     */
    ompi_info_get(info, "target", OPAL_PATH_MAX, info_char, &info_flag);
    if( info_flag ) {
        datum->target_dir = strdup(info_char);
    }

    /*
     * Key: restarting
     */
    ompi_info_get_bool(info, "restarting", &info_bool, &info_flag);
    if( info_flag ) {
        datum->restarting = info_bool;
    } else {
        datum->restarting = false;
    }

    /*
     * Key: checkpointing
     */
    ompi_info_get_bool(info, "checkpointing", &info_bool, &info_flag);
    if( info_flag ) {
        datum->checkpointing = info_bool;
    } else {
        datum->checkpointing = false;
    }

    /*
     * Display all values
     */
    OPAL_OUTPUT_VERBOSE((3, mca_crcp_bkmrk_component.super.output_handle,
                         "crcp:bkmrk: %s extract_info: Info('crs' = '%s')",
                         ORTE_NAME_PRINT(ORTE_PROC_MY_NAME),
                         (NULL == datum->crs_name ? "Default (none)" : datum->crs_name)));
    OPAL_OUTPUT_VERBOSE((3, mca_crcp_bkmrk_component.super.output_handle,
                         "crcp:bkmrk: %s extract_info: Info('cmdline' = '%s')",
                         ORTE_NAME_PRINT(ORTE_PROC_MY_NAME),
                         (NULL == datum->cmdline ? "Default ()" : datum->cmdline)));
    OPAL_OUTPUT_VERBOSE((3, mca_crcp_bkmrk_component.super.output_handle,
                         "crcp:bkmrk: %s extract_info: Info('checkpointing' = '%c')",
                         ORTE_NAME_PRINT(ORTE_PROC_MY_NAME),
                         (datum->checkpointing ? 'T' : 'F')));
    OPAL_OUTPUT_VERBOSE((3, mca_crcp_bkmrk_component.super.output_handle,
                         "crcp:bkmrk: %s extract_info: Info('restarting' = '%c')",
                         ORTE_NAME_PRINT(ORTE_PROC_MY_NAME),
                         (datum->restarting ? 'T' : 'F')));

    if( NULL != info_char ) {
        free(info_char);
        info_char = NULL;
    }

    return ORTE_SUCCESS;
}
#endif
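
/*
 * Hedged usage sketch (illustration only): passing the info keys documented
 * in the block above into the quiesce call from application code.  The key
 * names come from that comment block; the OMPI_CR_Quiesce_start signature
 * matches this file, while OMPI_CR_Quiesce_end is assumed to take the same
 * (MPI_Comm, MPI_Info *) arguments.  Note that, as shipped, the info keys
 * are not yet consumed (the handling code above is under #if 0), so this
 * only shows the intended calling convention.
 */
#if 0
static void example_quiesce_with_info(void)
{
    MPI_Info info;

    MPI_Info_create(&info);
    MPI_Info_set(info, "inflight", "message");   /* drain at the message level */
    MPI_Info_set(info, "checkpointing", "1");    /* ask MPI to prepare for a checkpoint */

    OMPI_CR_Quiesce_start(MPI_COMM_WORLD, &info);
    /* ... checkpoint-safe region ... */
    OMPI_CR_Quiesce_end(MPI_COMM_WORLD, &info);

    MPI_Info_free(&info);
}
#endif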