openmpi/ompi/mca/crcp/bkmrk/crcp_bkmrk_pml.h
Josh Hursey e12ca48cd9 A number of C/R enhancements per RFC below:
http://www.open-mpi.org/community/lists/devel/2010/07/8240.php

Documentation:
  http://osl.iu.edu/research/ft/

Major Changes: 
-------------- 
 * Added C/R-enabled Debugging support. 
   Enabled with the --enable-crdebug flag. See the following website for more information: 
   http://osl.iu.edu/research/ft/crdebug/ 
 * Added Stable Storage (SStore) framework for checkpoint storage 
   * 'central' component does a direct to central storage save 
   * 'stage' component stages checkpoints to central storage while the application continues execution. 
     * 'stage' supports offline compression of checkpoints before moving (sstore_stage_compress) 
     * 'stage' supports local caching of checkpoints to improve automatic recovery (sstore_stage_caching) 
 * Added Compression (compress) framework to support compression of checkpoints
   (used by the 'stage' SStore component, see sstore_stage_compress)
 * Added two new ErrMgr recovery policies
   * {{{crmig}}} C/R Process Migration 
   * {{{autor}}} C/R Automatic Recovery 
 * Added the {{{ompi-migrate}}} command line tool to support the {{{crmig}}} ErrMgr component 
 * Added CR MPI Ext functions (enable them with {{{--enable-mpi-ext=cr}}} configure option); see the usage sketch after this list
   * {{{OMPI_CR_Checkpoint}}} (Fixes trac:2342) 
   * {{{OMPI_CR_Restart}}} 
   * {{{OMPI_CR_Migrate}}} (may need some more work for mapping rules) 
   * {{{OMPI_CR_INC_register_callback}}} (Fixes trac:2192) 
   * {{{OMPI_CR_Quiesce_start}}} 
   * {{{OMPI_CR_Quiesce_checkpoint}}} 
   * {{{OMPI_CR_Quiesce_end}}} 
   * {{{OMPI_CR_self_register_checkpoint_callback}}} 
   * {{{OMPI_CR_self_register_restart_callback}}} 
   * {{{OMPI_CR_self_register_continue_callback}}} 
 * The ErrMgr predicted_fault() interface has been changed to take an opal_list_t of ErrMgr defined types. This will allow us to better support a wider range of fault prediction services in the future. 
 * Add a progress meter to: 
   * FileM rsh (filem_rsh_process_meter) 
   * SnapC full (snapc_full_progress_meter) 
   * SStore stage (sstore_stage_progress_meter) 
 * Added 2 new command line options to ompi-restart 
   * --showme : Display the full command line that would have been exec'ed. 
   * --mpirun_opts : Command line options to pass directly to mpirun. (Fixes trac:2413) 
 * Deprecated some MCA params: 
   * crs_base_snapshot_dir deprecated, use sstore_stage_local_snapshot_dir 
   * snapc_base_global_snapshot_dir deprecated, use sstore_base_global_snapshot_dir 
   * snapc_base_global_shared deprecated, use sstore_stage_global_is_shared 
   * snapc_base_store_in_place deprecated, replaced with different components of SStore 
   * snapc_base_global_snapshot_ref deprecated, use sstore_base_global_snapshot_ref 
   * snapc_base_establish_global_snapshot_dir deprecated, never well supported 
   * snapc_full_skip_filem deprecated, use sstore_stage_skip_filem 
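
A minimal usage sketch for the CR MPI Ext functions listed above. This is a hedged
illustration only: the exact prototypes live in the CR extension header, and the
argument list of {{{OMPI_CR_Checkpoint}}} and the {{{mpi-ext.h}}} include shown
here are assumptions, not a definitive API reference.

{{{
#include <stdio.h>
#include <mpi.h>
#include <mpi-ext.h>   /* assumed header exposing the OMPI_CR_* prototypes */

int main(int argc, char *argv[])
{
    char *handle = NULL;   /* checkpoint handle returned by the runtime (assumed) */
    int   seq    = -1;     /* checkpoint sequence number (assumed) */

    MPI_Init(&argc, &argv);

    /* ... application work ... */

    /* Ask the runtime for an application-driven checkpoint of the job.
     * Signature assumed for illustration: OMPI_CR_Checkpoint(&handle, &seq, timeout). */
    OMPI_CR_Checkpoint(&handle, &seq, 0);
    printf("checkpoint sequence %d\n", seq);

    /* ... more application work ... */

    MPI_Finalize();
    return 0;
}
}}}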

Minor Changes: 
-------------- 
 * Fixes trac:1924 : {{{ompi-restart}}} now recognizes path prefixed checkpoint handles and does the right thing. 
 * Fixes trac:2097 : {{{ompi-info}}} should now report all available CRS components 
 * Fixes trac:2161 : Manual checkpoint movement. A user can 'mv' a checkpoint directory from the original location to another and still restart from it. 
 * Fixes trac:2208 : Honor various TMPDIR variables instead of forcing {{{/tmp}}}
 * Move {{{ompi_cr_continue_like_restart}}} to {{{orte_cr_continue_like_restart}}} to be more flexible in where this should be set. 
 * opal_crs_base_metadata_write* functions have been moved to SStore to support a wider range of metadata handling functionality. 
 * Cleanup the CRS framework and components to work with the SStore framework. 
 * Cleanup the SnapC framework and components to work with the SStore framework (cleans up these code paths considerably). 
 * Add 'quiesce' hook to CRCP for a future enhancement. 
 * We now require a BLCR version that supports {{{cr_request_file()}}} or {{{cr_request_checkpoint()}}} in order to make the code more maintainable. Note that {{{cr_request_file}}} has been deprecated since 0.7.0, so we prefer to use {{{cr_request_checkpoint()}}}. 
 * Add optional application level INC callbacks (registered through the CR MPI Ext interface). 
 * Increase the {{{opal_cr_thread_sleep_wait}}} parameter to 1000 microseconds to make the C/R thread less aggressive. 
 * {{{opal-restart}}} now looks for cache directories before falling back on stable storage when asked. 
 * {{{opal-restart}}} also supports local decompression before restarting
 * {{{orte-checkpoint}}} now uses the SStore framework to work with the metadata 
 * {{{orte-restart}}} now uses the SStore framework to work with the metadata 
 * Remove the {{{orte-restart}}} preload option. This was removed since the user only needs to select the 'stage' component in order to support this functionality. 
 * Since the '-am' parameter is saved in the metadata, {{{ompi-restart}}} no longer hard codes {{{-am ft-enable-cr}}}. 
 * Fix {{{hnp}}} ErrMgr so that if a previous component in the stack has 'fixed' the problem, then it should be skipped. 
 * Make sure to decrement the number of 'num_local_procs' in the orted when one goes away. 
 * odls now checks the SStore framework to see if it needs to load any checkpoint files before launching (to support 'stage'). This separates the SStore logic from the --preload-[binary|files] options. 
 * Add unique IDs to the named pipes established between the orted and the app in SnapC. This is to better support migration and automatic recovery activities. 
 * Improve the checks for 'already checkpointing' error path. 
 * Added a recovery output timer to show how long it takes to restart a job
 * Do a better job of cleaning up the old session directory on restart. 
 * Add a local module to the autor and crmig ErrMgr components. These small modules prevent the 'orted' component from attempting a local recovery (which does not work for MPI apps at the moment).
 * Add a fix for bounding the checkpointable region between MPI_Init and MPI_Finalize. 

This commit was SVN r23587.

The following Trac tickets were found above:
  Ticket 1924 --> https://svn.open-mpi.org/trac/ompi/ticket/1924
  Ticket 2097 --> https://svn.open-mpi.org/trac/ompi/ticket/2097
  Ticket 2161 --> https://svn.open-mpi.org/trac/ompi/ticket/2161
  Ticket 2192 --> https://svn.open-mpi.org/trac/ompi/ticket/2192
  Ticket 2208 --> https://svn.open-mpi.org/trac/ompi/ticket/2208
  Ticket 2342 --> https://svn.open-mpi.org/trac/ompi/ticket/2342
  Ticket 2413 --> https://svn.open-mpi.org/trac/ompi/ticket/2413
2010-08-10 20:51:11 +00:00


/*
 * Copyright (c) 2004-2010 The Trustees of Indiana University.
 *                         All rights reserved.
 * Copyright (c) 2004-2005 The Trustees of the University of Tennessee.
 *                         All rights reserved.
 * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart.  All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

/**
 * @file
 *
 * Hoke CRCP component
 *
 */

#ifndef MCA_CRCP_HOKE_PML_EXPORT_H
#define MCA_CRCP_HOKE_PML_EXPORT_H

#include "ompi_config.h"

#include "opal/mca/mca.h"
#include "ompi/mca/crcp/crcp.h"
#include "ompi/communicator/communicator.h"
#include "ompi/mca/crcp/bkmrk/crcp_bkmrk.h"

BEGIN_C_DECLS
/*
 * PML Coordination functions
 */
ompi_crcp_base_pml_state_t* ompi_crcp_bkmrk_pml_enable
    ( bool enable, ompi_crcp_base_pml_state_t* pml_state );

ompi_crcp_base_pml_state_t* ompi_crcp_bkmrk_pml_add_comm
    ( struct ompi_communicator_t* comm,
      ompi_crcp_base_pml_state_t* pml_state );

ompi_crcp_base_pml_state_t* ompi_crcp_bkmrk_pml_del_comm
    ( struct ompi_communicator_t* comm,
      ompi_crcp_base_pml_state_t* pml_state );

ompi_crcp_base_pml_state_t* ompi_crcp_bkmrk_pml_add_procs
    ( struct ompi_proc_t **procs, size_t nprocs,
      ompi_crcp_base_pml_state_t* pml_state );

ompi_crcp_base_pml_state_t* ompi_crcp_bkmrk_pml_del_procs
    ( struct ompi_proc_t **procs, size_t nprocs,
      ompi_crcp_base_pml_state_t* pml_state );

ompi_crcp_base_pml_state_t* ompi_crcp_bkmrk_pml_progress
    ( ompi_crcp_base_pml_state_t* pml_state );

ompi_crcp_base_pml_state_t* ompi_crcp_bkmrk_pml_iprobe
    ( int dst, int tag, struct ompi_communicator_t* comm,
      int *matched, ompi_status_public_t* status,
      ompi_crcp_base_pml_state_t* pml_state );

ompi_crcp_base_pml_state_t* ompi_crcp_bkmrk_pml_probe
    ( int dst, int tag, struct ompi_communicator_t* comm,
      ompi_status_public_t* status,
      ompi_crcp_base_pml_state_t* pml_state );

ompi_crcp_base_pml_state_t* ompi_crcp_bkmrk_pml_isend_init
    ( void *buf, size_t count, ompi_datatype_t *datatype,
      int dst, int tag, mca_pml_base_send_mode_t mode,
      struct ompi_communicator_t* comm,
      struct ompi_request_t **request,
      ompi_crcp_base_pml_state_t* pml_state );

ompi_crcp_base_pml_state_t* ompi_crcp_bkmrk_pml_isend
    ( void *buf, size_t count, ompi_datatype_t *datatype,
      int dst, int tag, mca_pml_base_send_mode_t mode,
      struct ompi_communicator_t* comm,
      struct ompi_request_t **request,
      ompi_crcp_base_pml_state_t* pml_state );

ompi_crcp_base_pml_state_t* ompi_crcp_bkmrk_pml_send
    ( void *buf, size_t count, ompi_datatype_t *datatype,
      int dst, int tag, mca_pml_base_send_mode_t mode,
      struct ompi_communicator_t* comm,
      ompi_crcp_base_pml_state_t* pml_state );

ompi_crcp_base_pml_state_t* ompi_crcp_bkmrk_pml_irecv_init
    ( void *buf, size_t count, ompi_datatype_t *datatype,
      int src, int tag, struct ompi_communicator_t* comm,
      struct ompi_request_t **request,
      ompi_crcp_base_pml_state_t* pml_state );

ompi_crcp_base_pml_state_t* ompi_crcp_bkmrk_pml_irecv
    ( void *buf, size_t count, ompi_datatype_t *datatype,
      int src, int tag, struct ompi_communicator_t* comm,
      struct ompi_request_t **request,
      ompi_crcp_base_pml_state_t* pml_state );

ompi_crcp_base_pml_state_t* ompi_crcp_bkmrk_pml_recv
    ( void *buf, size_t count, ompi_datatype_t *datatype,
      int src, int tag, struct ompi_communicator_t* comm,
      ompi_status_public_t* status,
      ompi_crcp_base_pml_state_t* pml_state );

ompi_crcp_base_pml_state_t* ompi_crcp_bkmrk_pml_dump
    ( struct ompi_communicator_t* comm, int verbose,
      ompi_crcp_base_pml_state_t* pml_state );

ompi_crcp_base_pml_state_t* ompi_crcp_bkmrk_pml_start
    ( size_t count, ompi_request_t** requests,
      ompi_crcp_base_pml_state_t* pml_state );

ompi_crcp_base_pml_state_t* ompi_crcp_bkmrk_pml_ft_event
    ( int state, ompi_crcp_base_pml_state_t* pml_state );
enum ompi_crcp_bkmrk_pml_quiesce_tag_type_t {
    QUIESCE_TAG_NONE = 0,   /* 0 No tag specified */
    QUIESCE_TAG_CKPT,       /* 1 Prepare for checkpoint */
    QUIESCE_TAG_CONTINUE,   /* 2 Continue after a checkpoint */
    QUIESCE_TAG_RESTART,    /* 3 Restart from a checkpoint */
    QUIESCE_TAG_UNKNOWN     /* 4 Unknown */
};
typedef enum ompi_crcp_bkmrk_pml_quiesce_tag_type_t ompi_crcp_bkmrk_pml_quiesce_tag_type_t;

int ompi_crcp_bkmrk_pml_quiesce_start(ompi_crcp_bkmrk_pml_quiesce_tag_type_t tag );
int ompi_crcp_bkmrk_pml_quiesce_end(ompi_crcp_bkmrk_pml_quiesce_tag_type_t tag );
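
/*
 * Illustrative sketch (not part of this header): a checkpoint-time quiesce
 * could bracket the checkpoint with the two functions above.  The call site
 * and error handling shown here are assumptions for illustration only.
 *
 *   int ret;
 *   if (OMPI_SUCCESS != (ret = ompi_crcp_bkmrk_pml_quiesce_start(QUIESCE_TAG_CKPT))) {
 *       return ret;   // Could not reach a quiet point; abort the checkpoint
 *   }
 *   // ... take the checkpoint while no tracked traffic is in flight ...
 *   ret = ompi_crcp_bkmrk_pml_quiesce_end(QUIESCE_TAG_CKPT);
 */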
/*
 * Request function
 */
int ompi_crcp_bkmrk_request_complete(struct ompi_request_t *request);

/***********************************
 * Globally Defined Structures
 ***********************************/

/*
 * Types of Messages
 */
enum ompi_crcp_bkmrk_pml_message_type_t {
    COORD_MSG_TYPE_UNKNOWN, /* 0 Unknown type      */
    COORD_MSG_TYPE_B_SEND,  /* 1 Blocking Send     */
    COORD_MSG_TYPE_I_SEND,  /* 2 Non-Blocking Send */
    COORD_MSG_TYPE_P_SEND,  /* 3 Persistent Send   */
    COORD_MSG_TYPE_B_RECV,  /* 4 Blocking Recv     */
    COORD_MSG_TYPE_I_RECV,  /* 5 Non-Blocking Recv */
    COORD_MSG_TYPE_P_RECV   /* 6 Persistent Recv   */
};
typedef enum ompi_crcp_bkmrk_pml_message_type_t ompi_crcp_bkmrk_pml_message_type_t;
/*
 * A list structure to contain {buffer, request, status} sets
 *
 *  send/recv type | Buffer | Request | Status | Active
 *  ---------------+--------+---------+--------+--------
 *  Blocking       |   No   |   No    |   No   |   No
 *  Non-Blocking   |   No   |   Yes   |   Yes  |   No
 *  Persistent     |   Yes  |   Yes   |   Yes  |   Yes
 *
 *  No : Does not require this field
 *  Yes: Does require this field
 */
struct ompi_crcp_bkmrk_pml_message_content_ref_t {
    /** This is a list object */
    opal_list_item_t super;
    /** Buffer for data */
    void * buffer;
    /** Request for this message */
    ompi_request_t *request;
    /** Status */
    ompi_status_public_t status;
    /** Active ? */
    bool active;
    /** Done ? - Only useful in Drain */
    bool done;
    /** Already_posted ? - Only useful in Drain */
    bool already_posted;
    /** Drained */
    bool already_drained;
    /** JJH XXX Debug counter */
    uint64_t msg_id;
};
typedef struct ompi_crcp_bkmrk_pml_message_content_ref_t ompi_crcp_bkmrk_pml_message_content_ref_t;
OBJ_CLASS_DECLARATION(ompi_crcp_bkmrk_pml_message_content_ref_t);

void ompi_crcp_bkmrk_pml_message_content_ref_construct(ompi_crcp_bkmrk_pml_message_content_ref_t *content_ref);
void ompi_crcp_bkmrk_pml_message_content_ref_destruct( ompi_crcp_bkmrk_pml_message_content_ref_t *content_ref);
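
/*
 * Illustrative sketch (assumption, not part of this header): content
 * references are OPAL objects, so a caller would typically manage them with
 * the usual OBJ_NEW / OBJ_RELEASE lifecycle, which invokes the construct /
 * destruct hooks declared above.  The surrounding 'request' and 'msg_ref'
 * variables are hypothetical.
 *
 *   ompi_crcp_bkmrk_pml_message_content_ref_t *content_ref;
 *
 *   content_ref = OBJ_NEW(ompi_crcp_bkmrk_pml_message_content_ref_t);
 *   content_ref->request = request;      // track the outstanding request
 *   content_ref->active  = true;
 *   opal_list_append(&msg_ref->msg_contents, &content_ref->super);
 *   ...
 *   opal_list_remove_item(&msg_ref->msg_contents, &content_ref->super);
 *   OBJ_RELEASE(content_ref);
 */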
/*
 * Drain Message Reference
 *  - The first section of this structure should match
 *    ompi_crcp_bkmrk_pml_traffic_message_ref_t exactly.
 */
struct ompi_crcp_bkmrk_pml_drain_message_ref_t {
    /** This is a list object */
    opal_list_item_t super;
    /** Sequence Number of this message */
    uint64_t msg_id;
    /** Type of message this references */
    ompi_crcp_bkmrk_pml_message_type_t msg_type;
    /** Count for data */
    size_t count;
    /** Datatype */
    struct ompi_datatype_t * datatype;
    /** Quick reference to the size of the datatype */
    size_t ddt_size;
    /** Message Tag */
    int tag;
    /** Peer rank to which it was sent/recv'ed if known */
    int rank;
    /** Communicator pointer */
    ompi_communicator_t* comm;
    /** Message Contents */
    opal_list_t msg_contents;
    /** Peer which we received from */
    orte_process_name_t proc_name;
    /** Is this message complete WRT PML semantics?
     *  true  = message done on this side (send or receive)
     *  false = message still in process (sending or receiving)
     */
    int done;
    /** Is the message actively being worked on?
     *  true  = Message is !done, and is in the progress cycle
     *  false = Message is !done and is *not* in the progress cycle ( [send/recv]_init requests)
     */
    int active;
    /** Has this message been posted?
     *  true  = message was posted (send or recv)
     *  false = message was not yet posted.
     *  Used when trying to figure out which messages the drain protocol needs to post, and
     *  which messages have already been posted for it.
     */
    int already_posted;
};
typedef struct ompi_crcp_bkmrk_pml_drain_message_ref_t ompi_crcp_bkmrk_pml_drain_message_ref_t;
OBJ_CLASS_DECLARATION(ompi_crcp_bkmrk_pml_drain_message_ref_t);

void ompi_crcp_bkmrk_pml_drain_message_ref_construct(ompi_crcp_bkmrk_pml_drain_message_ref_t *msg_ref);
void ompi_crcp_bkmrk_pml_drain_message_ref_destruct( ompi_crcp_bkmrk_pml_drain_message_ref_t *msg_ref);
/*
 * List of Pending ACKs to drained messages
 */
struct ompi_crcp_bkmrk_pml_drain_message_ack_ref_t {
    /** This is a list object */
    opal_list_item_t super;
    /** Complete flag */
    bool complete;
    /** Peer which we received from */
    orte_process_name_t peer;
};
typedef struct ompi_crcp_bkmrk_pml_drain_message_ack_ref_t ompi_crcp_bkmrk_pml_drain_message_ack_ref_t;
OBJ_CLASS_DECLARATION(ompi_crcp_bkmrk_pml_drain_message_ack_ref_t);

void ompi_crcp_bkmrk_pml_drain_message_ack_ref_construct(ompi_crcp_bkmrk_pml_drain_message_ack_ref_t *msg_ack_ref);
void ompi_crcp_bkmrk_pml_drain_message_ack_ref_destruct( ompi_crcp_bkmrk_pml_drain_message_ack_ref_t *msg_ack_ref);
/*
 * Regular Traffic Message Reference
 *  Tracks message signature {count, datatype_size, tag, comm, peer}
 */
struct ompi_crcp_bkmrk_pml_traffic_message_ref_t {
    /** This is a list object */
    opal_list_item_t super;
    /** Sequence Number of this message */
    uint64_t msg_id;
    /** Type of message this references */
    ompi_crcp_bkmrk_pml_message_type_t msg_type;
    /** Count for data */
    size_t count;
    /** Quick reference to the size of the datatype */
    size_t ddt_size;
    /** Message Tag */
    int tag;
    /** Peer rank to which it was sent/recv'ed if known */
    int rank;
    /** Communicator pointer */
    ompi_communicator_t* comm;
    /** Message Contents */
    opal_list_t msg_contents;
    /** Peer which we received from */
    orte_process_name_t proc_name;

    /* Sample movement of values (mirrored for send):
     *                    Recv() iRecv() irecv_init() start() req_complete()
     * * Pre:
     *   matched        = false  false   false        ---     ---
     *   done           = false  false   false        ---     true
     *   active         = true   true    false        true    false
     *   already_posted = true   true    true         ---     ---
     * * Post:
     *   matched        = false  false   false        ---     ---
     *   done           = true   false   false        false   true
     *   active         = false  true    false        true    false
     *   already_posted = true   true    true         ---     ---
     * * Drain
     *   already_posted = false -> true when posted irecv
     */
    /** Has this message been matched by the peer?
     *  - Resolved during bookmark exchange
     *  true  = peer confirmed the receipt of this message
     *  false = unknown if peer has received this message or not
     */
    int matched;
    /** Is this message complete WRT PML semantics?
     *  - Is it not in-flight?
     *  true  = message done on this side (send or receive)
     *  false = message still in process (sending or receiving)
     */
    int done;
    /** Is the message actively being worked on?
     *  - Known to be in-flight?
     *  true  = Message is !done, and is in the progress cycle
     *  false = Message is !done and is *not* in the progress cycle ( [send/recv]_init requests)
     */
    int active;
    /** How many times a persistent send/recv has been posted, but not activated. */
    int posted;
    /** Actively drained.
     *  These are messages that are active, and being drained. So if we checkpoint while the drain
     *  list is not empty then we do not try to count these messages more than once.
     */
    int active_drain;
};
typedef struct ompi_crcp_bkmrk_pml_traffic_message_ref_t ompi_crcp_bkmrk_pml_traffic_message_ref_t;
OBJ_CLASS_DECLARATION(ompi_crcp_bkmrk_pml_traffic_message_ref_t);

void ompi_crcp_bkmrk_pml_traffic_message_ref_construct(ompi_crcp_bkmrk_pml_traffic_message_ref_t *msg_ref);
void ompi_crcp_bkmrk_pml_traffic_message_ref_destruct( ompi_crcp_bkmrk_pml_traffic_message_ref_t *msg_ref);
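
/*
 * Illustrative sketch (assumption for illustration only): following the
 * "Sample movement of values" table above, completing a non-blocking receive
 * would flip the bookkeeping flags roughly as follows:
 *
 *   msg_ref->active = false;   // no longer in the progress cycle
 *   msg_ref->done   = true;    // complete WRT PML semantics
 *   // 'matched' stays false until a later bookmark exchange confirms that
 *   // the peer has accounted for this message.
 */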
/*
 * A structure for a single process
 * Contains:
 *  - List of sent messages to this peer
 *  - List of received messages from this peer
 *  - Message totals
 */
struct ompi_crcp_bkmrk_pml_peer_ref_t {
    /** This is a list object */
    opal_list_item_t super;

    /** Name of peer */
    orte_process_name_t proc_name;

    /** List of messages sent to this peer */
    opal_list_t send_list;        /**< pml_send       */
    opal_list_t isend_list;       /**< pml_isend      */
    opal_list_t send_init_list;   /**< pml_isend_init */

    /** List of messages recved from this peer */
    opal_list_t recv_list;        /**< pml_recv       */
    opal_list_t irecv_list;       /**< pml_irecv      */
    opal_list_t recv_init_list;   /**< pml_irecv_init */

    /** List of messages drained from this peer */
    opal_list_t drained_list;

    /*
     * These are totals over all communicators provided for convenience.
     *
     * If we are P_n and this structure represents P_m then:
     *  - total_*   = P_n --> P_m
     *  - matched_* = P_n <-- P_m
     * Where P_n --> P_m means:
     *   the number of messages P_n knows that it has sent/recv to/from P_m
     * And P_n <-- P_m means:
     *   the number of messages P_m told us that it has sent/recv to/from P_n
     *
     * How total_* are used:
     *  Send:
     *   Before put on the wire: ++total
     *  Recv:
     *   Once completed:         ++total
     */
    /** Total Number of messages sent */
    uint32_t total_msgs_sent;
    uint32_t matched_msgs_sent;

    /** Total Number of messages received */
    uint32_t total_msgs_recvd;
    uint32_t matched_msgs_recvd;

    /** Total Number of messages drained */
    uint32_t total_drained_msgs;

    /** If peer is expecting an ACK after draining the messages */
    bool ack_required;
};
typedef struct ompi_crcp_bkmrk_pml_peer_ref_t ompi_crcp_bkmrk_pml_peer_ref_t;
OBJ_CLASS_DECLARATION(ompi_crcp_bkmrk_pml_peer_ref_t);

void ompi_crcp_bkmrk_pml_peer_ref_construct(ompi_crcp_bkmrk_pml_peer_ref_t *bkm_proc);
void ompi_crcp_bkmrk_pml_peer_ref_destruct( ompi_crcp_bkmrk_pml_peer_ref_t *bkm_proc);
/*
 * Local version of the PML state
 */
struct ompi_crcp_bkmrk_pml_state_t {
    ompi_crcp_base_pml_state_t                 p_super;
    ompi_crcp_base_pml_state_t                *prev_ptr;
    ompi_crcp_bkmrk_pml_peer_ref_t            *peer_ref;
    ompi_crcp_bkmrk_pml_traffic_message_ref_t *msg_ref;
};
typedef struct ompi_crcp_bkmrk_pml_state_t ompi_crcp_bkmrk_pml_state_t;
OBJ_CLASS_DECLARATION(ompi_crcp_bkmrk_pml_state_t);
/***********************************
 * Globally Defined Variables
 ***********************************/

/*
 * List of known peers
 */
extern opal_list_t ompi_crcp_bkmrk_pml_peer_refs;
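
/*
 * Illustrative sketch (assumption, not part of this header): since the peer
 * list is a regular opal_list_t, a lookup by process name can walk it with
 * the opal_list accessors.  The lookup shape and the 'name' variable shown
 * here are hypothetical.
 *
 *   opal_list_item_t *item;
 *   ompi_crcp_bkmrk_pml_peer_ref_t *peer;
 *
 *   for (item  = opal_list_get_first(&ompi_crcp_bkmrk_pml_peer_refs);
 *        item != opal_list_get_end(&ompi_crcp_bkmrk_pml_peer_refs);
 *        item  = opal_list_get_next(item)) {
 *       peer = (ompi_crcp_bkmrk_pml_peer_ref_t*)item;
 *       if (peer->proc_name.jobid == name.jobid &&
 *           peer->proc_name.vpid  == name.vpid) {
 *           return peer;   // found the bookkeeping entry for this peer
 *       }
 *   }
 */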
END_C_DECLS
#endif /* MCA_CRCP_HOKE_PML_EXPORT_H */