/*
 * Copyright (c) 2004-2011 The Trustees of the University of Tennessee.
 *                         All rights reserved.
 * Copyright (c) 2012      Los Alamos National Security, LLC. All rights
 *                         reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

#include "ompi_config.h"
#include "vprotocol_pessimist_eventlog.h"

#include "ompi/mca/dpm/dpm.h"
#include "ompi/mca/pubsub/pubsub.h"
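
/**
 * Connect to the event logger of rank el_rank and return the resulting
 * communicator in el_comm. The logger's port is looked up under the name
 * built from VPROTOCOL_EVENT_LOGGER_NAME_FMT, an RML message wakes the
 * remote end so it enters connect/accept, and a final handshake sends our
 * rank and receives the logger's limits (per the comment below, presumably
 * the max buffer size and max clock).
 */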
int vprotocol_pessimist_event_logger_connect(int el_rank, ompi_communicator_t **el_comm)
{
    int rc;
    opal_buffer_t *buffer;
    char *port;
    ompi_process_name_t el_proc;
    char *hnp_uri, *rml_uri;
    ompi_rml_tag_t el_tag;
    char name[MPI_MAX_PORT_NAME];
    int rank;
    vprotocol_pessimist_clock_t connect_info[2];

    snprintf(name, MPI_MAX_PORT_NAME, VPROTOCOL_EVENT_LOGGER_NAME_FMT, el_rank);
    port = ompi_pubsub.lookup(name, MPI_INFO_NULL);
    if(NULL == port)
    {
        return OMPI_ERR_NOT_FOUND;
    }
    V_OUTPUT_VERBOSE(45, "Found port < %s >", port);

    /* separate the string into the HNP and RML URI and tag */
    if (OMPI_SUCCESS != (rc = ompi_dpm.parse_port(port, &hnp_uri, &rml_uri, &el_tag))) {
        OMPI_ERROR_LOG(rc);
        return rc;
    }
    /* extract the originating proc's name */
    if (OMPI_SUCCESS != (rc = ompi_rte_parse_uris(rml_uri, &el_proc, NULL))) {
        OMPI_ERROR_LOG(rc);
        free(rml_uri); free(hnp_uri);
        return rc;
    }
    /* make sure we can route rml messages to the destination */
    if (OMPI_SUCCESS != (rc = ompi_dpm.route_to_port(hnp_uri, &el_proc))) {
        OMPI_ERROR_LOG(rc);
        free(rml_uri); free(hnp_uri);
        return rc;
    }
    free(rml_uri); free(hnp_uri);

    /* Send an rml message to tell the remote end to wake up and jump into
     * connect/accept */
    buffer = OBJ_NEW(opal_buffer_t);
    ompi_rte_send_buffer_nb(&el_proc, buffer, el_tag+1, NULL, NULL);

    rc = ompi_dpm.connect_accept(MPI_COMM_SELF, 0, port, true, el_comm);
    if(OMPI_SUCCESS != rc) {
        OMPI_ERROR_LOG(rc);
    }

    /* Send rank, receive max buffer size and max_clock back */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    rc = mca_pml_v.host_pml.pml_send(&rank, 1, MPI_INTEGER, 0,
                                     VPROTOCOL_PESSIMIST_EVENTLOG_NEW_CLIENT_CMD,
                                     MCA_PML_BASE_SEND_STANDARD,
                                     mca_vprotocol_pessimist.el_comm);
    if(OPAL_UNLIKELY(MPI_SUCCESS != rc))
        OMPI_ERRHANDLER_INVOKE(mca_vprotocol_pessimist.el_comm, rc,
                               __FILE__ ": failed sending event logger handshake");
    rc = mca_pml_v.host_pml.pml_recv(&connect_info, 2, MPI_UNSIGNED_LONG_LONG,
                                     0, VPROTOCOL_PESSIMIST_EVENTLOG_NEW_CLIENT_CMD,
                                     mca_vprotocol_pessimist.el_comm, MPI_STATUS_IGNORE);
    if(OPAL_UNLIKELY(MPI_SUCCESS != rc))
        OMPI_ERRHANDLER_INVOKE(mca_vprotocol_pessimist.el_comm, rc,
                               __FILE__ ": failed receiving event logger handshake");

    return rc;
}
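
/**
 * Close the connection to the event logger: the communicator obtained
 * from vprotocol_pessimist_event_logger_connect() is disconnected.
 */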
int vprotocol_pessimist_event_logger_disconnect(ompi_communicator_t *el_comm)
{
    ompi_dpm.disconnect(el_comm);
    return OMPI_SUCCESS;
}
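
/**
 * During replay, force an ANY_SOURCE receive to match the source that was
 * logged for the current clock. If a matching event is queued for this
 * clock, its logged source is stored in *src and the event is recycled;
 * debug builds also track the highest logged reqid so that a missed event
 * can be detected by the assert below.
 */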
void vprotocol_pessimist_matching_replay(int *src) {
#if OPAL_ENABLE_DEBUG
    vprotocol_pessimist_clock_t max = 0;
#endif
    mca_vprotocol_pessimist_event_t *event;

    /* search for this request in the event list */
    for(event = (mca_vprotocol_pessimist_event_t *) opal_list_get_first(&mca_vprotocol_pessimist.replay_events);
        event != (mca_vprotocol_pessimist_event_t *) opal_list_get_end(&mca_vprotocol_pessimist.replay_events);
        event = (mca_vprotocol_pessimist_event_t *) opal_list_get_next(event))
    {
        vprotocol_pessimist_matching_event_t *mevent;

        if(VPROTOCOL_PESSIMIST_EVENT_TYPE_MATCHING != event->type) continue;
        mevent = &(event->u_event.e_matching);
        if(mevent->reqid == mca_vprotocol_pessimist.clock)
        {
            /* this is the event to replay */
            V_OUTPUT_VERBOSE(70, "pessimist: replay\tmatch\t%"PRIpclock"\trecv is forced from %d", mevent->reqid, mevent->src);
            (*src) = mevent->src;
            opal_list_remove_item(&mca_vprotocol_pessimist.replay_events,
                                  (opal_list_item_t *) event);
            VPESSIMIST_EVENT_RETURN(event);
        }
#if OPAL_ENABLE_DEBUG
        else if(mevent->reqid > max)
            max = mevent->reqid;
    }
    /* not forcing an ANY_SOURCE event whose receive clock is lower than max
     * is a bug indicating we have missed an event during logging! */
    assert(((*src) != MPI_ANY_SOURCE) || (mca_vprotocol_pessimist.clock > max));
#else
    }
#endif
}
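
/**
 * During replay, force deliveries to complete in the logged order. If the
 * delivery event logged for the current clock names one of the supplied
 * requests, that request is waited on and reported as completed; otherwise
 * no completion is reported (*outcount = 0, *index = MPI_UNDEFINED) so that
 * the caller reproduces the logged history.
 */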
void vprotocol_pessimist_delivery_replay(size_t n, ompi_request_t **reqs,
                                         int *outcount, int *index,
                                         ompi_status_public_t *status) {
    mca_vprotocol_pessimist_event_t *event;

    for(event = (mca_vprotocol_pessimist_event_t *) opal_list_get_first(&mca_vprotocol_pessimist.replay_events);
        event != (mca_vprotocol_pessimist_event_t *) opal_list_get_end(&mca_vprotocol_pessimist.replay_events);
        event = (mca_vprotocol_pessimist_event_t *) opal_list_get_next(event))
    {
        vprotocol_pessimist_delivery_event_t *devent;

        if(VPROTOCOL_PESSIMIST_EVENT_TYPE_DELIVERY != event->type) continue;
        devent = &(event->u_event.e_delivery);
        if(devent->probeid < mca_vprotocol_pessimist.clock)
        {
            /* this particular test has to return "no request completed yet" */
            V_OUTPUT_VERBOSE(70, "pessimist:\treplay\tdeliver\t%"PRIpclock"\tnone", mca_vprotocol_pessimist.clock);
            *index = MPI_UNDEFINED;
            *outcount = 0;
            mca_vprotocol_pessimist.clock++;
            /* this event has to stay in the queue until its probeid matches */
            return;
        }
        else if(devent->probeid == mca_vprotocol_pessimist.clock)
        {
            int i;
            for(i = 0; i < (int) n; i++)
            {
                if(VPESSIMIST_FTREQ(reqs[i])->reqid == devent->reqid)
                {
                    V_OUTPUT_VERBOSE(70, "pessimist:\treplay\tdeliver\t%"PRIpclock"\t%"PRIpclock, devent->probeid, devent->reqid);
                    opal_list_remove_item(&mca_vprotocol_pessimist.replay_events,
                                          (opal_list_item_t *) event);
                    VPESSIMIST_EVENT_RETURN(event);
                    *index = i;
                    *outcount = 1;
                    mca_vprotocol_pessimist.clock++;
                    ompi_request_wait(&reqs[i], status);
                    return;
                }
            }
            V_OUTPUT_VERBOSE(70, "pessimist:\treplay\tdeliver\t%"PRIpclock"\tnone", mca_vprotocol_pessimist.clock);
            assert(devent->reqid == 0); /* make sure we did not miss a request */
            *index = MPI_UNDEFINED;
            *outcount = 0;
            mca_vprotocol_pessimist.clock++;
            opal_list_remove_item(&mca_vprotocol_pessimist.replay_events,
                                  (opal_list_item_t *) event);
            VPESSIMIST_EVENT_RETURN(event);
            return;
        }
    }
    V_OUTPUT_VERBOSE(50, "pessimist:\treplay\tdeliver\t%"PRIpclock"\tnot forced", mca_vprotocol_pessimist.clock);
}
|