/*
 * Copyright (c) 2008-2013 Cisco Systems, Inc. All rights reserved.
 * Copyright (c) 2009 Sandia National Laboratories. All rights reserved.
 *
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

/**
 * Note: this file is a little fast-n-loose with OPAL_HAVE_THREADS --
 * it uses this value in run-time "if" conditionals (vs. compile-time
 * #if conditionals). We also don't protect including <pthread.h>.
 * That's because this component currently only compiles on Linux and
 * Solaris, and both of these OS's have pthreads. Using the run-time
 * conditionals gives us better compile-time checking, even of code
 * that isn't activated.
 *
 * Note, too, that the functionality in this file does *not* require
 * all the heavyweight OMPI thread infrastructure (e.g., from
 * --enable-mpi-thread-multiple or --enable-progress-threads). All work that
 * is done in a separate progress thread is very carefully segregated
 * from that of the main thread, and communication back to the main
 * thread is done over a pipe that the main thread monitors with a
 * libevent event.
 */
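
/*
 * A minimal usage sketch (my_cb is a hypothetical callback; its
 * signature is assumed here from how ri_callback.event is invoked
 * below -- the authoritative typedef is in btl_openib_fd.h).  The
 * callback fires in the service thread (or via libevent when
 * OPAL_HAVE_THREADS is false):
 *
 *     static void my_cb(int fd, int flags, void *context) { ... }
 *
 *     opal_btl_openib_fd_init();
 *     opal_btl_openib_fd_monitor(fd, OPAL_EV_READ, my_cb, NULL);
 */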

#include "opal_config.h"

#include <pthread.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>

#include "opal/class/opal_list.h"
#include "opal/mca/event/event.h"
#include "opal/util/output.h"
#include "opal/util/fd.h"
#include "opal/threads/threads.h"

#include "btl_openib_fd.h"
typedef union {
    opal_btl_openib_fd_event_callback_fn_t *event;
    opal_btl_openib_fd_main_callback_fn_t *main;
} callback_u_t;

/*
 * Data for each registered item
 */
typedef struct {
    opal_list_item_t super;
    bool ri_event_used;
    opal_event_t ri_event;
    int ri_fd;
    int ri_flags;
    callback_u_t ri_callback;
    void *ri_context;
} registered_item_t;

static OBJ_CLASS_INSTANCE(registered_item_t, opal_list_item_t, NULL, NULL);

/*
 * Command types
 */
typedef enum {
    /* Read by service thread */
    CMD_TIME_TO_QUIT,
    CMD_ADD_FD,
    CMD_REMOVE_FD,
    ACK_RAN_FUNCTION,

    /* Read by service and main threads */
    CMD_CALL_FUNCTION,
    CMD_MAX
} cmd_type_t;

/*
 * Commands. Fields ordered to avoid memory holes (and valgrind warnings).
 */
typedef struct {
    callback_u_t pc_fn;
    void *pc_context;
    int pc_fd;
    int pc_flags;
    cmd_type_t pc_cmd;
    char end;
} cmd_t;

/*
 * Queued up list of commands to send to the main thread
 */
typedef struct {
    opal_list_item_t super;
    cmd_t cli_cmd;
} cmd_list_item_t;

static OBJ_CLASS_INSTANCE(cmd_list_item_t, opal_list_item_t, NULL, NULL);
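
/* State shared by the threaded and non-threaded code paths */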
static bool initialized = false;
static int cmd_size = 0;
static fd_set read_fds, write_fds;
static int max_fd;
static opal_list_t registered_items;

/* These items are only used in the threaded version */

/* Owned by the main thread */
static pthread_t thread;
static opal_event_t main_thread_event;
static int pipe_to_service_thread[2] = { -1, -1 };

/* Owned by the service thread */
static int pipe_to_main_thread[2] = { -1, -1 };
static const size_t max_outstanding_to_main_thread = 32;
static size_t waiting_for_ack_from_main_thread = 0;
static opal_list_t pending_to_main_thread;

/*
 * Write a command to the main thread, or queue it up if the pipe is full
 */
static int write_to_main_thread(cmd_t *cmd)
{
    /* Note that if we write too much to the main thread pipe and the
       main thread doesn't check it often, we could fill up the pipe
       and cause this thread to block. Bad! So we do some simple
       counting here and ensure that we don't fill the pipe. If we
       are in danger of that, then queue up the commands here in the
       service thread. The main thread will ACK every CALL_FUNCTION
       command, so we have a built-in mechanism to wake up the service
       thread to drain any queued-up commands. */
    if (opal_list_get_size(&pending_to_main_thread) > 0 ||
        waiting_for_ack_from_main_thread >= max_outstanding_to_main_thread) {
        cmd_list_item_t *cli = OBJ_NEW(cmd_list_item_t);
        if (NULL == cli) {
            return OPAL_ERR_OUT_OF_RESOURCE;
        }
        memcpy(&cli->cli_cmd, cmd, cmd_size);
        opal_list_append(&pending_to_main_thread, &(cli->super));
    } else {
        OPAL_OUTPUT((-1, "fd: writing to main thread"));
        opal_fd_write(pipe_to_main_thread[1], cmd_size, cmd);
        ++waiting_for_ack_from_main_thread;
    }

    return OPAL_SUCCESS;
}
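
/*
 * Callback invoked by libevent when a monitored fd is ready (used when
 * the fd was registered directly with libevent rather than with the
 * service thread's select() loop); dispatch to the callback registered
 * for that fd.
 */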
static void service_fd_callback(int fd, short event, void *context)
{
    registered_item_t *ri = (registered_item_t*) context;
    ri->ri_callback.event(fd, event, ri->ri_context);
}


/*
 * Add an fd to the listening set
 */
static int service_pipe_cmd_add_fd(bool use_libevent, cmd_t *cmd)
{
    registered_item_t *ri = OBJ_NEW(registered_item_t);
    if (NULL == ri) {
        return OPAL_ERR_OUT_OF_RESOURCE;
    }
    ri->ri_event_used = false;
    ri->ri_fd = cmd->pc_fd;
    ri->ri_flags = cmd->pc_flags;
    ri->ri_callback.event = cmd->pc_fn.event;
    ri->ri_context = cmd->pc_context;

    if (use_libevent) {
        /* Make an event for this fd */
        ri->ri_event_used = true;
        opal_event_set(opal_event_base, &ri->ri_event, ri->ri_fd,
                       ri->ri_flags | OPAL_EV_PERSIST, service_fd_callback,
                       ri);
        opal_event_add(&ri->ri_event, 0);
    } else {
        /* Add the fd to the relevant fd local sets and update max_fd */
        if (OPAL_EV_READ & ri->ri_flags) {
            FD_SET(ri->ri_fd, &read_fds);
        }
        if (OPAL_EV_WRITE & cmd->pc_flags) {
            FD_SET(ri->ri_fd, &write_fds);
        }
        max_fd = (max_fd > ri->ri_fd) ? max_fd : ri->ri_fd + 1;
    }

    opal_list_append(&registered_items, &ri->super);
    return OPAL_SUCCESS;
}

/*
 * Run a function
 */
static int service_pipe_cmd_call_function(cmd_t *cmd)
{
    cmd_t local_cmd;

    OPAL_OUTPUT((-1, "fd service thread: calling function!"));
    /* Call the function */
    if (NULL != cmd->pc_fn.main) {
        cmd->pc_fn.main(cmd->pc_context);
    }

    /* Now ACK that we ran the function */
    memset(&local_cmd, 0, cmd_size);
    local_cmd.pc_cmd = ACK_RAN_FUNCTION;
    opal_fd_write(pipe_to_main_thread[1], cmd_size, &local_cmd);

    /* Done */
    return OPAL_SUCCESS;
}

/*
 * Remove an fd from the listening set
 */
static int service_pipe_cmd_remove_fd(cmd_t *cmd)
{
    int i;
    opal_list_item_t *item;
    registered_item_t *ri;

    OPAL_OUTPUT((-1, "service thread got unmonitor fd %d", cmd->pc_fd));
    /* Go through the list of registered fd's and find the fd to
       remove */
    for (item = opal_list_get_first(&registered_items);
         item != opal_list_get_end(&registered_items);
         item = opal_list_get_next(item)) {
        ri = (registered_item_t*) item;
        if (cmd->pc_fd == ri->ri_fd) {
            /* Found it. The item knows if it was used as a libevent
               event or an entry in the local fd sets. */
            if (ri->ri_event_used) {
                /* Remove this event from libevent */
                opal_event_del(&ri->ri_event);
            } else {
                /* Remove this item from the fd_sets and recalculate
                   max_fd */
                FD_CLR(cmd->pc_fd, &read_fds);
                FD_CLR(cmd->pc_fd, &write_fds);
                for (max_fd = i = pipe_to_service_thread[0]; i < FD_SETSIZE; ++i) {
                    if (FD_ISSET(i, &read_fds) || FD_ISSET(i, &write_fds)) {
                        max_fd = i + 1;
                    }
                }
            }

            /* Let the caller know that we have stopped monitoring
               this fd (if they care) */
            if (NULL != cmd->pc_fn.event) {
                cmd->pc_fn.event(cmd->pc_fd, 0, cmd->pc_context);
            }

            /* Remove this item from the list of registered items and
               release it */
            opal_list_remove_item(&registered_items, item);
            OBJ_RELEASE(item);
            return OPAL_SUCCESS;
        }
    }

    /* This shouldn't happen */
    return OPAL_ERR_NOT_FOUND;
}

/*
 * Call a function and ACK that we ran it
 */
static int main_pipe_cmd_call_function(cmd_t *cmd)
{
    cmd_t local_cmd;

    OPAL_OUTPUT((-1, "fd main thread: calling function!"));
    /* Call the function */
    if (NULL != cmd->pc_fn.main) {
        cmd->pc_fn.main(cmd->pc_context);
    }

    /* Now ACK that we ran the function */
    memset(&local_cmd, 0, cmd_size);
    local_cmd.pc_cmd = ACK_RAN_FUNCTION;
    opal_fd_write(pipe_to_service_thread[1], cmd_size, &local_cmd);

    /* Done */
    return OPAL_SUCCESS;
}

/*
 * Act on pipe commands
 */
static bool service_pipe_cmd(void)
{
    bool ret = false;
    cmd_t cmd;
    cmd_list_item_t *cli;

    opal_fd_read(pipe_to_service_thread[0], cmd_size, &cmd);
    switch (cmd.pc_cmd) {
    case CMD_ADD_FD:
        OPAL_OUTPUT((-1, "fd service thread: CMD_ADD_FD"));
        if (OPAL_SUCCESS != service_pipe_cmd_add_fd(false, &cmd)) {
            ret = true;
        }
        break;

    case CMD_REMOVE_FD:
        OPAL_OUTPUT((-1, "fd service thread: CMD_REMOVE_FD"));
        if (OPAL_SUCCESS != service_pipe_cmd_remove_fd(&cmd)) {
            ret = true;
        }
        break;

    case CMD_CALL_FUNCTION:
        OPAL_OUTPUT((-1, "fd service thread: CMD_CALL_FUNCTION"));
        if (OPAL_SUCCESS != service_pipe_cmd_call_function(&cmd)) {
            ret = true;
        }
        break;

    case CMD_TIME_TO_QUIT:
        OPAL_OUTPUT((-1, "fd service thread: CMD_TIME_TO_QUIT"));
        ret = true;
        break;

    case ACK_RAN_FUNCTION:
        /* We don't have a guarantee that the main thread will check
           its pipe frequently, so we do some simple counting to
           ensure we just don't have too many outstanding commands to
           the main thread at any given time. The main thread will
           ACK every CALL_FUNCTION command, so this thread will always
           wake up and continue to drain any queued up functions. */
        cli = (cmd_list_item_t*) opal_list_remove_first(&pending_to_main_thread);
        if (NULL != cli) {
            OPAL_OUTPUT((-1, "sending queued up cmd function to main thread"));
            opal_fd_write(pipe_to_main_thread[1], cmd_size, &(cli->cli_cmd));
            OBJ_RELEASE(cli);
        } else {
            --waiting_for_ack_from_main_thread;
        }
        break;

    default:
        OPAL_OUTPUT((-1, "fd service thread: unknown pipe command!"));
        break;
    }

    return ret;
}

/*
 * Service thread logic
 */
static void *service_thread_start(void *context)
{
    int rc, flags;
    fd_set read_fds_copy, write_fds_copy;
    opal_list_item_t *item;
    registered_item_t *ri;

    /* Make an fd set that we can select() on */
    FD_ZERO(&write_fds);
    FD_ZERO(&read_fds);
    FD_SET(pipe_to_service_thread[0], &read_fds);
    max_fd = pipe_to_service_thread[0] + 1;

    OPAL_OUTPUT((-1, "fd service thread running"));

    /* Main loop waiting for commands over the fd's */
    while (1) {
        memcpy(&read_fds_copy, &read_fds, sizeof(read_fds));
        memcpy(&write_fds_copy, &write_fds, sizeof(write_fds));
        OPAL_OUTPUT((-1, "fd service thread blocking on select..."));
        rc = select(max_fd, &read_fds_copy, &write_fds_copy, NULL, NULL);
        if (0 > rc && EAGAIN == errno) {
            continue;
        }

        OPAL_OUTPUT((-1, "fd service thread woke up!"));

        if (0 > rc) {
            if (EBADF == errno) {
                /* We are assuming we lost a socket, so set rc to 1 so we'll
                 * try to read a command off the service pipe to receive a
                 * CMD_REMOVE_FD command (corresponding to the socket that
                 * went away). If the EBADF is from the service pipe then
                 * the error condition will be handled by service_pipe_cmd().
                 */
                OPAL_OUTPUT((-1, "fd service thread: non-EAGAIN from select %d", errno));
                rc = 1;
            }
        }
        if (rc > 0) {
            if (FD_ISSET(pipe_to_service_thread[0], &read_fds_copy)) {
                OPAL_OUTPUT((-1, "fd service thread: pipe command"));
                if (service_pipe_cmd()) {
                    break;
                }
                OPAL_OUTPUT((-1, "fd service thread: back from pipe command"));
                /* Continue to the top of the loop to see if there are more
                 * commands on the pipe. This is done to reset the fds
                 * list just in case the last select incurred an EBADF.
                 * Please do not remove this continue on the assumption that
                 * it merely enforces fairness in reading the sockets;
                 * without it we'll end up with segv's below when select()
                 * incurs an EBADF.
                 */
                continue;
            }

            /* Go through all the registered events and see who had
               activity */
            if (!opal_list_is_empty(&registered_items)) {
                for (item = opal_list_get_first(&registered_items);
                     item != opal_list_get_end(&registered_items);
                     item = opal_list_get_next(item)) {
                    ri = (registered_item_t*) item;
                    flags = 0;

                    /* See if this fd was ready for reading or writing
                       (fd's will only be in the read_fds or write_fds
                       set depending on what they registered for) */
                    if (FD_ISSET(ri->ri_fd, &read_fds_copy)) {
                        flags |= OPAL_EV_READ;
                    }
                    if (FD_ISSET(ri->ri_fd, &write_fds_copy)) {
                        flags |= OPAL_EV_WRITE;
                    }

                    /* If either was ready, invoke the callback */
                    if (0 != flags) {
                        OPAL_OUTPUT((-1, "fd service thread: invoking callback for registered fd %d", ri->ri_fd));
                        ri->ri_callback.event(ri->ri_fd, flags,
                                              ri->ri_context);
                        OPAL_OUTPUT((-1, "fd service thread: back from callback for registered fd %d", ri->ri_fd));
                    }
                }
            }
        }
    }

    /* All done */
    OPAL_OUTPUT((-1, "fd service thread: exiting"));
    opal_atomic_wmb();
    return NULL;
}
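
/*
 * Invoked by libevent in the main thread when the service thread writes
 * a command down pipe_to_main_thread (e.g., a deferred CMD_CALL_FUNCTION
 * request).
 */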
static void main_thread_event_callback(int fd, short event, void *context)
{
    cmd_t cmd;

    OPAL_OUTPUT((-1, "main thread -- reading command"));
    opal_fd_read(pipe_to_main_thread[0], cmd_size, &cmd);
    switch (cmd.pc_cmd) {
    case CMD_CALL_FUNCTION:
        OPAL_OUTPUT((-1, "fd main thread: calling command"));
        main_pipe_cmd_call_function(&cmd);
        break;

    default:
        OPAL_OUTPUT((-1, "fd main thread: unknown pipe command: %d",
                     cmd.pc_cmd));
        break;
    }
}

/******************************************************************
 * Main interface calls
 ******************************************************************/
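
/*
 * Design note: with OPAL_HAVE_THREADS, the calls below hand work to the
 * service thread by writing cmd_t commands down pipe_to_service_thread;
 * without threads, fds are registered directly with libevent and
 * everything runs in the caller's thread.
 */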

/*
 * Initialize
 * Called by main thread
 */
int opal_btl_openib_fd_init(void)
{
    if (!initialized) {
        cmd_t bogus;

        OBJ_CONSTRUCT(&registered_items, opal_list_t);

        /* Calculate the real size of the cmd struct */
        cmd_size = (int) (&(bogus.end) - ((char*) &bogus));

        if (OPAL_HAVE_THREADS) {
            OBJ_CONSTRUCT(&pending_to_main_thread, opal_list_t);

            /* Create pipes to communicate between the two threads */
            if (0 != pipe(pipe_to_service_thread)) {
                return OPAL_ERR_IN_ERRNO;
            }
            if (0 != pipe(pipe_to_main_thread)) {
                return OPAL_ERR_IN_ERRNO;
            }

            /* Create a libevent event that is used in the main thread
               to watch its pipe */
            opal_event_set(opal_event_base, &main_thread_event, pipe_to_main_thread[0],
                           OPAL_EV_READ | OPAL_EV_PERSIST,
                           main_thread_event_callback, NULL);
            opal_event_add(&main_thread_event, 0);

            /* Start the service thread */
            if (0 != pthread_create(&thread, NULL, service_thread_start,
                                    NULL)) {
                int errno_save = errno;
                opal_event_del(&main_thread_event);
                close(pipe_to_service_thread[0]);
                close(pipe_to_service_thread[1]);
                close(pipe_to_main_thread[0]);
                close(pipe_to_main_thread[1]);
                errno = errno_save;
                return OPAL_ERR_IN_ERRNO;
            }
        }

        initialized = true;
    }
    return OPAL_SUCCESS;
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Start monitoring an fd
|
2008-10-06 04:46:02 +04:00
|
|
|
* Called by main or service thread; callback will be in service thread
|
2008-05-02 15:52:33 +04:00
|
|
|
*/
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 04:47:28 +04:00
|
|
|
int opal_btl_openib_fd_monitor(int fd, int flags,
                               opal_btl_openib_fd_event_callback_fn_t *callback,
                               void *context)
{
    cmd_t cmd;

    /* Sanity check */
    if (fd < 0 || 0 == flags || NULL == callback) {
        return OPAL_ERR_BAD_PARAM;
    }

    cmd.pc_cmd = CMD_ADD_FD;
    cmd.pc_fd = fd;
    cmd.pc_flags = flags;
    cmd.pc_fn.event = callback;
    cmd.pc_context = context;
    if (OPAL_HAVE_THREADS) {
        /* For the threaded version, write a command down the pipe */
        OPAL_OUTPUT((-1, "main thread sending monitor fd %d", fd));
        opal_fd_write(pipe_to_service_thread[1], cmd_size, &cmd);
    } else {
        /* Otherwise, add it directly */
        service_pipe_cmd_add_fd(true, &cmd);
    }

    return OPAL_SUCCESS;
}
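
/*
 * Usage sketch (hypothetical; the real callers live elsewhere in this
 * BTL, and the names "async_event_cb" and "device" are illustrative
 * only).  Assuming the flags are the usual OPAL_EV_* event flags, a
 * caller could ask the service thread to watch the verbs asynchronous
 * event fd like this:
 *
 *   opal_btl_openib_fd_monitor(device->ib_dev_context->async_fd,
 *                              OPAL_EV_READ, async_event_cb, device);
 *
 * where async_event_cb matches opal_btl_openib_fd_event_callback_fn_t
 * and is invoked in the service thread, not the main thread.
 */
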
/*
 * Stop monitoring an fd
 * Called by main or service thread; callback will be in service thread
 */
int opal_btl_openib_fd_unmonitor(int fd,
                                 opal_btl_openib_fd_event_callback_fn_t *callback,
                                 void *context)
{
    cmd_t cmd;

    /* Sanity check */
    if (fd < 0) {
        return OPAL_ERR_BAD_PARAM;
    }

    cmd.pc_cmd = CMD_REMOVE_FD;
    cmd.pc_fd = fd;
    cmd.pc_flags = 0;
    cmd.pc_fn.event = callback;
    cmd.pc_context = context;
    if (OPAL_HAVE_THREADS) {
        /* For the threaded version, write a command down the pipe */
        OPAL_OUTPUT((-1, "main thread sending unmonitor fd %d", fd));
        opal_fd_write(pipe_to_service_thread[1], cmd_size, &cmd);
    } else {
        /* Otherwise, remove it directly */
        service_pipe_cmd_remove_fd(&cmd);
    }

    return OPAL_SUCCESS;
}
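
/*
 * Usage sketch (hypothetical; names are illustrative).  Because the
 * removal happens asynchronously in the service thread, a caller that
 * wants to close the fd afterwards can pass a callback and defer the
 * close() until that callback fires in the service thread:
 *
 *   opal_btl_openib_fd_unmonitor(device->ib_dev_context->async_fd,
 *                                async_fd_removed_cb, device);
 */
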
/*
 * Run a function in the service thread
 * Called by main thread; callback will be in service thread
 */
int opal_btl_openib_fd_run_in_service(opal_btl_openib_fd_main_callback_fn_t *callback,
                                      void *context)
{
    cmd_t cmd;

    cmd.pc_cmd = CMD_CALL_FUNCTION;
    cmd.pc_fd = -1;
    cmd.pc_flags = 0;
    cmd.pc_fn.main = callback;
    cmd.pc_context = context;
    if (OPAL_HAVE_THREADS) {
        /* For the threaded version, write a command down the pipe */
        OPAL_OUTPUT((-1, "main thread sending 'run in service'"));
        opal_fd_write(pipe_to_service_thread[1], cmd_size, &cmd);
    } else {
        /* Otherwise, run it directly */
        callback(context);
    }

    return OPAL_SUCCESS;
}
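
/*
 * Usage sketch (hypothetical; "do_slow_work" is an illustrative name).
 * A caller on the main thread can hand a function off to the service
 * thread, e.g. to avoid blocking the main progress loop:
 *
 *   opal_btl_openib_fd_run_in_service(do_slow_work, some_context);
 *
 * do_slow_work matches opal_btl_openib_fd_main_callback_fn_t and is
 * invoked with some_context from the service thread.
 */
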
/*
 * Run a function in the main thread
 * Called by service thread
 */
int opal_btl_openib_fd_run_in_main(opal_btl_openib_fd_main_callback_fn_t *callback,
                                   void *context)
{
    if (OPAL_HAVE_THREADS) {
        cmd_t cmd;

        OPAL_OUTPUT((-1, "run in main -- sending command"));
        /* For the threaded version, write a command down the pipe */
        cmd.pc_cmd = CMD_CALL_FUNCTION;
        cmd.pc_fd = -1;
        cmd.pc_flags = 0;
        cmd.pc_fn.main = callback;
        cmd.pc_context = context;
        write_to_main_thread(&cmd);
    } else {
        /* Otherwise, call it directly */
        OPAL_OUTPUT((-1, "run in main -- calling now!"));
        callback(context);
    }

    return OPAL_SUCCESS;
}
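
/*
 * Usage sketch (hypothetical; "finish_in_main" is an illustrative
 * name).  A service-thread event callback that must touch state owned
 * by the main thread can bounce the rest of its work back:
 *
 *   opal_btl_openib_fd_run_in_main(finish_in_main, context);
 *
 * finish_in_main is then invoked in the main thread once the main
 * thread services its command pipe.
 */
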
/*
 * Drain any pending command destined for the main thread
 * Called by main thread
 */
int
opal_btl_openib_fd_main_thread_drain(void)
{
    int nfds, ret;
    fd_set rfds;
    struct timeval tv;

    while (1) {
        FD_ZERO(&rfds);
        FD_SET(pipe_to_main_thread[0], &rfds);
        nfds = pipe_to_main_thread[0] + 1;

        tv.tv_sec = 0;
        tv.tv_usec = 0;

        ret = select(nfds, &rfds, NULL, NULL, &tv);
        if (ret > 0) {
            main_thread_event_callback(pipe_to_main_thread[0], 0, NULL);
            return 0;
        } else {
            return ret;
        }
    }
}
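
/*
 * Note on the drain function above: the zero-timeout select() makes it
 * a single non-blocking poll of the main-thread pipe.  It returns 0
 * both when pending work was dispatched and when nothing was pending,
 * and returns select()'s -1 (with errno set) on error, so the return
 * value only distinguishes errors.
 */
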
/*
 * Finalize
 * Called by main thread
 */
int opal_btl_openib_fd_finalize(void)
{
    if (initialized) {
        if (OPAL_HAVE_THREADS) {
            /* For the threaded version, send a command down the pipe */
            cmd_t cmd;
            OPAL_OUTPUT((-1, "shutting down openib fd"));
            /* Check if the thread exists before asking it to quit */
            if (ESRCH != pthread_kill(thread, 0)) {
                memset(&cmd, 0, cmd_size);
                cmd.pc_cmd = CMD_TIME_TO_QUIT;
                if (OPAL_SUCCESS != opal_fd_write(pipe_to_service_thread[1],
                                                  cmd_size, &cmd)) {
                    /* We cancel the thread if there's an error
                     * sending the "quit" cmd.  This only ever happens on
                     * a "restart" which could result in dangling
                     * fds.  OMPI must not rely on the checkpointer to
                     * save/restore any fds or connections.
                     */
                    pthread_cancel(thread);
                }

                pthread_join(thread, NULL);
                opal_atomic_rmb();
            }

            opal_event_del(&main_thread_event);

            close(pipe_to_service_thread[0]);
            close(pipe_to_service_thread[1]);
            close(pipe_to_main_thread[0]);
            close(pipe_to_main_thread[1]);
            OBJ_DESTRUCT(&pending_to_main_thread);
        }
        OBJ_DESTRUCT(&registered_items);
    }
    initialized = false;

    return OPAL_SUCCESS;
}
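
/*
 * Usage sketch (hypothetical): teardown is expected to mirror setup in
 * the component close path, with fds unmonitored before the service
 * thread is told to quit, e.g.:
 *
 *   opal_btl_openib_fd_unmonitor(fd, NULL, NULL);
 *   ...
 *   opal_btl_openib_fd_finalize();
 */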
|