/* -*- Mode: C; c-basic-offset:4 ; indent-tabs-mode:nil -*- */
/*
 * Copyright (c) 2004-2008 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation. All rights reserved.
 * Copyright (c) 2004-2011 The University of Tennessee and The University
 *                         of Tennessee Research Foundation. All rights
 *                         reserved.
 * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart. All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * Copyright (c) 2006      Sandia National Laboratories. All rights
 *                         reserved.
 * Copyright (c) 2008-2014 Cisco Systems, Inc. All rights reserved.
 * Copyright (c) 2012-2014 Los Alamos National Security, LLC. All rights
 *                         reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

/*
 * General notes:
 *
 * - OB1 handles out of order receives
 * - OB1 does NOT handle duplicate receives well (it probably does for
 *   MATCH tags, but for non-MATCH tags, it doesn't have enough info
 *   to know when duplicates are received), so we have to ensure not
 *   to pass duplicates up to the PML.
 */

#include "opal_config.h"

#include <string.h>
#include <ctype.h>
#include <errno.h>
#include <infiniband/verbs.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

#include "opal_stdint.h"
#include "opal/prefetch.h"
#include "opal/mca/timer/base/base.h"
#include "opal/util/argv.h"
#include "opal/util/net.h"
#include "opal/util/if.h"
#include "opal/mca/base/mca_base_var.h"
#include "opal/mca/memchecker/base/base.h"
#include "opal/util/show_help.h"
#include "opal/constants.h"
#include "opal/mca/btl/btl.h"
#include "opal/mca/btl/base/base.h"
#include "opal/util/proc.h"
#include "opal/mca/common/verbs/common_verbs.h"

#include "btl_usnic.h"
#include "btl_usnic_connectivity.h"
#include "btl_usnic_frag.h"
#include "btl_usnic_endpoint.h"
#include "btl_usnic_module.h"
#include "btl_usnic_stats.h"
#include "btl_usnic_util.h"
#include "btl_usnic_ack.h"
#include "btl_usnic_send.h"
#include "btl_usnic_recv.h"
#include "btl_usnic_proc.h"
#include "btl_usnic_ext.h"
#include "btl_usnic_test.h"

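/* Presumably the maximum number of work completions reaped per polling
   pass (an inference from the constant's name; it is not documented here) */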
#define OPAL_BTL_USNIC_NUM_WC 500
#define max(a,b) ((a) > (b) ? (a) : (b))


/* RNG buffer definition */
opal_rng_buff_t opal_btl_usnic_rand_buff;

/* simulated clock */
uint64_t opal_btl_usnic_ticks = 0;

static opal_event_t usnic_clock_timer_event;
static bool usnic_clock_timer_event_set = false;
static struct timeval usnic_clock_timeout;

/* set to true in a debugger to enable even more verbose output when calling
 * opal_btl_usnic_component_debug */
static volatile bool dump_bitvectors = false;

static int usnic_component_open(void);
static int usnic_component_close(void);
static mca_btl_base_module_t **
usnic_component_init(int* num_btl_modules, bool want_progress_threads,
                     bool want_mpi_threads);
static int usnic_component_progress(void);
static int init_module_from_port(opal_btl_usnic_module_t *module,
                                 opal_common_verbs_port_item_t *port);


/* Types for filtering interfaces */
typedef struct filter_elt_t {
    bool is_netmask;

    /* valid iff is_netmask==false */
    char *if_name;

    /* valid iff is_netmask==true */
    uint32_t addr;      /* in network byte order */
    uint32_t prefixlen;
} filter_elt_t;

typedef struct usnic_if_filter_t {
    int n_elt;
    filter_elt_t *elts;
} usnic_if_filter_t;
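
/*
 * Filter usage (assumed, based on the fields above): each element of the
 * if_include / if_exclude lists is either a device name (if_name,
 * is_netmask==false) or a CIDR-style network such as "10.10.0.0/16"
 * (addr/prefixlen, is_netmask==true); parse_ifex_str() builds a
 * usnic_if_filter_t from such a comma-separated string.
 */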
static bool filter_module(opal_btl_usnic_module_t *module,
                          usnic_if_filter_t *filter,
                          bool filter_incl);
static usnic_if_filter_t *parse_ifex_str(const char *orig_str,
                                         const char *name);
static void free_filter(usnic_if_filter_t *filter);


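/*
 * The usnic BTL component descriptor; the MCA framework locates this
 * symbol and drives the component through the function pointers below.
 */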
opal_btl_usnic_component_t mca_btl_usnic_component = {
    {
        /* First, the mca_base_component_t struct containing meta information
           about the component itself */
        .btl_version = {
            MCA_BTL_DEFAULT_VERSION("usnic"),
            .mca_open_component = usnic_component_open,
            .mca_close_component = usnic_component_close,
            .mca_register_component_params = opal_btl_usnic_component_register,
        },
        .btl_data = {
            /* The component is not checkpoint ready */
            .param_field = MCA_BASE_METADATA_PARAM_NONE
        },

        .btl_init = usnic_component_init,
        .btl_progress = usnic_component_progress,
    }
};


/*
 * Called by MCA framework to open the component
 */
static int usnic_component_open(void)
{
    /* initialize state */
    mca_btl_usnic_component.num_modules = 0;
    mca_btl_usnic_component.usnic_all_modules = NULL;
    mca_btl_usnic_component.usnic_active_modules = NULL;

    /* In this version, the USNIC stack does not support having more
     * than one GID. So just hard-wire this value to 0. */
    mca_btl_usnic_component.gid_index = 0;

    /* initialize objects */
    OBJ_CONSTRUCT(&mca_btl_usnic_component.usnic_procs, opal_list_t);

    /* Sanity check: if_include and if_exclude need to be mutually
       exclusive */
    if (OPAL_SUCCESS !=
        mca_base_var_check_exclusive("opal",
            mca_btl_usnic_component.super.btl_version.mca_type_name,
            mca_btl_usnic_component.super.btl_version.mca_component_name,
            "if_include",
            mca_btl_usnic_component.super.btl_version.mca_type_name,
            mca_btl_usnic_component.super.btl_version.mca_component_name,
            "if_exclude")) {
        /* Return ERR_NOT_AVAILABLE so that a warning message about
           "open" failing is not printed */
        return OPAL_ERR_NOT_AVAILABLE;
    }

    return OPAL_SUCCESS;
}


/*
 * Component cleanup
 */
static int usnic_component_close(void)
{
    /* Note that this list should already be empty, because:
       - module.finalize() is invoked before component.close()
       - module.finalize() RELEASEs each proc that it was using
       - this should drive down the ref count on procs to 0
       - procs remove themselves from the component.usnic_procs list
         in their destructor */
    OBJ_DESTRUCT(&mca_btl_usnic_component.usnic_procs);

    if (usnic_clock_timer_event_set) {
        opal_event_del(&usnic_clock_timer_event);
        usnic_clock_timer_event_set = false;
    }

    /* Finalize the connectivity client and agent */
    if (mca_btl_usnic_component.connectivity_enabled) {
        opal_btl_usnic_connectivity_client_finalize();
        opal_btl_usnic_connectivity_agent_finalize();
    }

    free(mca_btl_usnic_component.usnic_all_modules);
    free(mca_btl_usnic_component.usnic_active_modules);

    opal_btl_usnic_rtnl_sk_free(mca_btl_usnic_component.unlsk);

#if OPAL_BTL_USNIC_UNIT_TESTS
    /* clean up the unit test infrastructure */
    opal_btl_usnic_cleanup_tests();
#endif

    return OPAL_SUCCESS;
}


/*
 * Register UD address information. The MCA framework will make this
 * available to all peers.
 */
static int usnic_modex_send(void)
{
    int rc;
    size_t i;
    size_t size;
    opal_btl_usnic_addr_t* addrs = NULL;

    size = mca_btl_usnic_component.num_modules *
        sizeof(opal_btl_usnic_addr_t);
    if (size != 0) {
        addrs = (opal_btl_usnic_addr_t*) malloc(size);
        if (NULL == addrs) {
            return OPAL_ERR_OUT_OF_RESOURCE;
        }

        for (i = 0; i < mca_btl_usnic_component.num_modules; i++) {
            opal_btl_usnic_module_t* module =
                mca_btl_usnic_component.usnic_active_modules[i];
            addrs[i] = module->local_addr;
            opal_output_verbose(5, USNIC_OUT,
                                "btl:usnic: modex_send DQP:%d, CQP:%d, subnet = 0x%016" PRIx64 " interface =0x%016" PRIx64,
                                addrs[i].qp_num[USNIC_DATA_CHANNEL],
                                addrs[i].qp_num[USNIC_PRIORITY_CHANNEL],
                                ntoh64(addrs[i].gid.global.subnet_prefix),
                                ntoh64(addrs[i].gid.global.interface_id));
        }
    }

    rc = opal_modex_send(&mca_btl_usnic_component.super.btl_version,
                         addrs, size);
    if (NULL != addrs) {
        free(addrs);
    }
    return rc;
}


/*
|
|
|
|
* See if our memlock limit is >64K. 64K is the RHEL default memlock
|
|
|
|
* limit; this check is a first-line-of-defense hueristic to see if
|
|
|
|
* the user has set the memlock limit to *something*.
|
|
|
|
*
|
|
|
|
* We have other checks elsewhere (e.g., to ensure that QPs are able
|
|
|
|
* to be allocated -- which also require registered memory -- and to
|
|
|
|
* ensure that receive buffers can be registered, etc.), but this is a
|
|
|
|
* good first check to ensure that a default OS case is satisfied.
|
|
|
|
*/
|
|
|
|
static int check_reg_mem_basics(void)
|
|
|
|
{
|
|
|
|
#if HAVE_DECL_RLIMIT_MEMLOCK
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 04:47:28 +04:00
|
|
|
int ret = OPAL_SUCCESS;
|
2013-07-20 02:13:58 +04:00
|
|
|
struct rlimit limit;
|
|
|
|
char *str_limit = NULL;
|
|
|
|
|
|
|
|
ret = getrlimit(RLIMIT_MEMLOCK, &limit);
|
|
|
|
if (0 == ret) {
|
|
|
|
if ((long) limit.rlim_cur > (64 * 1024) ||
|
|
|
|
limit.rlim_cur == RLIM_INFINITY) {
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 04:47:28 +04:00
|
|
|
return OPAL_SUCCESS;
|
2013-07-20 02:13:58 +04:00
|
|
|
} else {
|
|
|
|
asprintf(&str_limit, "%ld", (long)limit.rlim_cur);
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
asprintf(&str_limit, "Unknown");
|
|
|
|
}
|
|
|
|
|
2013-07-22 21:28:23 +04:00
|
|
|
opal_show_help("help-mpi-btl-usnic.txt", "check_reg_mem_basics fail",
|
2013-07-20 02:13:58 +04:00
|
|
|
true,
|
2014-07-27 01:48:23 +04:00
|
|
|
opal_process_info.nodename,
|
2013-07-20 02:13:58 +04:00
|
|
|
str_limit);

    return OPAL_ERR_OUT_OF_RESOURCE;
#else
    /* If we don't have RLIMIT_MEMLOCK, then just bypass this
       safety/heuristic check. */
    return OPAL_SUCCESS;
#endif
}
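
/* Read a single integer attribute of this module's usNIC device from
   sysfs (i.e., /sys/class/infiniband/<device>/<name>).  Returns the
   parsed value on success, or -1 if the file cannot be opened or read. */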
static int read_device_sysfs(opal_btl_usnic_module_t *module, const char *name)
{
    int ret, fd;
    char filename[OPAL_PATH_MAX], line[256];

    snprintf(filename, sizeof(filename), "/sys/class/infiniband/%s/%s",
             ibv_get_device_name(module->device), name);
    fd = open(filename, O_RDONLY);
    if (fd < 0) {
        return -1;
    }

    /* Leave room to NUL-terminate what we read so that atoi() never
       scans past the bytes actually returned by the kernel */
    ret = read(fd, line, sizeof(line) - 1);
    close(fd);
    if (ret < 0) {
        return -1;
    }
    line[ret] = '\0';

    return atoi(line);
}
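
/* Check that this usNIC device exposes enough VFs, QPs, and CQs for
   every MPI process on this server.  On failure, emits a show_help
   message and returns OPAL_ERROR; otherwise returns OPAL_SUCCESS. */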
static int check_usnic_config(struct ibv_device_attr *device_attr,
                              opal_btl_usnic_module_t *module,
                              int num_local_procs)
{
    char str[128];
    int num_vfs, qp_per_vf, cq_per_vf;

    /* usNIC allocates QPs as a combination of PCI virtual functions
       (VFs) and resources inside those VFs.  Ensure that:

       1. num_vfs (i.e., "usNICs") >= num_local_procs (to ensure that
          each MPI process will be able to have its own protection
          domain), and
       2. num_vfs * num_qps_per_vf >= num_local_procs * NUM_CHANNELS
          (to ensure that each MPI process will be able to get the
          number of QPs it needs -- we know that every VF will have
          the same number of QPs), and
       3. num_vfs * num_cqs_per_vf >= num_local_procs * NUM_CHANNELS
          (to ensure that each MPI process will be able to get the
          number of CQs that it needs) */
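
    /* A concrete (purely illustrative) example: with max_vf=16,
       qp_per_vf=6, cq_per_vf=6, and 2 channels per process, check (1)
       above allows up to 16 local processes, while checks (2) and (3)
       allow (16*6)/2 = 48, so the VF count is the binding constraint. */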
    num_vfs = read_device_sysfs(module, "max_vf");
    qp_per_vf = read_device_sysfs(module, "qp_per_vf");
    cq_per_vf = read_device_sysfs(module, "cq_per_vf");
    if (num_vfs < 0 || qp_per_vf < 0 || cq_per_vf < 0) {
        snprintf(str, sizeof(str), "Cannot read usNIC Linux verbs resources");
        goto error;
    }

    if (num_vfs < num_local_procs) {
        snprintf(str, sizeof(str), "Not enough usNICs (found %d, need %d)",
                 num_vfs, num_local_procs);
        goto error;
    }

    if (num_vfs * qp_per_vf < num_local_procs * USNIC_NUM_CHANNELS) {
        snprintf(str, sizeof(str), "Not enough WQ/RQ (found %d, need %d)",
                 num_vfs * qp_per_vf,
                 num_local_procs * USNIC_NUM_CHANNELS);
        goto error;
    }
    if (num_vfs * cq_per_vf < num_local_procs * USNIC_NUM_CHANNELS) {
        snprintf(str, sizeof(str), "Not enough CQ per usNIC (found %d, need %d)",
                 num_vfs * cq_per_vf,
                 num_local_procs * USNIC_NUM_CHANNELS);
        goto error;
    }

    /* All is good! */
    return OPAL_SUCCESS;

error:
    /* Sad panda */
    opal_show_help("help-mpi-btl-usnic.txt",
                   "not enough usnic resources",
                   true,
                   opal_process_info.nodename,
                   ibv_get_device_name(module->device),
                   str);
    return OPAL_ERROR;
}
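
/* Callback for the periodic clock timer (the timer itself is armed
   elsewhere in this component; the "1ms" comment below suggests a 1ms
   period).  Each invocation advances the component's software clock,
   runs the progress engine so the new time is noticed, and re-adds the
   event so the timer keeps firing. */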
static void
usnic_clock_callback(int fd, short flags, void *timeout)
{
    /* 1ms == 1,000,000 ns */
    opal_btl_usnic_ticks += 1000000;

    /* run progress to make sure time change gets noticed */
    usnic_component_progress();

    opal_event_add(&usnic_clock_timer_event, timeout);
}

/*
 * UD component initialization:
 * (1) read interface list from kernel and compare against component
 *     parameters then create a BTL instance for selected interfaces
 * (2) post OOB receive for incoming connection attempts
 * (3) register BTL parameters with the MCA
 */
static mca_btl_base_module_t** usnic_component_init(int* num_btl_modules,
                                                    bool want_progress_threads,
                                                    bool want_mpi_threads)
{
    mca_btl_base_module_t **btls = NULL;
    uint32_t i, num_final_modules;
    opal_btl_usnic_module_t *module;
    opal_list_item_t *item;
    opal_list_t *port_list;
    opal_common_verbs_port_item_t *port;
    struct ibv_device_attr device_attr;
    usnic_if_filter_t *filter;
    bool keep_module;
    bool filter_incl = false;
    int min_distance, num_local_procs, err;

    *num_btl_modules = 0;

    /* Currently refuse to run if MPI_THREAD_MULTIPLE is enabled */
    if (opal_using_threads() && !mca_btl_base_thread_multiple_override) {
        opal_output_verbose(5, USNIC_OUT,
                            "btl:usnic: MPI_THREAD_MULTIPLE not supported; skipping this component");
        return NULL;
    }

    /* Per https://svn.open-mpi.org/trac/ompi/ticket/1305, check to
       see if $sysfsdir/class/infiniband exists.  If it does not,
       assume that the RDMA hardware drivers are not loaded, and
       therefore we don't want OpenFabrics verbs support in this OMPI
       job.  No need to print a warning. */
    if (!opal_common_verbs_check_basics()) {
        return NULL;
    }

    /* Do a quick sanity check to ensure that we can lock memory (which
       is required for verbs registered memory). */
    if (OPAL_SUCCESS != check_reg_mem_basics()) {
        return NULL;
    }

    /************************************************************************
     * Below this line, we assume that usnic is loaded on all procs,
     * and therefore we will guarantee to do the modex send, even if
     * we fail.
     ************************************************************************/

    /* initialization */
    mca_btl_usnic_component.my_hashed_rte_name =
        opal_proc_local_get()->proc_name;
    MSGDEBUG1_OUT("%s: my_hashed_rte_name=0x%" PRIx64,
                  __func__, mca_btl_usnic_component.my_hashed_rte_name);

    opal_srand(&opal_btl_usnic_rand_buff, ((uint32_t) getpid()));

    err = opal_btl_usnic_rtnl_sk_alloc(&mca_btl_usnic_component.unlsk);
    if (0 != err) {
        /* API returns negative errno values */
        opal_show_help("help-mpi-btl-usnic.txt", "rtnetlink init fail",
                       true, opal_process_info.nodename, strerror(-err));
        return NULL;
    }

    /* Find the ports that we want to use.  We do our own interface name
     * filtering below, so don't let the verbs code see our
     * if_include/if_exclude strings */
    port_list = opal_common_verbs_find_ports(NULL, NULL,
                                             OPAL_COMMON_VERBS_FLAGS_TRANSPORT_USNIC_UDP,
                                             USNIC_OUT);
    if (NULL == port_list) {
        OPAL_ERROR_LOG(OPAL_ERR_OUT_OF_RESOURCE);
        goto free_include_list;
    } else if (opal_list_get_size(port_list) > 0) {
        mca_btl_usnic_component.use_udp = true;
        opal_output_verbose(5, USNIC_OUT, "btl:usnic: using UDP transport");
    } else {
        OBJ_RELEASE(port_list);

        /* If we got no USNIC_UDP transport devices, try again with
           USNIC */
        port_list = opal_common_verbs_find_ports(NULL, NULL,
                                                 OPAL_COMMON_VERBS_FLAGS_TRANSPORT_USNIC,
                                                 USNIC_OUT);
        if (NULL == port_list) {
            OPAL_ERROR_LOG(OPAL_ERR_OUT_OF_RESOURCE);
            goto free_include_list;
        } else if (opal_list_get_size(port_list) > 0) {
            mca_btl_usnic_component.use_udp = false;
            opal_output_verbose(5, USNIC_OUT,
                                "btl:usnic: using L2-only transport");
        } else {
            /* There are no usNICs, so bail... */
            opal_output_verbose(5, USNIC_OUT,
                                "btl:usnic: no usNICs found");
            goto free_include_list;
        }
    }

    /* Setup the connectivity checking agent and client. */
    if (mca_btl_usnic_component.connectivity_enabled) {
        if (OPAL_SUCCESS != opal_btl_usnic_connectivity_agent_init() ||
            OPAL_SUCCESS != opal_btl_usnic_connectivity_client_init()) {
            return NULL;
        }
    }

    /* Initialize the table of usnic extension function pointers */
    item = opal_list_get_first(port_list);
    port = (opal_common_verbs_port_item_t*) item;
    opal_btl_usnic_ext_init(port->device->context);

    /* Setup an array of pointers to point to each module (which we'll
       return upstream) */
    mca_btl_usnic_component.num_modules = opal_list_get_size(port_list);
    btls = (struct mca_btl_base_module_t**)
        malloc(mca_btl_usnic_component.num_modules *
               sizeof(opal_btl_usnic_module_t*));
    if (NULL == btls) {
        OPAL_ERROR_LOG(OPAL_ERR_OUT_OF_RESOURCE);
        goto free_include_list;
    }

    /* Allocate space for btl module instances */
    mca_btl_usnic_component.usnic_all_modules =
        calloc(mca_btl_usnic_component.num_modules,
               sizeof(*mca_btl_usnic_component.usnic_all_modules));
    mca_btl_usnic_component.usnic_active_modules =
        calloc(mca_btl_usnic_component.num_modules,
               sizeof(*mca_btl_usnic_component.usnic_active_modules));
    if (NULL == mca_btl_usnic_component.usnic_all_modules ||
        NULL == mca_btl_usnic_component.usnic_active_modules) {
        OPAL_ERROR_LOG(OPAL_ERR_OUT_OF_RESOURCE);
        goto error;
    }

    /* If we have an include or exclude list, parse it and set up now
     * (higher level guarantees there will not be both include and exclude,
     * so don't bother checking that here)
     */
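    /* (The include/exclude values are typically comma-separated lists of
       usNIC device names and/or CIDR subnets, e.g. "usnic_0,10.10.0.0/16"
       -- illustrative values only; see parse_ifex_str() for the syntax
       actually accepted.) */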
    if (NULL != mca_btl_usnic_component.if_include) {
        opal_output_verbose(20, USNIC_OUT,
                            "btl:usnic:filter_module: if_include=%s",
                            mca_btl_usnic_component.if_include);

        filter_incl = true;
        filter = parse_ifex_str(mca_btl_usnic_component.if_include, "include");
    } else if (NULL != mca_btl_usnic_component.if_exclude) {
        opal_output_verbose(20, USNIC_OUT,
                            "btl:usnic:filter_module: if_exclude=%s",
                            mca_btl_usnic_component.if_exclude);

        filter_incl = false;
        filter = parse_ifex_str(mca_btl_usnic_component.if_exclude, "exclude");
    } else {
        filter = NULL;
    }

    num_local_procs = opal_process_info.num_local_peers;

    /* Go through the list of ports and determine if we want it or
       not.  Create and (mostly) fill a module struct for each port
       that we want. */
    for (i = 0, item = opal_list_get_first(port_list);
         item != opal_list_get_end(port_list) &&
             (0 == mca_btl_usnic_component.max_modules ||
              i < mca_btl_usnic_component.max_modules);
         item = opal_list_get_next(item)) {
        port = (opal_common_verbs_port_item_t*) item;

        opal_output_verbose(5, USNIC_OUT,
                            "btl:usnic: found: device %s, port %d",
                            port->device->device_name, port->port_num);

        /* Fill in a bunch of the module struct */
        module = &(mca_btl_usnic_component.usnic_all_modules[i]);
        if (OPAL_SUCCESS != init_module_from_port(module, port)) {
            --mca_btl_usnic_component.num_modules;
            continue; /* next port */
        }

        /* respect if_include/if_exclude subnets/ifaces from the user */
        if (filter != NULL) {
            keep_module = filter_module(module, filter, filter_incl);
            opal_output_verbose(5, USNIC_OUT,
                                "btl:usnic: %s module %s due to %s",
                                (keep_module ? "keeping" : "skipping"),
                                ibv_get_device_name(module->device),
                                (filter_incl ? "if_include" : "if_exclude"));
            if (!keep_module) {
                --mca_btl_usnic_component.num_modules;
                continue; /* next port */
            }
        }

        /* Query this device */
        if (0 != ibv_query_device(module->device_context, &device_attr)) {
            opal_show_help("help-mpi-btl-usnic.txt", "ibv API failed",
                           true,
                           opal_process_info.nodename,
                           ibv_get_device_name(module->device),
                           module->if_name,
                           "ibv_query_device", __FILE__, __LINE__,
                           "Failed to query usNIC device; is the usnic_verbs Linux kernel module loaded?");
            --mca_btl_usnic_component.num_modules;
            continue;
        }
        opal_memchecker_base_mem_defined(&device_attr, sizeof(device_attr));

        /* Check some usNIC configuration minimum settings */
        if (check_usnic_config(&device_attr, module,
                               num_local_procs) != OPAL_SUCCESS) {
            --mca_btl_usnic_component.num_modules;
            continue;
        }

        /* Tell this device's context that we are aware that we need
           to request the UD header length.  If it fails, just skip
           this device. */
        if (NULL != opal_btl_usnic_ext.enable_udp) {
            opal_output_verbose(5, USNIC_OUT,
                                "btl:usnic: enabling UDP support for %s",
                                ibv_get_device_name(module->device));
            if (0 !=
                opal_btl_usnic_ext.enable_udp(port->device->context)) {
                --mca_btl_usnic_component.num_modules;
                opal_output_verbose(5, USNIC_OUT,
                                    "btl:usnic: UDP support unexpectedly failed for %s; ignoring this device",
                                    ibv_get_device_name(module->device));
                continue;
            }

            int len =
                opal_btl_usnic_ext.get_ud_header_len(port->device->context,
                                                     port->port_num);
            /* Sanity check: the len we get back should be 42.  If
               it's not, skip this device. */
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 04:47:28 +04:00
|
|
|
if (OPAL_BTL_USNIC_UDP_HDR_SZ != len) {
|
2014-03-04 01:31:42 +04:00
|
|
|
opal_output_verbose(5, USNIC_OUT,
|
|
|
|
"btl:usnic: unexpected UD header length for %s reported by extension (%d); ignoring this device",
|
|
|
|
ibv_get_device_name(module->device),
|
|
|
|
len);
|
|
|
|
--mca_btl_usnic_component.num_modules;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
}

        /* How many xQ entries do we want? */
        if (-1 == mca_btl_usnic_component.sd_num) {
            module->sd_num = device_attr.max_qp_wr;
        } else {
            module->sd_num = mca_btl_usnic_component.sd_num;
        }
        if (-1 == mca_btl_usnic_component.rd_num) {
            module->rd_num = device_attr.max_qp_wr;
        } else {
            module->rd_num = mca_btl_usnic_component.rd_num;
        }
        if (-1 == mca_btl_usnic_component.cq_num) {
            module->cq_num = device_attr.max_cqe;
        } else {
            module->cq_num = mca_btl_usnic_component.cq_num;
        }
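
        /* Note: -1 acts as a sentinel here meaning "use the device
           maximum" (max_qp_wr / max_cqe from the device attributes).
           Illustrative override, assuming the usual btl_usnic_* MCA
           parameter naming for these knobs:

               mpirun --mca btl_usnic_sd_num 4096 ...

           would request a 4096-entry send queue instead. */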

        /*
         * Queue sizes for the priority channel scale with the number of
         * endpoints.  There is a bit of a chicken-and-egg problem here:
         * we really want procs*ports, but we can't know the number of
         * ports until we try to initialize, so 32*num_procs is our best
         * guess.  The user can always override.
         */
        if (-1 == mca_btl_usnic_component.prio_sd_num) {
            module->prio_sd_num =
                max(128, 32 * USNIC_MCW_SIZE) - 1;
        } else {
            module->prio_sd_num = mca_btl_usnic_component.prio_sd_num;
        }
        if (module->prio_sd_num > device_attr.max_qp_wr) {
            module->prio_sd_num = device_attr.max_qp_wr;
        }
        if (-1 == mca_btl_usnic_component.prio_rd_num) {
            module->prio_rd_num =
                max(128, 32 * USNIC_MCW_SIZE) - 1;
        } else {
            module->prio_rd_num = mca_btl_usnic_component.prio_rd_num;
        }
        if (module->prio_rd_num > device_attr.max_qp_wr) {
            module->prio_rd_num = device_attr.max_qp_wr;
        }
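
        /* Worked example (illustrative only): with USNIC_MCW_SIZE == 16
           MPI processes, max(128, 32 * 16) - 1 = 511 priority queue
           entries; with 2 processes, max(128, 64) - 1 = 127.  Either
           result is then clamped to the device's max_qp_wr above. */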

        /* Find the max payload this port can handle */
        module->max_frag_payload =
            module->if_mtu - /* start with the MTU */
            OPAL_BTL_USNIC_PROTO_HDR_SZ -
            sizeof(opal_btl_usnic_btl_header_t); /* subtract size of
                                                    the BTL header */
        /* same, but use chunk header */
        module->max_chunk_payload =
            module->if_mtu -
            OPAL_BTL_USNIC_PROTO_HDR_SZ -
            sizeof(opal_btl_usnic_btl_chunk_header_t);
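
        /* Worked example with assumed numbers (illustration only; the
           real values depend on the device MTU and header sizes): with
           if_mtu = 9000, a 42-byte protocol header, and a 16-byte BTL
           header, max_frag_payload would be 9000 - 42 - 16 = 8942 bytes.
           max_chunk_payload is computed the same way, but subtracts the
           (larger) chunk header instead. */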

        /* Priority queue MTU and max size */
        if (0 == module->tiny_mtu) {
            module->tiny_mtu = 768;
            module->max_tiny_payload = module->tiny_mtu -
                OPAL_BTL_USNIC_PROTO_HDR_SZ -
                sizeof(opal_btl_usnic_btl_header_t);
        } else {
            module->tiny_mtu = module->max_tiny_payload +
                OPAL_BTL_USNIC_PROTO_HDR_SZ +
                sizeof(opal_btl_usnic_btl_header_t);
        }
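
        /* In both branches the resulting relationship is the same:
           tiny_mtu == max_tiny_payload + OPAL_BTL_USNIC_PROTO_HDR_SZ +
           sizeof(opal_btl_usnic_btl_header_t); the branches only differ
           in which of the two values is taken as given. */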

        /* If the eager rndv limit is 0, initialize it to default */
        if (0 == module->super.btl_rndv_eager_limit) {
            module->super.btl_rndv_eager_limit = USNIC_DFLT_RNDV_EAGER_LIMIT;
        }

        /* Make a hash table of senders */
        OBJ_CONSTRUCT(&module->senders, opal_hash_table_t);
        /* JMS This is a fixed size -- BAD!  But since the hash table
           doesn't grow dynamically, I don't know what size to put
           here.  I think the long-term solution is to write a better
           hash table... :-( */
        opal_hash_table_init(&module->senders, 4096);

        /* Let this module advance to the next round! */
        btls[i++] = &(module->super);
    }

    /* free filter if created */
    if (filter != NULL) {
        free_filter(filter);
        filter = NULL;
    }

    /* Do final module initialization with anything that required
       knowing how many modules there would be. */
    for (num_final_modules = i = 0;
         i < mca_btl_usnic_component.num_modules; ++i) {
        module = (opal_btl_usnic_module_t*) btls[i];

        /* If the eager send limit is 0, initialize it to default */
        if (0 == module->super.btl_eager_limit) {
            /* 150k for 1 module, 25k for >1 module */
            if (1 == mca_btl_usnic_component.num_modules) {
                module->super.btl_eager_limit =
                    USNIC_DFLT_EAGER_LIMIT_1DEVICE;
            } else {
                module->super.btl_eager_limit =
                    USNIC_DFLT_EAGER_LIMIT_NDEVICES;
            }
        }

        /* Since we emulate PUT, max_send_size can be the same as
           eager_limit */
        module->super.btl_max_send_size = module->super.btl_eager_limit;

        /* Initialize this module's state */
        if (opal_btl_usnic_module_init(module) != OPAL_SUCCESS) {
            opal_output_verbose(5, USNIC_OUT,
                                "btl:usnic: failed to init module for %s:%d",
                                ibv_get_device_name(module->device),
                                module->port_num);
            continue;
        }

        /**** If we get here, this is a good module/port -- we want
              it ****/

        /* Tell the common_verbs_device to not free the device context
           when the list is freed.  Then free the port pointer cached
           on this module; it was only used to carry this
           module<-->port association down to this second loop.  The
           port item will be freed later, and is of no more use to the
           module. */
        module->port->device->destructor_free_context = false;
        module->port = NULL;

        /* If module_init() failed for any prior module, this will be
           a down shift in the btls[] array.  Otherwise, it's an
           overwrite of the same value. */
        btls[num_final_modules++] = &(module->super);

        /* Output all of this module's values. */
        opal_output_verbose(5, USNIC_OUT,
                            "btl:usnic: num sqe=%d, num rqe=%d, num cqe=%d",
                            module->sd_num,
                            module->rd_num,
                            module->cq_num);
        opal_output_verbose(5, USNIC_OUT,
                            "btl:usnic: priority MTU %s:%d = %" PRIsize_t,
                            ibv_get_device_name(module->device),
                            module->port_num,
                            module->tiny_mtu);
        opal_output_verbose(5, USNIC_OUT,
                            "btl:usnic: priority limit %s:%d = %" PRIsize_t,
                            ibv_get_device_name(module->device),
                            module->port_num,
                            module->max_tiny_payload);
        opal_output_verbose(5, USNIC_OUT,
                            "btl:usnic: eager limit %s:%d = %" PRIsize_t,
                            ibv_get_device_name(module->device),
                            module->port_num,
                            module->super.btl_eager_limit);
        opal_output_verbose(5, USNIC_OUT,
                            "btl:usnic: eager rndv limit %s:%d = %" PRIsize_t,
                            ibv_get_device_name(module->device),
                            module->port_num,
                            module->super.btl_rndv_eager_limit);
        opal_output_verbose(5, USNIC_OUT,
                            "btl:usnic: max send size %s:%d = %" PRIsize_t
                            " (not overrideable)",
                            ibv_get_device_name(module->device),
                            module->port_num,
                            module->super.btl_max_send_size);
        opal_output_verbose(5, USNIC_OUT,
                            "btl:usnic: exclusivity %s:%d = %d",
                            ibv_get_device_name(module->device),
                            module->port_num,
                            module->super.btl_exclusivity);
    }

    /* We may have skipped some modules, so reset
       component.num_modules */
    mca_btl_usnic_component.num_modules = num_final_modules;

    /* We've packed all the modules and pointers to those modules in
       the lower ends of their respective arrays.  If not all the
       modules initialized successfully, we're wasting a little space.
       We could realloc and re-form the btls[] array, but it doesn't
       seem worth it.  Just waste a little space.

       That being said, if we ended up with zero acceptable ports,
       then free everything. */
    if (0 == num_final_modules) {
        opal_output_verbose(5, USNIC_OUT,
                            "btl:usnic: returning 0 modules");
        goto error;
    }

    /* we have a nonzero number of modules, so save a copy of the btls array
     * for later use */
    memcpy(mca_btl_usnic_component.usnic_active_modules, btls,
           num_final_modules*sizeof(*btls));

    /* Loop over the modules and find the minimum value for
       module->numa_distance.  For every module that has a
       numa_distance higher than the minimum value, increase its btl
       latency rating so that the PML will prefer to send short
       messages over "near" modules. */
    min_distance = 9999999;
    for (i = 0; i < mca_btl_usnic_component.num_modules; ++i) {
        module = (opal_btl_usnic_module_t*) btls[i];
        if (module->numa_distance < min_distance) {
            min_distance = module->numa_distance;
        }
    }
    for (i = 0; i < mca_btl_usnic_component.num_modules; ++i) {
        module = (opal_btl_usnic_module_t*) btls[i];
        if (module->numa_distance > min_distance) {
            ++module->super.btl_latency;
            opal_output_verbose(5, USNIC_OUT,
                                "btl:usnic: %s is far from me; increasing latency rating",
                                ibv_get_device_name(module->device));
        }
    }
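
    /* Illustrative example (hypothetical numbers): with two modules
       whose NUMA distances are 10 and 21, only the distance-21 module
       gets btl_latency incremented, so the PML's short-message
       selection should favor the "nearer" distance-10 module.  Exactly
       how btl_latency is weighed is up to the PML. */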

    /* start timer to guarantee synthetic clock advances */
    opal_event_set(opal_event_base, &usnic_clock_timer_event,
                   -1, 0, usnic_clock_callback,
                   &usnic_clock_timeout);
    usnic_clock_timer_event_set = true;

    /* 1ms timer */
    usnic_clock_timeout.tv_sec = 0;
    usnic_clock_timeout.tv_usec = 1000;
    opal_event_add(&usnic_clock_timer_event, &usnic_clock_timeout);
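
    /* Rationale (an assumption based on the comment above and the tick
       bump in usnic_component_progress() below): the BTL runs off a
       cheap synthetic clock rather than querying the system time on
       every progress call.  The clock normally advances from the
       progress loop; this 1ms timer keeps it advancing even if the
       application stops driving progress, so timeouts can still fire. */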

    /* Setup MPI_T performance variables */
    opal_btl_usnic_setup_mpit_pvars();

    /* All done */
    *num_btl_modules = mca_btl_usnic_component.num_modules;
    opal_output_verbose(5, USNIC_OUT,
                        "btl:usnic: returning %d modules", *num_btl_modules);

 free_include_list:
    if (NULL != port_list) {
        while (NULL != (item = opal_list_remove_first(port_list))) {
            OBJ_RELEASE(item);
        }
        OBJ_RELEASE(port_list);
    }

    usnic_modex_send();
    return btls;

 error:
    /* clean up as much allocated memory as possible */
    free(btls);
    btls = NULL;
    free(mca_btl_usnic_component.usnic_all_modules);
    mca_btl_usnic_component.usnic_all_modules = NULL;
    free(mca_btl_usnic_component.usnic_active_modules);
    mca_btl_usnic_component.usnic_active_modules = NULL;
    goto free_include_list;
}

/*
 * Component progress
 *
 * The fast path for an incoming packet on the priority receive queue
 * is handled directly in this routine; everything else is deferred to
 * an external call, usnic_component_progress_2().  This helps keep
 * usnic_component_progress() very small and very responsive to a
 * single incoming packet.  To avoid starvation, we make sure not to
 * return immediately after one packet on every call; the "fastpath_ok"
 * flag is used for this.
 */
static int usnic_handle_completion(opal_btl_usnic_module_t* module,
                                   opal_btl_usnic_channel_t *channel,
                                   struct ibv_wc *cwc);
static int usnic_component_progress_2(void);

static int usnic_component_progress(void)
{
    uint32_t i;
    int count;
    opal_btl_usnic_recv_segment_t* rseg;
    opal_btl_usnic_module_t* module;
    struct ibv_wc wc;
    opal_btl_usnic_channel_t *channel;
    static bool fastpath_ok = true;

    /* update our simulated clock */
    opal_btl_usnic_ticks += 5000;

    count = 0;
    if (fastpath_ok) {
        for (i = 0; i < mca_btl_usnic_component.num_modules; i++) {
            module = mca_btl_usnic_component.usnic_active_modules[i];
            channel = &module->mod_channels[USNIC_PRIORITY_CHANNEL];

            assert(channel->chan_deferred_recv == NULL);

            if (ibv_poll_cq(channel->cq, 1, &wc) == 1) {
                if (OPAL_LIKELY(wc.opcode == IBV_WC_RECV &&
                                wc.status == IBV_WC_SUCCESS)) {
                    rseg = (opal_btl_usnic_recv_segment_t*)(intptr_t)wc.wr_id;
                    opal_btl_usnic_recv_fast(module, rseg, channel,
                                             wc.byte_len);
                    fastpath_ok = false;  /* prevent starvation */
                    return 1;
                } else {
                    count += usnic_handle_completion(module, channel, &wc);
                }
            }
        }
    }

    fastpath_ok = true;
    return count + usnic_component_progress_2();
}
|
|
|
|
|
|
|
|
static int usnic_handle_completion(
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 04:47:28 +04:00
|
|
|
opal_btl_usnic_module_t* module,
|
|
|
|
opal_btl_usnic_channel_t *channel,
|
2013-09-06 07:18:57 +04:00
|
|
|
struct ibv_wc *cwc)
|
|
|
|
{
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 04:47:28 +04:00
|
|
|
opal_btl_usnic_segment_t* seg;
|
|
|
|
opal_btl_usnic_recv_segment_t* rseg;
|
2013-09-06 07:18:57 +04:00
|
|
|
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 04:47:28 +04:00
|
|
|
seg = (opal_btl_usnic_segment_t*)(unsigned long)cwc->wr_id;
|
|
|
|
rseg = (opal_btl_usnic_recv_segment_t*)seg;
|
2013-09-06 07:18:57 +04:00
|
|
|
|
|
|
|
if (OPAL_UNLIKELY(cwc->status != IBV_WC_SUCCESS)) {
|
|
|
|
|
|
|
|
/* If it was a receive error, just drop it and keep
|
|
|
|
going. The sender will eventually re-send it. */
|
|
|
|
if (IBV_WC_RECV == cwc->opcode) {
|
|
|
|
if (cwc->byte_len <
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 04:47:28 +04:00
|
|
|
(OPAL_BTL_USNIC_PROTO_HDR_SZ +
|
|
|
|
sizeof(opal_btl_usnic_btl_header_t))) {
|
2014-08-13 19:01:20 +04:00
|
|
|
uint32_t m = mca_btl_usnic_component.max_short_packets;
|
|
|
|
++module->num_short_packets;
|
|
|
|
if (OPAL_UNLIKELY(0 != m &&
|
|
|
|
module->num_short_packets >= m)) {
|
|
|
|
opal_show_help("help-mpi-btl-usnic.txt",
|
|
|
|
"received too many short packets",
|
|
|
|
true,
|
|
|
|
opal_process_info.nodename,
|
|
|
|
ibv_get_device_name(module->device),
|
|
|
|
module->if_name,
|
|
|
|
module->num_short_packets);
|
|
|
|
|
|
|
|
/* Reset so that we only show this warning once
|
|
|
|
per MPI process */
|
|
|
|
mca_btl_usnic_component.max_short_packets = 0;
|
|
|
|
}
|
2013-09-06 07:18:57 +04:00
|
|
|
} else {
|
|
|
|
/* silently count CRC errors */
|
Move all usNIC stats to _stats.c|h and export them as MPI_T pvars.
This commit moves all the module stats into their own struct so that
the stats only need to appear as a single line in the module_t
definition, and then moves all the logic for reporting the stats into
btl_usnic_stats.c|h.
Further, the stats are now exported as MPI_T_BIND_NO_OBJECT entities
(i.e., not bound to any particular MPI handle), and are marked as
READONLY and CONTINUOUS. They currently all default to verbose level
5 ("Application tuner / detailed", according to
https://svn.open-mpi.org/trac/ompi/wiki/MCAParamLevels).
Most of the statistics are counters, but a small number are high
watermark values. Due to how counters are reported via MPI_T, none of
the counters are exported through MPI_T if the MCA param
btl_usnic_stats_relative=1 (i.e., the module resets the stats back to
zero at a given frequency).
When MPI_T_pvar_handle_alloc() is invoked on any of these pvars, it
will return a count that is equal to the number of active usnic BTL
modules. The values returned for any given pvar (e.g.,
num_total_sends) are an array containing one value for each active
usnic BTL module. The ordering of values in the array is both
consistent across all usnic pvars and stable throughout a single job:
array slot 0 corresponds to module X, array slot 1 corresponds to
module Y, etc.
Mapping which array slot corresponds to which underlying Linux usnic_X
device works as follows:
* The btl_usnic_devices MPI_T state pvar is associated with a
btl_usnic_device MPI_T enum, and be obtained via
MPI_T_pvar_get_info().
* If all usNIC pvars are of length N, the values [0,N) in the
btl_usnic_device enum are associated with strings of the
corresponding underlying Linux device.
For exampe, to look up which Linux device is reported in all usNIC
pvars' array slot 1, look up the int value 1 in the btl_usnic_devices
enum. Its corresponding string value is underlying Linux device name
(e.g., "usnic_1").
cmr=v1.7.4:subject="usnic BTL MPI_T pvars"
This commit was SVN r29545.
2013-10-29 02:23:08 +04:00
|
|
|
++module->stats.num_crc_errors;
|
2013-09-06 07:18:57 +04:00
|
|
|
}
|
|
|
|
rseg->rs_recv_desc.next = channel->repost_recv_head;
|
|
|
|
channel->repost_recv_head = &rseg->rs_recv_desc;
|
|
|
|
return 0;
|
|
|
|
} else {
|
2014-08-13 19:01:20 +04:00
|
|
|
opal_show_help("help-mpi-btl-usnic.txt",
|
|
|
|
"non-receive completion error",
|
|
|
|
true,
|
|
|
|
opal_process_info.nodename,
|
|
|
|
ibv_get_device_name(module->device),
|
|
|
|
module->if_name,
|
|
|
|
channel->chan_index,
|
|
|
|
cwc->status,
|
|
|
|
(void*) cwc->wr_id,
|
|
|
|
cwc->opcode,
|
|
|
|
cwc->vendor_err);
|
2013-09-06 07:18:57 +04:00
|
|
|
|
|
|
|
/* mark error on this channel */
|
|
|
|
channel->chan_error = true;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
    /* Handle work completions */
    switch(seg->us_type) {

    /**** Send ACK completions ****/
    case OPAL_BTL_USNIC_SEG_ACK:
        assert(IBV_WC_SEND == cwc->opcode);
        opal_btl_usnic_ack_complete(module,
                (opal_btl_usnic_ack_segment_t *)seg);
        break;

    /**** Send of frag segment completion ****/
    case OPAL_BTL_USNIC_SEG_FRAG:
        assert(IBV_WC_SEND == cwc->opcode);
        opal_btl_usnic_frag_send_complete(module,
                (opal_btl_usnic_frag_segment_t*)seg);
        break;

    /**** Send of chunk segment completion ****/
    case OPAL_BTL_USNIC_SEG_CHUNK:
        assert(IBV_WC_SEND == cwc->opcode);
        opal_btl_usnic_chunk_send_complete(module,
                (opal_btl_usnic_chunk_segment_t*)seg);
        break;

    /**** Receive completions ****/
    case OPAL_BTL_USNIC_SEG_RECV:
        assert(IBV_WC_RECV == cwc->opcode);
        opal_btl_usnic_recv(module, rseg, channel, cwc->byte_len);
        break;

    default:
        BTL_ERROR(("Unhandled completion opcode %d segment type %d",
                   cwc->opcode, seg->us_type));
        break;
    }

    return 1;
}

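/* Progress routine: for each active module and each of its channels, handle
 * any deferred receive, drain the completion queue in batches of up to
 * OPAL_BTL_USNIC_NUM_WC work completions, progress pending sends, and
 * re-post the consumed receive buffers.  Returns the number of completions
 * handled (the accumulated return values of usnic_handle_completion()). */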
static int usnic_component_progress_2(void)
{
    uint32_t i;
    int j, count = 0, num_events;
    struct ibv_recv_wr *bad_wr;
    opal_btl_usnic_module_t* module;
    static struct ibv_wc wc[OPAL_BTL_USNIC_NUM_WC];
    opal_btl_usnic_channel_t *channel;
    int rc;
    int c;

    /* update our simulated clock */
    opal_btl_usnic_ticks += 5000;

    /* Poll for completions */
    for (i = 0; i < mca_btl_usnic_component.num_modules; i++) {
        module = mca_btl_usnic_component.usnic_active_modules[i];

        /* poll each channel */
        for (c = 0; c < USNIC_NUM_CHANNELS; ++c) {
            channel = &module->mod_channels[c];

            if (channel->chan_deferred_recv != NULL) {
                (void) opal_btl_usnic_recv_frag_bookkeeping(module,
                        channel->chan_deferred_recv, channel);
                channel->chan_deferred_recv = NULL;
            }

            num_events = ibv_poll_cq(channel->cq, OPAL_BTL_USNIC_NUM_WC, wc);
            opal_memchecker_base_mem_defined(&num_events, sizeof(num_events));
            opal_memchecker_base_mem_defined(wc, sizeof(wc[0]) * num_events);
            if (OPAL_UNLIKELY(num_events < 0)) {
                BTL_ERROR(("%s: error polling CQ[%d] with %d: %s",
                           ibv_get_device_name(module->device), c,
                           num_events, strerror(errno)));
                return OPAL_ERROR;
            }

            /* Handle each event */
            for (j = 0; j < num_events; j++) {
                count += usnic_handle_completion(module, channel, &wc[j]);
            }

            /* return error if detected - this may be slightly deferred
             * since fastpath avoids the "if" of checking this.
             */
            if (channel->chan_error) {
                channel->chan_error = false;
                return OPAL_ERROR;
            }

            /* progress sends */
            opal_btl_usnic_module_progress_sends(module);

            /* Re-post all the remaining receive buffers */
            if (OPAL_LIKELY(channel->repost_recv_head)) {
                rc = ibv_post_recv(channel->qp,
                                   channel->repost_recv_head, &bad_wr);
                channel->repost_recv_head = NULL;
                if (OPAL_UNLIKELY(rc != 0)) {
                    BTL_ERROR(("error posting recv: %s\n", strerror(errno)));
                    return OPAL_ERROR;
                }
            }
        }
    }

    return count;
}

/* returns OPAL_SUCCESS if module initialization was successful, OPAL_ERROR
 * otherwise */
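/* Rough order of operations below: copy the module template, record the
 * port/device information, query the usNIC GID, derive the MAC address from
 * the GID, look up the corresponding IP interface, and determine the port's
 * bandwidth.  A failure at any step causes this port to be skipped. */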
static int init_module_from_port(opal_btl_usnic_module_t *module,
                                 opal_common_verbs_port_item_t *port)
{
    union ibv_gid gid;
    char my_ip_string[32];

    memcpy(module, &opal_btl_usnic_module_template,
           sizeof(opal_btl_usnic_module_t));
    module->port = port;
    module->device = port->device->device;
    module->device_context = port->device->context;
    module->port_num = port->port_num;
    module->numa_distance = 0;
    module->local_addr.use_udp = mca_btl_usnic_component.use_udp;

    /* If we fail to query the GID, just warn and skip this port */
    if (0 != ibv_query_gid(module->device_context,
                           module->port_num,
                           mca_btl_usnic_component.gid_index, &gid)) {
        opal_memchecker_base_mem_defined(&gid, sizeof(gid));
        opal_show_help("help-mpi-btl-usnic.txt", "ibv API failed",
                       true,
                       opal_process_info.nodename,
                       ibv_get_device_name(module->device),
                       module->if_name,
                       "ibv_query_gid", __FILE__, __LINE__,
                       "Failed to query usNIC GID");
        return OPAL_ERROR;
    }

    opal_output_verbose(5, USNIC_OUT,
                        "btl:usnic: GID for %s:%d: subnet 0x%016" PRIx64 ", interface 0x%016" PRIx64,
                        ibv_get_device_name(module->device),
                        module->port_num,
                        ntoh64(gid.global.subnet_prefix),
                        ntoh64(gid.global.interface_id));
    module->local_addr.gid = gid;

    /* Extract the MAC address from the interface_id */
    opal_btl_usnic_gid_to_mac(&gid, module->local_addr.mac);

    /* Use that MAC address to find the device/port's
       corresponding IP address */
    if (OPAL_SUCCESS != opal_btl_usnic_find_ip(module,
                                               module->local_addr.mac)) {
        opal_output_verbose(5, USNIC_OUT,
                            "btl:usnic: did not find IP interfaces for %s; skipping",
                            ibv_get_device_name(module->device));
        return OPAL_ERROR;
    }
    inet_ntop(AF_INET, &module->if_ipv4_addr,
              my_ip_string, sizeof(my_ip_string));
    opal_output_verbose(5, USNIC_OUT,
                        "btl:usnic: IP address for %s:%d: %s",
                        ibv_get_device_name(module->device),
                        module->port_num,
                        my_ip_string);

    /* Get this port's bandwidth */
    if (0 == module->super.btl_bandwidth) {
        if (OPAL_SUCCESS !=
            opal_common_verbs_port_bw(&port->port_attr,
                                      &module->super.btl_bandwidth)) {

            /* If we don't get OPAL_SUCCESS, then we weren't able
               to figure out what the bandwidth was of this port.
               That's a bad sign.  Let's ignore this port. */
            opal_show_help("help-mpi-btl-usnic.txt", "verbs_port_bw failed",
                           true,
                           opal_process_info.nodename,
                           ibv_get_device_name(module->device),
                           module->if_name);
            return OPAL_ERROR;
        }
    }
    module->local_addr.link_speed_mbps = module->super.btl_bandwidth;
    opal_output_verbose(5, USNIC_OUT,
                        "btl:usnic: bandwidth for %s:%d = %u",
                        ibv_get_device_name(module->device),
                        module->port_num,
                        module->super.btl_bandwidth);

    return OPAL_SUCCESS;
}

/* utility routine to safely free a filter element array */
static void free_filter(usnic_if_filter_t *filter)
{
    int i;

    if (filter == NULL) {
        return;
    }

    if (NULL != filter->elts) {
        for (i = 0; i < filter->n_elt; ++i) {
            if (!filter->elts[i].is_netmask) {
                free(filter->elts[i].if_name);
            }
        }
        free(filter->elts);
    }
    free(filter);
}

/* Parse a string which is a comma-separated list containing a mix of
 * interface names and IPv4 CIDR-format netmasks.
 *
 * Gracefully tolerates NULL pointer arguments by returning NULL.
 *
 * Returns a usnic_if_filter_t, which contains n_elt and a
 * corresponding array of found filter elements.  Caller is
 * responsible for freeing the returned usnic_if_filter_t, the array
 * of filter elements, and any strings in it (can do this via
 * free_filter()).
 */
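/* For example (hypothetical values): the string "eth4,eth5,10.10.0.0/16"
 * would produce a filter with two interface-name elements ("eth4", "eth5")
 * and one netmask element covering 10.10.0.0 with a prefix length of 16. */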
|
|
|
|
static usnic_if_filter_t *parse_ifex_str(const char *orig_str,
|
|
|
|
const char *name)
|
|
|
|
{
|
|
|
|
int i, ret;
|
|
|
|
char **argv, *str, *tmp;
|
|
|
|
struct sockaddr_storage argv_inaddr;
|
|
|
|
uint32_t argv_prefix, addr;
|
|
|
|
usnic_if_filter_t *filter;
|
|
|
|
int n_argv;
|
|
|
|
|
|
|
|
if (NULL == orig_str) {
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Get a wrapper for the filter */
|
|
|
|
filter = calloc(sizeof(*filter), 1);
|
|
|
|
if (NULL == filter) {
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 04:47:28 +04:00
|
|
|
OPAL_ERROR_LOG(OPAL_ERR_OUT_OF_RESOURCE);
|
2013-07-20 02:13:58 +04:00
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
argv = opal_argv_split(orig_str, ',');
|
|
|
|
if (NULL == argv || 0 == (n_argv = opal_argv_count(argv))) {
|
|
|
|
free(filter);
|
2013-08-01 20:56:15 +04:00
|
|
|
opal_argv_free(argv);
|
2013-07-20 02:13:58 +04:00
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* upper bound: each entry could be a mask */
|
|
|
|
filter->elts = malloc(sizeof(*filter->elts) * n_argv);
|
|
|
|
if (NULL == filter->elts) {
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 04:47:28 +04:00
|
|
|
OPAL_ERROR_LOG(OPAL_ERR_OUT_OF_RESOURCE);
|
2013-07-20 02:13:58 +04:00
|
|
|
free(filter);
|
|
|
|
opal_argv_free(argv);
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Shuffle iface names to the beginning of the argv array. Process each
|
|
|
|
* netmask as we encounter it and append the resulting value to netmask_t
|
|
|
|
* array which we will return. */
|
|
|
|
filter->n_elt = 0;
|
|
|
|
for (i = 0; NULL != argv[i]; ++i) {
|
|
|
|
/* assume that all interface names begin with an alphanumeric
|
|
|
|
* character, not a number */
|
|
|
|
if (isalpha(argv[i][0])) {
|
|
|
|
filter->elts[filter->n_elt].is_netmask = false;
|
|
|
|
filter->elts[filter->n_elt].if_name = strdup(argv[i]);
|
|
|
|
opal_output_verbose(20, USNIC_OUT,
|
|
|
|
"btl:usnic:filter_module: parsed %s device name: %s",
|
|
|
|
name, filter->elts[filter->n_elt].if_name);
|
|
|
|
|
|
|
|
++filter->n_elt;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Found a subnet notation. Convert it to an IP
|
|
|
|
address/netmask. Get the prefix first. */
|
|
|
|
argv_prefix = 0;
|
|
|
|
tmp = strdup(argv[i]);
|
|
|
|
str = strchr(argv[i], '/');
|
|
|
|
if (NULL == str) {
|
2013-07-22 21:28:23 +04:00
|
|
|
opal_show_help("help-mpi-btl-usnic.txt", "invalid if_inexclude",
|
2014-07-27 01:48:23 +04:00
|
|
|
true, name, opal_process_info.nodename,
|
2013-07-20 02:13:58 +04:00
|
|
|
tmp, "Invalid specification (missing \"/\")");
|
|
|
|
free(tmp);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
*str = '\0';
|
|
|
|
argv_prefix = atoi(str + 1);
|
|
|
|
if (argv_prefix < 1 || argv_prefix > 32) {
|
2013-07-22 21:28:23 +04:00
|
|
|
opal_show_help("help-mpi-btl-usnic.txt", "invalid if_inexclude",
|
2014-07-27 01:48:23 +04:00
|
|
|
true, name, opal_process_info.nodename,
|
2013-07-20 02:13:58 +04:00
|
|
|
tmp, "Invalid specification (prefix < 1 or prefix >32)");
|
|
|
|
free(tmp);
|
|
|
|
continue;
|
|
|
|
}

        /* Now convert the IPv4 address */
        ((struct sockaddr*) &argv_inaddr)->sa_family = AF_INET;
        ret = inet_pton(AF_INET, argv[i],
                        &((struct sockaddr_in*) &argv_inaddr)->sin_addr);
        if (1 != ret) {
            opal_show_help("help-mpi-btl-usnic.txt", "invalid if_inexclude",
                           true, name, opal_process_info.nodename, tmp,
                           "Invalid specification (inet_pton() failed)");
            free(tmp);
            continue;
        }
        opal_output_verbose(20, USNIC_OUT,
                            "btl:usnic:filter_module: parsed %s address+prefix: %s / %u",
                            name,
                            opal_net_get_hostname((struct sockaddr*) &argv_inaddr),
                            argv_prefix);

        memcpy(&addr,
               &((struct sockaddr_in*) &argv_inaddr)->sin_addr,
               sizeof(addr));

        /* be helpful: if the user passed A.B.C.D/24 instead of A.B.C.0/24,
         * also normalize the netmask */
        filter->elts[filter->n_elt].is_netmask = true;
        filter->elts[filter->n_elt].if_name = NULL;
        filter->elts[filter->n_elt].addr =
            opal_btl_usnic_get_ipv4_subnet(addr, argv_prefix);
        filter->elts[filter->n_elt].prefixlen = argv_prefix;
        ++filter->n_elt;

        free(tmp);
    }
    assert(i == n_argv); /* sanity */

    opal_argv_free(argv);

    /* don't return an empty filter */
    if (filter->n_elt == 0) {
        free_filter(filter);
        return NULL;
    }

    return filter;
}

/*
 * Check this module to see if it should be kept or not.
 */
static bool filter_module(opal_btl_usnic_module_t *module,
                          usnic_if_filter_t *filter,
                          bool filter_incl)
{
    int i;
    uint32_t module_mask;
    bool match;

    module_mask = opal_btl_usnic_get_ipv4_subnet(module->if_ipv4_addr,
                                                 module->if_cidrmask);
    match = false;
    for (i = 0; i < filter->n_elt; ++i) {
        if (filter->elts[i].is_netmask) {
            /* conservative: we also require the prefixlen to match */
            if (filter->elts[i].prefixlen == module->if_cidrmask &&
                filter->elts[i].addr == module_mask) {
                match = true;
                break;
            }
        }
        else {
            if (strcmp(filter->elts[i].if_name,
                       ibv_get_device_name(module->device)) == 0) {
                match = true;
                break;
            }
        }
    }

    /* Turn the match result into whether we should keep it or not */
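    /* If this is an include list (filter_incl is true), keep the module only
     * when it matched some filter entry; if it is an exclude list, keep the
     * module only when it matched none.  The XOR below implements exactly
     * that. */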
    return match ^ !filter_incl;
}

/* could take indent as a parameter instead of hard-coding it */
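/* Dumps the send-side state of one endpoint: queued send fragments, send
 * segments still awaiting ACKs, and the endpoint's sequence-number
 * bookkeeping.  Used by opal_btl_usnic_component_debug(). */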
static void dump_endpoint(opal_btl_usnic_endpoint_t *endpoint)
{
    int i;
    opal_btl_usnic_frag_t *frag;
    opal_btl_usnic_send_segment_t *sseg;
    struct in_addr ia;
    char ep_addr_str[INET_ADDRSTRLEN];
    char tmp[128], str[2048];

    memset(ep_addr_str, 0x00, sizeof(ep_addr_str));
    ia.s_addr = endpoint->endpoint_remote_addr.ipv4_addr;
    inet_ntop(AF_INET, &ia, ep_addr_str, sizeof(ep_addr_str));

    opal_output(0, " endpoint %p, %s job=%u, rank=%u rts=%s s_credits=%"PRIi32"\n",
                (void *)endpoint, ep_addr_str,
                opal_process_name_jobid(endpoint->endpoint_proc->proc_opal->proc_name),
                opal_process_name_vpid(endpoint->endpoint_proc->proc_opal->proc_name),
                (endpoint->endpoint_ready_to_send ? "true" : "false"),
                endpoint->endpoint_send_credits);
    opal_output(0, " endpoint->frag_send_queue:\n");

    OPAL_LIST_FOREACH(frag, &endpoint->endpoint_frag_send_queue,
                      opal_btl_usnic_frag_t) {
        opal_btl_usnic_small_send_frag_t *ssfrag;
        opal_btl_usnic_large_send_frag_t *lsfrag;

        snprintf(str, sizeof(str), " --> frag %p, %s", (void *)frag,
                 usnic_frag_type(frag->uf_type));
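        /* Large sends carry a chain of chunk segments, small sends have a
         * single embedded segment, and put-dest frags only record the
         * remote target address. */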
        switch (frag->uf_type) {
        case OPAL_BTL_USNIC_FRAG_LARGE_SEND:
            lsfrag = (opal_btl_usnic_large_send_frag_t *)frag;
            snprintf(tmp, sizeof(tmp), " tag=%"PRIu8" id=%"PRIu32" offset=%llu/%llu post_cnt=%"PRIu32" ack_bytes_left=%llu\n",
                     lsfrag->lsf_tag,
                     lsfrag->lsf_frag_id,
                     (unsigned long long)lsfrag->lsf_cur_offset,
                     (unsigned long long)lsfrag->lsf_base.sf_size,
                     lsfrag->lsf_base.sf_seg_post_cnt,
                     (unsigned long long)lsfrag->lsf_base.sf_ack_bytes_left);
            strncat(str, tmp, sizeof(str) - strlen(str) - 1);
            opal_output(0, "%s", str);

            OPAL_LIST_FOREACH(sseg, &lsfrag->lsf_seg_chain,
                              opal_btl_usnic_send_segment_t) {
                /* chunk segs are just typedefs to send segs */
                opal_output(0, " chunk seg %p, chan=%s hotel=%d times_posted=%"PRIu32" pending=%s\n",
                            (void *)sseg,
                            (USNIC_PRIORITY_CHANNEL == sseg->ss_channel ?
                             "prio" : "data"),
                            sseg->ss_hotel_room,
                            sseg->ss_send_posted,
                            (sseg->ss_ack_pending ? "true" : "false"));
            }
            break;

        case OPAL_BTL_USNIC_FRAG_SMALL_SEND:
            ssfrag = (opal_btl_usnic_small_send_frag_t *)frag;
            snprintf(tmp, sizeof(tmp), " sf_size=%llu post_cnt=%"PRIu32" ack_bytes_left=%llu\n",
                     (unsigned long long)ssfrag->ssf_base.sf_size,
                     ssfrag->ssf_base.sf_seg_post_cnt,
                     (unsigned long long)ssfrag->ssf_base.sf_ack_bytes_left);
            strncat(str, tmp, sizeof(str) - strlen(str) - 1);
            opal_output(0, "%s", str);

            sseg = &ssfrag->ssf_segment;
            opal_output(0, " small seg %p, chan=%s hotel=%d times_posted=%"PRIu32" pending=%s\n",
                        (void *)sseg,
                        (USNIC_PRIORITY_CHANNEL == sseg->ss_channel ?
                         "prio" : "data"),
                        sseg->ss_hotel_room,
                        sseg->ss_send_posted,
                        (sseg->ss_ack_pending ? "true" : "false"));
            break;

        case OPAL_BTL_USNIC_FRAG_PUT_DEST:
            /* put_dest frags are just a typedef to generic frags */
            snprintf(tmp, sizeof(tmp), " put_addr=%p\n", frag->uf_remote_seg[0].seg_addr.pval);
            strncat(str, tmp, sizeof(str) - strlen(str) - 1);
            opal_output(0, "%s", str);
            break;
        }
    }

    /* Now examine the hotel for this endpoint and dump any segments we find
     * there.  Yes, this peeks at members that are technically "private", so
     * eventually this should be done through some sort of debug or iteration
     * interface in the hotel code. */
    opal_output(0, " endpoint->endpoint_sent_segs (%p):\n",
                (void *)endpoint->endpoint_sent_segs);
    for (i = 0; i < WINDOW_SIZE; ++i) {
        sseg = endpoint->endpoint_sent_segs[i];
        if (NULL != sseg) {
            opal_output(0, " [%d] sseg=%p %s chan=%s hotel=%d times_posted=%"PRIu32" pending=%s\n",
                        i,
                        (void *)sseg,
                        usnic_seg_type(sseg->ss_base.us_type),
                        (USNIC_PRIORITY_CHANNEL == sseg->ss_channel ?
                         "prio" : "data"),
                        sseg->ss_hotel_room,
                        sseg->ss_send_posted,
                        (sseg->ss_ack_pending ? "true" : "false"));
        }
    }
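
    /* Field legend for the line below: n_t = next seq to send, n_a = ack
     * seq received, n_r = next contiguous seq to receive, n_s = highest
     * seq received. */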
    opal_output(0, " ack_needed=%s n_t=%"UDSEQ" n_a=%"UDSEQ" n_r=%"UDSEQ" n_s=%"UDSEQ" rfstart=%"PRIu32"\n",
                (endpoint->endpoint_ack_needed?"true":"false"),
                endpoint->endpoint_next_seq_to_send,
                endpoint->endpoint_ack_seq_rcvd,
                endpoint->endpoint_next_contig_seq_to_recv,
                endpoint->endpoint_highest_seq_rcvd,
                endpoint->endpoint_rfstart);

    if (dump_bitvectors) {
        opal_btl_usnic_snprintf_bool_array(str, sizeof(str),
                                           endpoint->endpoint_rcvd_segs,
                                           WINDOW_SIZE);
        opal_output(0, " rcvd_segs 0x%s", str);
    }
}
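
/* Dump the component state for debugging: for each active usnic module,
 * print its payload limits, walk its endpoint lists (dumping each endpoint
 * via dump_endpoint()), list pending resend segments, and print the
 * module's statistics. */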
void opal_btl_usnic_component_debug(void)
{
    int i;
    opal_btl_usnic_module_t *module;
    opal_btl_usnic_endpoint_t *endpoint;
    opal_btl_usnic_send_segment_t *sseg;
    opal_list_item_t *item;
    const opal_proc_t *proc = opal_proc_local_get();

    opal_output(0, "*** dumping usnic state for MPI_COMM_WORLD rank %u ***\n",
                opal_process_name_vpid(proc->proc_name));
    for (i = 0; i < (int)mca_btl_usnic_component.num_modules; ++i) {
        module = mca_btl_usnic_component.usnic_active_modules[i];

        opal_output(0, "active_modules[%d]=%p %s max{frag,chunk,tiny}=%llu,%llu,%llu\n",
                    i, (void *)module, module->if_name,
                    (unsigned long long)module->max_frag_payload,
                    (unsigned long long)module->max_chunk_payload,
                    (unsigned long long)module->max_tiny_payload);

        opal_output(0, " endpoints_with_sends:\n");
        OPAL_LIST_FOREACH(endpoint, &module->endpoints_with_sends,
                          opal_btl_usnic_endpoint_t) {
            dump_endpoint(endpoint);
        }

        opal_output(0, " endpoints_that_need_acks:\n");
        OPAL_LIST_FOREACH(endpoint, &module->endpoints_that_need_acks,
                          opal_btl_usnic_endpoint_t) {
            dump_endpoint(endpoint);
        }

        /* the all_endpoints list links through a different list item member
         * than the other lists, so we cannot use OPAL_LIST_FOREACH here;
         * walk it by hand and recover each endpoint with container_of() */
        opal_output(0, " all_endpoints:\n");
        opal_mutex_lock(&module->all_endpoints_lock);
        item = opal_list_get_first(&module->all_endpoints);
        while (item != opal_list_get_end(&module->all_endpoints)) {
            endpoint = container_of(item, mca_btl_base_endpoint_t,
                                    endpoint_endpoint_li);
            item = opal_list_get_next(item);
            dump_endpoint(endpoint);
        }
        opal_mutex_unlock(&module->all_endpoints_lock);

        opal_output(0, " pending_resend_segs:\n");
        OPAL_LIST_FOREACH(sseg, &module->pending_resend_segs,
                          opal_btl_usnic_send_segment_t) {
            opal_output(0, " sseg %p\n", (void *)sseg);
        }

        opal_btl_usnic_print_stats(module, " manual", /*reset=*/false);
    }
}

#include "test/btl_usnic_component_test.h"