/* -*- Mode: C; c-basic-offset:4 ; indent-tabs-mode:nil -*- */
/*
 * Copyright (c) 2004-2010 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation. All rights reserved.
 * Copyright (c) 2004-2013 The University of Tennessee and The University
 *                         of Tennessee Research Foundation. All rights
 *                         reserved.
 * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart. All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * Copyright (c) 2007-2013 Cisco Systems, Inc. All rights reserved.
 * Copyright (c) 2006-2009 Mellanox Technologies. All rights reserved.
 * Copyright (c) 2006-2014 Los Alamos National Security, LLC. All rights
 *                         reserved.
 * Copyright (c) 2006-2007 Voltaire All rights reserved.
 * Copyright (c) 2008-2012 Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2009      IBM Corporation. All rights reserved.
 * Copyright (c) 2013-2014 Intel, Inc. All rights reserved
 * Copyright (c) 2013      NVIDIA Corporation. All rights reserved.
 * Copyright (c) 2014-2015 Research Organization for Information Science
 *                         and Technology (RIST). All rights reserved.
 * Copyright (c) 2014      Bull SAS. All rights reserved
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

#include "opal_config.h"

#include <string.h>

#include "opal_stdint.h"
#include "opal/class/opal_bitmap.h"
#include "opal/util/output.h"
#include "opal/util/arch.h"
#include "opal/util/proc.h"
#include "opal/util/show_help.h"
#include "opal/mca/btl/btl.h"
#include "opal/mca/btl/base/btl_base_error.h"

#if OPAL_ENABLE_FT_CR == 1
#include "opal/runtime/opal_cr.h"
#endif

#include "btl_openib_ini.h"

#include "btl_openib.h"
#include "btl_openib_frag.h"
#include "btl_openib_proc.h"
#include "btl_openib_endpoint.h"
#include "btl_openib_xrc.h"
#include "btl_openib_async.h"

#include "opal/datatype/opal_convertor.h"
#include "opal/mca/mpool/base/base.h"
#include "opal/mca/mpool/mpool.h"
#include "opal/mca/mpool/grdma/mpool_grdma.h"

#if OPAL_CUDA_SUPPORT
#include "opal/datatype/opal_datatype_cuda.h"
#include "opal/mca/common/cuda/common_cuda.h"
#endif /* OPAL_CUDA_SUPPORT */

#include "opal/util/sys_limits.h"
#include <errno.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <math.h>
#ifdef HAVE_SYS_TIME_H
#include <sys/time.h>
#endif
#ifdef HAVE_SYS_RESOURCE_H
#include <sys/resource.h>
#endif
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#ifdef OPAL_HAVE_HWLOC
#include "opal/mca/hwloc/hwloc.h"
#endif

#ifndef MIN
#define MIN(a,b) ((a)<(b)?(a):(b))
#endif

mca_btl_openib_module_t mca_btl_openib_module = {
    .super = {
        .btl_component = &mca_btl_openib_component.super,
        .btl_add_procs = mca_btl_openib_add_procs,
        .btl_del_procs = mca_btl_openib_del_procs,
        .btl_finalize = mca_btl_openib_finalize,
        /* we need alloc, free, pack */
        .btl_alloc = mca_btl_openib_alloc,
        .btl_free = mca_btl_openib_free,
        .btl_prepare_src = mca_btl_openib_prepare_src,
        .btl_prepare_dst = mca_btl_openib_prepare_dst,
        .btl_send = mca_btl_openib_send,
        .btl_sendi = mca_btl_openib_sendi, /* send immediate */
        .btl_put = mca_btl_openib_put,
        .btl_get = mca_btl_openib_get,
        .btl_dump = mca_btl_base_dump,
        .btl_register_error = mca_btl_openib_register_error_cb, /* error callback registration */
        .btl_ft_event = mca_btl_openib_ft_event
    }
};

char* const mca_btl_openib_transport_name_strings[MCA_BTL_OPENIB_TRANSPORT_SIZE] = {
    "MCA_BTL_OPENIB_TRANSPORT_IB",
    "MCA_BTL_OPENIB_TRANSPORT_IWARP",
    "MCA_BTL_OPENIB_TRANSPORT_RDMAOE",
    "MCA_BTL_OPENIB_TRANSPORT_UNKNOWN"
};

static int mca_btl_openib_finalize_resources(struct mca_btl_base_module_t* btl);

void mca_btl_openib_show_init_error(const char *file, int line,
                                    const char *func, const char *dev)
{
    if (ENOMEM == errno) {
        int ret;
        struct rlimit limit;
        char *str_limit = NULL;

#if HAVE_DECL_RLIMIT_MEMLOCK
        ret = getrlimit(RLIMIT_MEMLOCK, &limit);
#else
        ret = -1;
#endif
        if (0 != ret) {
            asprintf(&str_limit, "Unknown");
        } else if (limit.rlim_cur == RLIM_INFINITY) {
            asprintf(&str_limit, "unlimited");
        } else {
            asprintf(&str_limit, "%ld", (long)limit.rlim_cur);
        }

        opal_show_help("help-mpi-btl-openib.txt", "init-fail-no-mem",
                       true, opal_process_info.nodename,
                       file, line, func, dev, str_limit);

        if (NULL != str_limit) free(str_limit);
    } else {
        opal_show_help("help-mpi-btl-openib.txt", "init-fail-create-q",
                       true, opal_process_info.nodename,
                       file, line, func, strerror(errno), errno, dev);
    }
}

static inline struct ibv_cq *create_cq_compat(struct ibv_context *context,
        int cqe, void *cq_context, struct ibv_comp_channel *channel,
        int comp_vector)
{
#if OPAL_IBV_CREATE_CQ_ARGS == 3
    return ibv_create_cq(context, cqe, channel);
#else
    return ibv_create_cq(context, cqe, cq_context, channel, comp_vector);
#endif
}

static int adjust_cq(mca_btl_openib_device_t *device, const int cq)
{
    uint32_t cq_size = device->cq_size[cq];

    /* make sure we don't exceed the maximum CQ size and that we
     * don't size the queue smaller than otherwise requested
     */
    if (cq_size < mca_btl_openib_component.ib_cq_size[cq]) {
        cq_size = mca_btl_openib_component.ib_cq_size[cq];
    }

    if (cq_size > (uint32_t)device->ib_dev_attr.max_cqe) {
        cq_size = device->ib_dev_attr.max_cqe;
    }

    if (NULL == device->ib_cq[cq]) {
        device->ib_cq[cq] = create_cq_compat(device->ib_dev_context, cq_size,
#if OPAL_ENABLE_PROGRESS_THREADS == 1
                device, device->ib_channel,
#else
                NULL, NULL,
#endif
                0);

        if (NULL == device->ib_cq[cq]) {
            mca_btl_openib_show_init_error(__FILE__, __LINE__, "ibv_create_cq",
                                           ibv_get_device_name(device->ib_dev));
            return OPAL_ERROR;
        }

#if OPAL_ENABLE_PROGRESS_THREADS == 1
        if (ibv_req_notify_cq(device->ib_cq[cq], 0)) {
            mca_btl_openib_show_init_error(__FILE__, __LINE__,
                                           "ibv_req_notify_cq",
                                           ibv_get_device_name(device->ib_dev));
            return OPAL_ERROR;
        }

        OPAL_THREAD_LOCK(&device->device_lock);
        if (!device->progress) {
            int rc;
            device->progress = true;
            if (OPAL_SUCCESS != (rc = opal_thread_start(&device->thread))) {
                BTL_ERROR(("Unable to create progress thread, retval=%d", rc));
                /* drop the lock before bailing out */
                OPAL_THREAD_UNLOCK(&device->device_lock);
                return rc;
            }
        }
        OPAL_THREAD_UNLOCK(&device->device_lock);
#endif
    }
#ifdef HAVE_IBV_RESIZE_CQ
    else if (cq_size > mca_btl_openib_component.ib_cq_size[cq]) {
        int rc;
        rc = ibv_resize_cq(device->ib_cq[cq], cq_size);
        /* For ConnectX, resizing the CQ is not implemented and verbs
         * returns -ENOSYS rather than ENOSYS, hence the abs() below. */
        if (rc && ENOSYS != abs(rc)) {
            BTL_ERROR(("cannot resize completion queue, error: %d", rc));
            return OPAL_ERROR;
        }
    }
#endif

    return OPAL_SUCCESS;
}

/* In this function we check whether the device supports the SRQ limit
   event. We create a temporary SRQ, post some receive buffers (in
   order to prevent an immediate SRQ limit event) and then call
   ibv_modify_srq(). If that call does not return success, we conclude
   that the device does not support this capability. */
static int check_if_device_support_modify_srq(mca_btl_openib_module_t *openib_btl)
{
    char buff;
    int rc = OPAL_SUCCESS;

    struct ibv_srq* dummy_srq = NULL;
    struct ibv_srq_attr modify_attr;
    struct ibv_sge sge_elem;
    struct ibv_recv_wr wr1, wr2, *bad_wr;
    struct ibv_srq_init_attr init_attr;

    memset(&init_attr, 0, sizeof(struct ibv_srq_init_attr));
    init_attr.attr.max_wr = 3;
    init_attr.attr.max_sge = 1;

    dummy_srq = ibv_create_srq(openib_btl->device->ib_pd, &init_attr);
    if(NULL == dummy_srq) {
        rc = OPAL_ERROR;
        return rc;
    }

    sge_elem.addr = (uint64_t)((uintptr_t) &buff);
    sge_elem.length = sizeof(buff);

    wr1.num_sge = wr2.num_sge = 1;
    wr1.sg_list = wr2.sg_list = &sge_elem;
    wr1.next = &wr2;
    wr2.next = NULL;

    if(ibv_post_srq_recv(dummy_srq, &wr1, &bad_wr)) {
        rc = OPAL_ERROR;
        goto destroy_dummy_srq;
    }

    modify_attr.max_wr = 2;
    modify_attr.max_sge = 1;
    modify_attr.srq_limit = 1;

    if(ibv_modify_srq(dummy_srq, &modify_attr, IBV_SRQ_LIMIT)) {
        rc = OPAL_ERR_NOT_SUPPORTED;
        goto destroy_dummy_srq;
    }

destroy_dummy_srq:
    if(ibv_destroy_srq(dummy_srq)) {
        rc = OPAL_ERROR;
    }

    return rc;
}

/*
 * create both the high and low priority completion queues
 * and the shared receive queue (if requested)
 */
static int create_srq(mca_btl_openib_module_t *openib_btl)
{
    int qp, rc = 0;
    int32_t rd_num, rd_curr_num;
    bool device_support_modify_srq = true;

    /* Check whether our device supports the modify-SRQ capability */
    rc = check_if_device_support_modify_srq(openib_btl);
    if(OPAL_ERR_NOT_SUPPORTED == rc) {
        device_support_modify_srq = false;
    } else if(OPAL_SUCCESS != rc) {
        mca_btl_openib_show_init_error(__FILE__, __LINE__,
                                       "ibv_create_srq",
                                       ibv_get_device_name(openib_btl->device->ib_dev));
        return rc;
    }

    /* create the SRQs */
    for(qp = 0; qp < mca_btl_openib_component.num_qps; qp++) {
        struct ibv_srq_init_attr attr;
        memset(&attr, 0, sizeof(struct ibv_srq_init_attr));

        if(!BTL_OPENIB_QP_TYPE_PP(qp)) {
            attr.attr.max_wr = mca_btl_openib_component.qp_infos[qp].rd_num +
                mca_btl_openib_component.qp_infos[qp].u.srq_qp.sd_max;
            attr.attr.max_sge = 1;
            openib_btl->qps[qp].u.srq_qp.rd_posted = 0;
#if HAVE_XRC
            if(BTL_OPENIB_QP_TYPE_XRC(qp)) {
#if OPAL_HAVE_CONNECTX_XRC_DOMAINS
                struct ibv_srq_init_attr_ex attr_ex;
                memset(&attr_ex, 0, sizeof(struct ibv_srq_init_attr_ex));
                attr_ex.attr.max_wr = attr.attr.max_wr;
                attr_ex.attr.max_sge = attr.attr.max_sge;
                attr_ex.comp_mask = IBV_SRQ_INIT_ATTR_TYPE | IBV_SRQ_INIT_ATTR_XRCD |
                    IBV_SRQ_INIT_ATTR_CQ | IBV_SRQ_INIT_ATTR_PD;
                attr_ex.srq_type = IBV_SRQT_XRC;
                attr_ex.xrcd = openib_btl->device->xrcd;
                attr_ex.cq = openib_btl->device->ib_cq[qp_cq_prio(qp)];
                attr_ex.pd = openib_btl->device->ib_pd;

                openib_btl->qps[qp].u.srq_qp.srq =
                    ibv_create_srq_ex(openib_btl->device->ib_dev_context, &attr_ex);
#else
                openib_btl->qps[qp].u.srq_qp.srq =
                    ibv_create_xrc_srq(openib_btl->device->ib_pd,
                                       openib_btl->device->xrc_domain,
                                       openib_btl->device->ib_cq[qp_cq_prio(qp)], &attr);
#endif
            } else
#endif
            {
                openib_btl->qps[qp].u.srq_qp.srq =
                    ibv_create_srq(openib_btl->device->ib_pd, &attr);
            }
            if (NULL == openib_btl->qps[qp].u.srq_qp.srq) {
                mca_btl_openib_show_init_error(__FILE__, __LINE__,
                                               "ibv_create_srq",
                                               ibv_get_device_name(openib_btl->device->ib_dev));
                return OPAL_ERROR;
            }

            {
                opal_mutex_t *lock = &mca_btl_openib_component.srq_manager.lock;
                opal_hash_table_t *srq_addr_table = &mca_btl_openib_component.srq_manager.srq_addr_table;

                opal_mutex_lock(lock);
                if (OPAL_SUCCESS != opal_hash_table_set_value_ptr(
                        srq_addr_table, &openib_btl->qps[qp].u.srq_qp.srq,
                        sizeof(struct ibv_srq*), (void*) openib_btl)) {
                    BTL_ERROR(("SRQ Internal error."
                               " Failed to add element to mca_btl_openib_component.srq_manager.srq_addr_table\n"));

                    opal_mutex_unlock(lock);
                    return OPAL_ERROR;
                }
                opal_mutex_unlock(lock);
            }

            rd_num = mca_btl_openib_component.qp_infos[qp].rd_num;
            rd_curr_num = openib_btl->qps[qp].u.srq_qp.rd_curr_num =
                mca_btl_openib_component.qp_infos[qp].u.srq_qp.rd_init;

            if(true == mca_btl_openib_component.enable_srq_resize &&
               true == device_support_modify_srq) {
                if(0 == rd_curr_num) {
                    openib_btl->qps[qp].u.srq_qp.rd_curr_num = 1;
                }

                openib_btl->qps[qp].u.srq_qp.rd_low_local = rd_curr_num - (rd_curr_num >> 2);
                openib_btl->qps[qp].u.srq_qp.srq_limit_event_flag = true;
            } else {
                openib_btl->qps[qp].u.srq_qp.rd_curr_num = rd_num;
                openib_btl->qps[qp].u.srq_qp.rd_low_local = mca_btl_openib_component.qp_infos[qp].rd_low;
                /* Not used in this case, but we don't want garbage either */
                mca_btl_openib_component.qp_infos[qp].u.srq_qp.srq_limit = 0;
                openib_btl->qps[qp].u.srq_qp.srq_limit_event_flag = false;
            }
        }
    }

    return OPAL_SUCCESS;
}

static int mca_btl_openib_size_queues(struct mca_btl_openib_module_t* openib_btl, size_t nprocs)
{
    uint32_t send_cqes, recv_cqes;
    int rc = OPAL_SUCCESS, qp;
    mca_btl_openib_device_t *device = openib_btl->device;

    /* figure out reasonable sizes for completion queues */
    for(qp = 0; qp < mca_btl_openib_component.num_qps; qp++) {
        if(BTL_OPENIB_QP_TYPE_SRQ(qp)) {
            send_cqes = mca_btl_openib_component.qp_infos[qp].u.srq_qp.sd_max;
            recv_cqes = mca_btl_openib_component.qp_infos[qp].rd_num;
        } else {
            send_cqes = (mca_btl_openib_component.qp_infos[qp].rd_num +
                mca_btl_openib_component.qp_infos[qp].u.pp_qp.rd_rsv) * nprocs;
            recv_cqes = send_cqes;
        }
        openib_btl->device->cq_size[qp_cq_prio(qp)] += recv_cqes;
        openib_btl->device->cq_size[BTL_OPENIB_LP_CQ] += send_cqes;
    }

    rc = adjust_cq(device, BTL_OPENIB_HP_CQ);
    if (OPAL_SUCCESS != rc) {
        goto out;
    }

    rc = adjust_cq(device, BTL_OPENIB_LP_CQ);
    if (OPAL_SUCCESS != rc) {
        goto out;
    }

    if (0 == openib_btl->num_peers &&
        (mca_btl_openib_component.num_srq_qps > 0 ||
         mca_btl_openib_component.num_xrc_qps > 0)) {
        rc = create_srq(openib_btl);
    }

    openib_btl->num_peers += nprocs;
out:
    return rc;
}

mca_btl_openib_transport_type_t mca_btl_openib_get_transport_type(mca_btl_openib_module_t* openib_btl)
{
    /* If the driver supports RDMAoE, the device struct reports the same
       transport type (IB) for both the IBV_LINK_LAYER_INFINIBAND and
       IBV_LINK_LAYER_ETHERNET link layers, and the only way to tell them
       apart is to check the link_layer field of the port_attr struct.
       If the driver doesn't support this feature, checking the transport
       type in the device struct is enough. If the driver doesn't support
       transport types at all, we assume it is a very old driver that
       supports only IB devices. */

#ifdef HAVE_STRUCT_IBV_DEVICE_TRANSPORT_TYPE
|
|
|
|
switch(openib_btl->device->ib_dev->transport_type) {
|
|
|
|
case IBV_TRANSPORT_IB:
|
2013-08-22 21:44:20 +04:00
|
|
|
#if HAVE_DECL_IBV_LINK_LAYER_ETHERNET
|
2009-12-15 17:25:07 +03:00
|
|
|
switch(openib_btl->ib_port_attr.link_layer) {
|
|
|
|
case IBV_LINK_LAYER_ETHERNET:
|
|
|
|
return MCA_BTL_OPENIB_TRANSPORT_RDMAOE;
|
|
|
|
|
|
|
|
case IBV_LINK_LAYER_INFINIBAND:
|
|
|
|
return MCA_BTL_OPENIB_TRANSPORT_IB;
|
|
|
|
/* It is not possible that a device struct contains
|
|
|
|
IB transport and port was configured to IBV_LINK_LAYER_UNSPECIFIED */
|
|
|
|
case IBV_LINK_LAYER_UNSPECIFIED:
|
|
|
|
default:
|
|
|
|
return MCA_BTL_OPENIB_TRANSPORT_UNKNOWN;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
return MCA_BTL_OPENIB_TRANSPORT_IB;
|
|
|
|
|
|
|
|
case IBV_TRANSPORT_IWARP:
|
|
|
|
return MCA_BTL_OPENIB_TRANSPORT_IWARP;
|
|
|
|
|
2011-07-04 18:00:41 +04:00
|
|
|
case IBV_TRANSPORT_UNKNOWN:
|
2009-12-15 17:25:07 +03:00
|
|
|
default:
|
|
|
|
return MCA_BTL_OPENIB_TRANSPORT_UNKNOWN;
|
|
|
|
}
|
|
|
|
#else
|
|
|
|
return MCA_BTL_OPENIB_TRANSPORT_IB;
|
|
|
|
#endif
|
|
|
|
}
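/* The two-level dispatch above (device transport type first, then the
 * port's link layer) can be sketched in isolation.  The enums and names
 * below are simplified stand-ins for illustration, not the real ibverbs
 * or OMPI definitions:
 */

```c
#include <assert.h>

/* Simplified stand-ins for the ibverbs enums (hypothetical values). */
typedef enum { TRANSPORT_IB, TRANSPORT_IWARP, TRANSPORT_UNKNOWN } transport_t;
typedef enum { LINK_UNSPECIFIED, LINK_INFINIBAND, LINK_ETHERNET } link_layer_t;
typedef enum { BTL_IB, BTL_IWARP, BTL_RDMAOE, BTL_UNKNOWN } btl_transport_t;

/* Mirror of the dispatch: the device-level transport type alone cannot
 * distinguish native IB from RDMAoE, so the IB case defers to the
 * port's link layer. */
static btl_transport_t classify(transport_t t, link_layer_t ll)
{
    switch (t) {
    case TRANSPORT_IB:
        switch (ll) {
        case LINK_ETHERNET:   return BTL_RDMAOE;
        case LINK_INFINIBAND: return BTL_IB;
        default:              return BTL_UNKNOWN;
        }
    case TRANSPORT_IWARP:
        return BTL_IWARP;
    default:
        return BTL_UNKNOWN;
    }
}
```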

static int mca_btl_openib_tune_endpoint(mca_btl_openib_module_t* openib_btl,
                                        mca_btl_base_endpoint_t* endpoint)
{
    opal_btl_openib_ini_values_t values;
    char* recv_qps = NULL;
    int ret;

    if(mca_btl_openib_get_transport_type(openib_btl) != endpoint->rem_info.rem_transport_type) {
        opal_show_help("help-mpi-btl-openib.txt",
                       "conflicting transport types", true,
                       opal_process_info.nodename,
                       ibv_get_device_name(openib_btl->device->ib_dev),
                       (openib_btl->device->ib_dev_attr).vendor_id,
                       (openib_btl->device->ib_dev_attr).vendor_part_id,
                       mca_btl_openib_transport_name_strings[mca_btl_openib_get_transport_type(openib_btl)],
                       opal_get_proc_hostname(endpoint->endpoint_proc->proc_opal),
                       endpoint->rem_info.rem_vendor_id,
                       endpoint->rem_info.rem_vendor_part_id,
                       mca_btl_openib_transport_name_strings[endpoint->rem_info.rem_transport_type]);

        return OPAL_ERROR;
    }

    memset(&values, 0, sizeof(opal_btl_openib_ini_values_t));
    ret = opal_btl_openib_ini_query(endpoint->rem_info.rem_vendor_id,
                                    endpoint->rem_info.rem_vendor_part_id, &values);

    if (OPAL_SUCCESS != ret &&
        OPAL_ERR_NOT_FOUND != ret) {
        opal_show_help("help-mpi-btl-openib.txt",
                       "error in device init", true,
                       opal_process_info.nodename,
                       ibv_get_device_name(openib_btl->device->ib_dev));
        return ret;
    }

    if(openib_btl->device->mtu < endpoint->rem_info.rem_mtu) {
        endpoint->rem_info.rem_mtu = openib_btl->device->mtu;
    }

    endpoint->use_eager_rdma = openib_btl->device->use_eager_rdma &
                               endpoint->use_eager_rdma;

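/* The tuning step above negotiates pairwise capabilities: each side
 * clamps to the smaller MTU and ANDs the eager-RDMA flags, so both
 * peers converge on the same effective values.  A standalone sketch
 * (struct and names are illustrative, not the OMPI types):
 */

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative endpoint-tuning state (not the real OMPI structs). */
struct peer_caps {
    uint32_t mtu;            /* path MTU the peer advertised */
    bool     use_eager_rdma; /* peer willing to use eager RDMA */
};

/* Negotiate: both sides must end up with the same effective values, so
 * each takes the minimum MTU and the logical AND of the flags. */
static void negotiate(struct peer_caps *remote, uint32_t local_mtu,
                      bool local_eager_rdma)
{
    if (local_mtu < remote->mtu) {
        remote->mtu = local_mtu;     /* clamp to the smaller MTU */
    }
    remote->use_eager_rdma = remote->use_eager_rdma && local_eager_rdma;
}
```

Because min() and AND are commutative, running the same function on both sides with the arguments swapped yields identical results, which is what makes the negotiation symmetric.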
    /* Receive queues checking */

    /* This check assumes that the command line or INI file parameters
       are the same for all processes on all machines.  That assumption
       holds for the vast majority of users; if a user distributes
       different INI files or parameters to different nodes/processes,
       the consequences are their own responsibility. */

    switch(mca_btl_openib_component.receive_queues_source) {
    case MCA_BASE_VAR_SOURCE_COMMAND_LINE:
    case MCA_BASE_VAR_SOURCE_ENV:
    case MCA_BASE_VAR_SOURCE_FILE:
    case MCA_BASE_VAR_SOURCE_SET:
    case MCA_BASE_VAR_SOURCE_OVERRIDE:
        break;

        /* If the queues configuration was set from the command line
           (via the --mca btl_openib_receive_queues parameter), both
           sides have the same configuration, so there is nothing to
           check. */

        /* In the next case the local queues configuration was taken
           from an INI file, so the remote side cannot have taken its
           configuration from the command line; by priority, it was set
           either from an INI file or (if not configured there) from the
           default queues configuration. */
    case BTL_OPENIB_RQ_SOURCE_DEVICE_INI:
        if(NULL != values.receive_queues) {
            recv_qps = values.receive_queues;
        } else {
            recv_qps = mca_btl_openib_component.default_recv_qps;
        }

        if(0 != strcmp(mca_btl_openib_component.receive_queues,
                       recv_qps)) {
            opal_show_help("help-mpi-btl-openib.txt",
                           "unsupported queues configuration", true,
                           opal_process_info.nodename,
                           ibv_get_device_name(openib_btl->device->ib_dev),
                           (openib_btl->device->ib_dev_attr).vendor_id,
                           (openib_btl->device->ib_dev_attr).vendor_part_id,
                           mca_btl_openib_component.receive_queues,
                           opal_get_proc_hostname(endpoint->endpoint_proc->proc_opal),
                           endpoint->rem_info.rem_vendor_id,
                           endpoint->rem_info.rem_vendor_part_id,
                           recv_qps);

            return OPAL_ERROR;
        }
        break;

    /* If the local queues configuration came from the defaults, check
       all possible cases for the remote side and compare. */
    case MCA_BASE_VAR_SOURCE_DEFAULT:
        if(NULL != values.receive_queues) {
            if(0 != strcmp(mca_btl_openib_component.receive_queues,
                           values.receive_queues)) {
                opal_show_help("help-mpi-btl-openib.txt",
                               "unsupported queues configuration", true,
                               opal_process_info.nodename,
                               ibv_get_device_name(openib_btl->device->ib_dev),
                               (openib_btl->device->ib_dev_attr).vendor_id,
                               (openib_btl->device->ib_dev_attr).vendor_part_id,
                               mca_btl_openib_component.receive_queues,
                               opal_get_proc_hostname(endpoint->endpoint_proc->proc_opal),
                               endpoint->rem_info.rem_vendor_id,
                               endpoint->rem_info.rem_vendor_part_id,
                               values.receive_queues);

                return OPAL_ERROR;
            }
        }
        break;
    }

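/* The switch above implicitly resolves the receive-queues spec the
 * remote side must be using: an explicit setting wins, then a
 * per-device INI value, then the component default, and the two specs
 * must match string-for-string.  A minimal sketch of that precedence
 * (the enum and spec strings are illustrative assumptions, not the real
 * MCA source values):
 */

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative sources, mirroring the precedence in the real switch. */
typedef enum { SRC_EXPLICIT, SRC_DEVICE_INI, SRC_DEFAULT } rq_source_t;

/* Resolve the effective receive-queues spec: explicit setting first,
 * then a per-device INI value (if present), then the default. */
static const char *effective_recv_qps(rq_source_t src,
                                      const char *explicit_spec,
                                      const char *ini_spec,
                                      const char *default_spec)
{
    switch (src) {
    case SRC_EXPLICIT:   return explicit_spec;
    case SRC_DEVICE_INI: return ini_spec != NULL ? ini_spec : default_spec;
    default:             return default_spec;
    }
}

/* Both peers must agree on the spec string or the connection is refused. */
static int qps_compatible(const char *local, const char *remote)
{
    return 0 == strcmp(local, remote);
}
```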
    return OPAL_SUCCESS;
}


static int prepare_device_for_use (mca_btl_openib_device_t *device)
{
    mca_btl_openib_frag_init_data_t *init_data;
    int rc, length;

    if (device->ready_for_use) {
        return OPAL_SUCCESS;
    }

    /* For each BTL module that we made, find every base device that
       does not have device->qps set up on it yet (remember that some
       modules may share the same device, so while looping we may hit a
       device that was already set up earlier in the loop).

       prepare_device_for_use() may only be called after adding the BTL
       to mca_btl_openib_component.openib_btls, since it adds the device
       to the async thread, which requires access to
       mca_btl_openib_component.openib_btls. */

    /* Setup the device qps info */
    device->qps = (mca_btl_openib_device_qp_t*)
        calloc(mca_btl_openib_component.num_qps,
               sizeof(mca_btl_openib_device_qp_t));
    if (NULL == device->qps) {
        BTL_ERROR(("Failed malloc: %s:%d", __FILE__, __LINE__));
        return OPAL_ERR_OUT_OF_RESOURCE;
    }

    for (int qp_index = 0 ; qp_index < mca_btl_openib_component.num_qps ; qp_index++) {
        OBJ_CONSTRUCT(&device->qps[qp_index].send_free, ompi_free_list_t);
        OBJ_CONSTRUCT(&device->qps[qp_index].recv_free, ompi_free_list_t);
    }

    if(mca_btl_openib_component.use_async_event_thread) {
        mca_btl_openib_async_cmd_t async_command;

        /* start the async event thread if it is not already started */
        if (start_async_event_thread() != OPAL_SUCCESS)
            return OPAL_ERROR;

        device->got_fatal_event = false;
        device->got_port_event = false;

        async_command.a_cmd = OPENIB_ASYNC_CMD_FD_ADD;
        async_command.fd = device->ib_dev_context->async_fd;
        if (write(mca_btl_openib_component.async_pipe[1],
                  &async_command, sizeof(mca_btl_openib_async_cmd_t)) < 0) {
            BTL_ERROR(("Failed to write to pipe [%d]", errno));
            return OPAL_ERROR;
        }

        /* wait for ok from thread */
        if (OPAL_SUCCESS !=
            btl_openib_async_command_done(device->ib_dev_context->async_fd)) {
            return OPAL_ERROR;
        }
    }

|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 04:47:28 +04:00
|
|
|
#if OPAL_ENABLE_PROGRESS_THREADS == 1
|
2014-01-06 23:51:30 +04:00
|
|
|
/* Prepare data for thread, but not starting it */
|
|
|
|
OBJ_CONSTRUCT(&device->thread, opal_thread_t);
|
|
|
|
device->thread.t_run = mca_btl_openib_progress_thread;
|
|
|
|
device->thread.t_arg = device;
|
|
|
|
device->progress = false;
|
|
|
|
#endif
|
|
|
|
|
|
|
|
#if HAVE_XRC
    /* if user configured to run with XRC qp and the device doesn't
     * support it - we should ignore this device. Maybe we have another
     * one that has XRC support
     */
    if (!(device->ib_dev_attr.device_cap_flags & IBV_DEVICE_XRC) &&
        MCA_BTL_XRC_ENABLED) {
        opal_show_help("help-mpi-btl-openib.txt",
                       "XRC on device without XRC support", true,
                       mca_btl_openib_component.num_xrc_qps,
                       ibv_get_device_name(device->ib_dev),
                       opal_process_info.nodename);
        return OPAL_ERROR;
    }

    if (MCA_BTL_XRC_ENABLED) {
        if (OPAL_SUCCESS != mca_btl_openib_open_xrc_domain(device)) {
            BTL_ERROR(("XRC Internal error. Failed to open xrc domain"));
            return OPAL_ERROR;
        }
    }
#endif
    device->endpoints = OBJ_NEW(opal_pointer_array_t);
    opal_pointer_array_init(device->endpoints, 10, INT_MAX, 10);
    opal_pointer_array_add(&mca_btl_openib_component.devices, device);

    if (mca_btl_openib_component.max_eager_rdma > 0 &&
        device->use_eager_rdma) {
        device->eager_rdma_buffers =
            (mca_btl_base_endpoint_t **) calloc(mca_btl_openib_component.max_eager_rdma * device->btls,
                                                sizeof(mca_btl_openib_endpoint_t*));
        if (NULL == device->eager_rdma_buffers) {
            BTL_ERROR(("Memory allocation fails"));
            return OPAL_ERR_OUT_OF_RESOURCE;
        }
    }
    init_data = (mca_btl_openib_frag_init_data_t *) malloc(sizeof(mca_btl_openib_frag_init_data_t));
    if (NULL == init_data) {
        if (mca_btl_openib_component.max_eager_rdma > 0 &&
            device->use_eager_rdma) {
            /* cleanup */
            free(device->eager_rdma_buffers);
            device->eager_rdma_buffers = NULL;
        }
        BTL_ERROR(("Memory allocation fails"));
        return OPAL_ERR_OUT_OF_RESOURCE;
    }

    length = sizeof(mca_btl_openib_header_t) +
        sizeof(mca_btl_openib_footer_t) +
        sizeof(mca_btl_openib_eager_rdma_header_t);

    init_data->order = MCA_BTL_NO_ORDER;
    init_data->list = &device->send_free_control;

    rc = ompi_free_list_init_ex_new(&device->send_free_control,
                                    sizeof(mca_btl_openib_send_control_frag_t), opal_cache_line_size,
                                    OBJ_CLASS(mca_btl_openib_send_control_frag_t), length,
                                    mca_btl_openib_component.buffer_alignment,
                                    mca_btl_openib_component.ib_free_list_num, -1,
                                    mca_btl_openib_component.ib_free_list_inc,
                                    device->mpool, mca_btl_openib_frag_init,
                                    init_data);
    if (OPAL_SUCCESS != rc) {
        /* If we're "out of memory", this usually means that we ran
           out of registered memory, so show that error message */
        if (OPAL_ERR_OUT_OF_RESOURCE == rc ||
            OPAL_ERR_TEMP_OUT_OF_RESOURCE == rc) {
            errno = ENOMEM;
            mca_btl_openib_show_init_error(__FILE__, __LINE__,
                                           "ompi_free_list_init_ex_new",
                                           ibv_get_device_name(device->ib_dev));
        }
        return rc;
    }
    /* setup all the qps */
    for (int qp = 0 ; qp < mca_btl_openib_component.num_qps ; qp++) {
        init_data = (mca_btl_openib_frag_init_data_t *) malloc(sizeof(mca_btl_openib_frag_init_data_t));
        if (NULL == init_data) {
            BTL_ERROR(("Memory allocation fails"));
            return OPAL_ERR_OUT_OF_RESOURCE;
        }

        /* Initialize pool of send fragments */
        length = sizeof(mca_btl_openib_header_t) +
            sizeof(mca_btl_openib_header_coalesced_t) +
            sizeof(mca_btl_openib_control_header_t) +
            sizeof(mca_btl_openib_footer_t) +
            mca_btl_openib_component.qp_infos[qp].size;

        init_data->order = qp;
        init_data->list = &device->qps[qp].send_free;

        rc = ompi_free_list_init_ex_new(init_data->list,
                                        sizeof(mca_btl_openib_send_frag_t), opal_cache_line_size,
                                        OBJ_CLASS(mca_btl_openib_send_frag_t), length,
                                        mca_btl_openib_component.buffer_alignment,
                                        mca_btl_openib_component.ib_free_list_num,
                                        mca_btl_openib_component.ib_free_list_max,
                                        mca_btl_openib_component.ib_free_list_inc,
                                        device->mpool, mca_btl_openib_frag_init,
                                        init_data);
        if (OPAL_SUCCESS != rc) {
            /* If we're "out of memory", this usually means that we
               ran out of registered memory, so show that error
               message */
            if (OPAL_ERR_OUT_OF_RESOURCE == rc ||
                OPAL_ERR_TEMP_OUT_OF_RESOURCE == rc) {
                errno = ENOMEM;
                mca_btl_openib_show_init_error(__FILE__, __LINE__,
                                               "ompi_free_list_init_ex_new",
                                               ibv_get_device_name(device->ib_dev));
            }
            return OPAL_ERROR;
        }
        init_data = (mca_btl_openib_frag_init_data_t *) malloc(sizeof(mca_btl_openib_frag_init_data_t));
        length = sizeof(mca_btl_openib_header_t) +
            sizeof(mca_btl_openib_header_coalesced_t) +
            sizeof(mca_btl_openib_control_header_t) +
            sizeof(mca_btl_openib_footer_t) +
            mca_btl_openib_component.qp_infos[qp].size;

        init_data->order = qp;
        init_data->list = &device->qps[qp].recv_free;

        if (OPAL_SUCCESS != ompi_free_list_init_ex_new(init_data->list,
                                        sizeof(mca_btl_openib_recv_frag_t), opal_cache_line_size,
                                        OBJ_CLASS(mca_btl_openib_recv_frag_t),
                                        length, mca_btl_openib_component.buffer_alignment,
                                        mca_btl_openib_component.ib_free_list_num,
                                        mca_btl_openib_component.ib_free_list_max,
                                        mca_btl_openib_component.ib_free_list_inc,
                                        device->mpool, mca_btl_openib_frag_init,
                                        init_data)) {
            return OPAL_ERROR;
        }
    }
    device->ready_for_use = true;

    return OPAL_SUCCESS;
}
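The device-init code above repeatedly sizes and initializes fragment free lists via `ompi_free_list_init_ex_new`, with `ib_free_list_num`/`ib_free_list_max`/`ib_free_list_inc` controlling initial, maximum, and incremental allocation. As a rough illustration of that pattern only, here is a hypothetical, simplified stand-in (this is NOT OMPI's free-list API; all names below are invented for the sketch):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical simplified free list: a LIFO of fixed-size elements
 * that grows by 'inc' elements whenever it runs empty, up to 'max'
 * elements total -- mimicking the num/max/inc knobs above. */
typedef struct elem { struct elem *next; } elem_t;

typedef struct {
    elem_t *head;       /* LIFO of free elements */
    size_t  elem_size;  /* bytes per element */
    size_t  allocated;  /* elements created so far */
    size_t  max;        /* hard cap on allocations */
    size_t  inc;        /* growth step when the list runs empty */
} free_list_t;

int free_list_grow(free_list_t *fl)
{
    for (size_t i = 0; i < fl->inc && fl->allocated < fl->max; i++) {
        /* every element must at least hold the link pointer */
        size_t sz = fl->elem_size > sizeof(elem_t) ? fl->elem_size
                                                   : sizeof(elem_t);
        elem_t *e = (elem_t *) malloc(sz);
        if (NULL == e) {
            return -1;
        }
        e->next = fl->head;
        fl->head = e;
        fl->allocated++;
    }
    return (NULL == fl->head) ? -1 : 0;
}

elem_t *free_list_get(free_list_t *fl)
{
    /* grow on demand; failure maps to an out-of-resource error */
    if (NULL == fl->head && 0 != free_list_grow(fl)) {
        return NULL;
    }
    elem_t *e = fl->head;
    fl->head = e->next;
    return e;
}

void free_list_put(free_list_t *fl, elem_t *e)
{
    e->next = fl->head;
    fl->head = e;
}
```

The LIFO discipline means a just-returned fragment is handed out again first, which tends to keep hot buffers cache-resident, one reason real free lists are built this way.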
/*
|
|
|
|
* add a proc to this btl module
|
2005-07-20 19:17:18 +04:00
|
|
|
* creates an endpoint that is setup on the
|
|
|
|
* first send to the endpoint
|
2008-01-21 15:11:18 +03:00
|
|
|
*/
int mca_btl_openib_add_procs(
    struct mca_btl_base_module_t* btl,
    size_t nprocs,
    struct opal_proc_t **procs,
    struct mca_btl_base_endpoint_t** peers,
    opal_bitmap_t* reachable)
{
    mca_btl_openib_module_t* openib_btl = (mca_btl_openib_module_t*)btl;
    int i, j, rc, local_procs;
    int rem_subnet_id_port_cnt;
    int lcl_subnet_id_port_cnt = 0;
    int btl_rank = 0;
    mca_btl_base_endpoint_t* endpoint;
    opal_btl_openib_connect_base_module_t *local_cpc;
    opal_btl_openib_connect_base_module_data_t *remote_cpc_data;

    for (j = 0; j < mca_btl_openib_component.ib_num_btls; j++) {
        if (mca_btl_openib_component.openib_btls[j]->port_info.subnet_id
            == openib_btl->port_info.subnet_id) {
            if (openib_btl == mca_btl_openib_component.openib_btls[j]) {
                btl_rank = lcl_subnet_id_port_cnt;
            }
            lcl_subnet_id_port_cnt++;
        }
    }

#if HAVE_XRC
    if (MCA_BTL_XRC_ENABLED &&
        NULL == mca_btl_openib_component.ib_addr_table.ht_table) {
        if (OPAL_SUCCESS != opal_hash_table_init(
                &mca_btl_openib_component.ib_addr_table, nprocs)) {
            BTL_ERROR(("XRC internal error. Failed to allocate ib_table"));
            return OPAL_ERROR;
        }
    }
#endif

    rc = prepare_device_for_use(openib_btl->device);
    if (OPAL_SUCCESS != rc) {
        BTL_ERROR(("could not prepare openib device for use"));
        return rc;
    }

    for (i = 0, local_procs = 0; i < (int) nprocs; i++) {
        struct opal_proc_t* proc = procs[i];
        mca_btl_openib_proc_t* ib_proc;
        int remote_matching_port;

        opal_output(-1, "add procs: adding proc %d", i);

        if (OPAL_PROC_ON_LOCAL_NODE(proc->proc_flags)) {
            local_procs++;
        }

        /* OOB, XOOB, and RDMACM do not support SELF communication, so
         * mark the proc as unreachable by the openib btl */
        if (0 == opal_compare_proc(OPAL_PROC_MY_NAME, proc->proc_name)) {
            continue;
        }

#if defined(HAVE_STRUCT_IBV_DEVICE_TRANSPORT_TYPE)
        /* Most current iWARP adapters (June 2008) cannot handle
           talking to other processes on the same host (!) -- so mark
           them as unreachable (need to use sm). So for the moment,
           we'll just mark any local peer on an iWARP NIC as
           unreachable. See trac ticket #1352. */
        if (IBV_TRANSPORT_IWARP == openib_btl->device->ib_dev->transport_type &&
            OPAL_PROC_ON_LOCAL_NODE(proc->proc_flags)) {
            continue;
        }
#endif

        if (NULL == (ib_proc = mca_btl_openib_proc_create(proc))) {
            /* if we don't have connection info for this process, it's
             * okay because some other method might be able to reach it,
             * so just mark it as unreachable by us */
            continue;
        }

        /* check if the remote proc has any ports that:
           - are on the same subnet as the local proc, and
           - on that subnet, have a CPC in common with the local proc
         */
        remote_matching_port = -1;
        rem_subnet_id_port_cnt = 0;
        BTL_VERBOSE(("got %d port_infos ", ib_proc->proc_port_count));
        for (j = 0; j < (int) ib_proc->proc_port_count; j++) {
            BTL_VERBOSE(("got a subnet %016" PRIx64,
                         ib_proc->proc_ports[j].pm_port_info.subnet_id));
            if (ib_proc->proc_ports[j].pm_port_info.subnet_id ==
                openib_btl->port_info.subnet_id) {
                BTL_VERBOSE(("Got a matching subnet!"));
                if (rem_subnet_id_port_cnt == btl_rank) {
                    remote_matching_port = j;
                }
                rem_subnet_id_port_cnt++;
            }
        }

|
2008-01-15 02:22:03 +03:00
|
|
|
|
2008-05-02 15:52:33 +04:00
|
|
|
if (0 == rem_subnet_id_port_cnt) {
|
|
|
|
/* no use trying to communicate with this endpoint */
|
|
|
|
BTL_VERBOSE(("No matching subnet id/CPC was found, moving on.. "));
|
|
|
|
continue;
|
2008-01-15 02:22:03 +03:00
|
|
|
}
        /* If this process has multiple ports on a single subnet ID,
           and the remote proc also has multiple ports on this same
           subnet ID, the default connection pattern is:

               LOCAL                     REMOTE PEER
               1st port on subnet X <--> 1st port on subnet X
               2nd port on subnet X <--> 2nd port on subnet X
               3rd port on subnet X <--> 3rd port on subnet X
               ...etc.

           Note that the port numbers may not be contiguous, and they
           may not be the same on either side.  Hence the "1st", "2nd",
           "3rd", etc. notation, above.

           Hence, if the local "rank" of this module's port on the
           subnet ID is greater than the total number of ports on the
           peer on this same subnet, then we have no match.  So skip
           this connection. */
        if (rem_subnet_id_port_cnt < lcl_subnet_id_port_cnt &&
            btl_rank >= rem_subnet_id_port_cnt) {
            BTL_VERBOSE(("Not enough remote ports on this subnet id, moving on.. "));
            continue;
        }

        /* Now that we have verified that we're on the same subnet and
           the remote peer has enough ports, see if that specific port
           on the peer has a matching CPC. */
        assert(btl_rank <= ib_proc->proc_port_count);
        assert(remote_matching_port != -1);
        if (OPAL_SUCCESS !=
            opal_btl_openib_connect_base_find_match(openib_btl,
                                                    &(ib_proc->proc_ports[remote_matching_port]),
                                                    &local_cpc,
                                                    &remote_cpc_data)) {
            continue;
        }

        OPAL_THREAD_LOCK(&ib_proc->proc_lock);

        /* The btl_proc datastructure is shared by all IB BTL
         * instances that are trying to reach this destination.
         * Cache the peer instance on the btl_proc.
         */
        endpoint = OBJ_NEW(mca_btl_openib_endpoint_t);
        assert(((opal_object_t*)endpoint)->obj_reference_count == 1);
        if (NULL == endpoint) {
            OPAL_THREAD_UNLOCK(&ib_proc->proc_lock);
            return OPAL_ERR_OUT_OF_RESOURCE;
        }

#if HAVE_XRC
        if (MCA_BTL_XRC_ENABLED) {
            int rem_port_cnt = 0;
            for (j = 0; j < (int) ib_proc->proc_port_count; j++) {
                if (ib_proc->proc_ports[j].pm_port_info.subnet_id ==
                    openib_btl->port_info.subnet_id) {
                    if (rem_port_cnt == btl_rank)
                        break;
                    else
                        rem_port_cnt++;
                }
            }

            assert(rem_port_cnt == btl_rank);
            /* Push the subnet/lid/jobid to the xrc hash */
            rc = mca_btl_openib_ib_address_add_new(
                ib_proc->proc_ports[j].pm_port_info.lid,
                ib_proc->proc_ports[j].pm_port_info.subnet_id,
                proc->proc_name.jobid, endpoint);
            if (OPAL_SUCCESS != rc) {
                OPAL_THREAD_UNLOCK(&ib_proc->proc_lock);
                return OPAL_ERROR;
            }
        }
#endif

|
2011-07-04 18:00:41 +04:00
|
|
|
mca_btl_openib_endpoint_init(openib_btl, endpoint,
|
|
|
|
local_cpc,
|
2008-05-02 15:52:33 +04:00
|
|
|
&(ib_proc->proc_ports[remote_matching_port]),
|
|
|
|
remote_cpc_data);
|
|
|
|
|
2007-01-04 01:35:41 +03:00
|
|
|
rc = mca_btl_openib_proc_insert(ib_proc, endpoint);
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 04:47:28 +04:00
|
|
|
if (OPAL_SUCCESS != rc) {
|
2007-01-04 01:35:41 +03:00
|
|
|
OBJ_RELEASE(endpoint);
|
2005-08-14 23:03:09 +04:00
|
|
|
OPAL_THREAD_UNLOCK(&ib_proc->proc_lock);
|
2005-07-01 01:28:35 +04:00
|
|
|
continue;
|
|
|
|
}
|
2008-01-21 15:11:18 +03:00
|
|
|
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 04:47:28 +04:00
|
|
|
if(OPAL_SUCCESS != mca_btl_openib_tune_endpoint(openib_btl, endpoint)) {
|
2009-12-15 17:25:07 +03:00
|
|
|
OBJ_RELEASE(endpoint);
|
|
|
|
OPAL_THREAD_UNLOCK(&ib_proc->proc_lock);
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 04:47:28 +04:00
|
|
|
return OPAL_ERROR;
|
2009-12-15 17:25:07 +03:00
|
|
|
}
|
|
|
|
|
2008-07-23 04:28:59 +04:00
|
|
|
endpoint->index = opal_pointer_array_add(openib_btl->device->endpoints, (void*)endpoint);
|
2007-12-21 09:02:00 +03:00
|
|
|
if( 0 > endpoint->index ) {
|
|
|
|
OBJ_RELEASE(endpoint);
|
|
|
|
OPAL_THREAD_UNLOCK(&ib_proc->proc_lock);
|
|
|
|
continue;
|
|
|
|
}
|
2008-05-02 15:52:33 +04:00
|
|
|
|
|
|
|
/* Tell the selected CPC that it won. NOTE: This call is
|
|
|
|
outside of / separate from mca_btl_openib_endpoint_init()
|
|
|
|
because this function likely needs the endpoint->index. */
|
|
|
|
if (NULL != local_cpc->cbm_endpoint_init) {
|
|
|
|
rc = local_cpc->cbm_endpoint_init(endpoint);
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 04:47:28 +04:00
|
|
|
if (OPAL_SUCCESS != rc) {
|
2008-05-02 15:52:33 +04:00
|
|
|
OBJ_RELEASE(endpoint);
|
|
|
|
OPAL_THREAD_UNLOCK(&ib_proc->proc_lock);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2009-03-04 01:25:13 +03:00
|
|
|
opal_bitmap_set_bit(reachable, i);
|
2005-08-14 23:03:09 +04:00
|
|
|
OPAL_THREAD_UNLOCK(&ib_proc->proc_lock);
|
2008-01-21 15:11:18 +03:00
|
|
|
|
2007-01-04 01:35:41 +03:00
|
|
|
peers[i] = endpoint;
|
2005-07-01 01:28:35 +04:00
|
|
|
}
|
2007-01-04 01:35:41 +03:00
|
|
|
|
2012-07-19 21:52:21 +04:00
|
|
|
openib_btl->local_procs += local_procs;
|
2014-12-23 23:45:56 +03:00
|
|
|
openib_btl->device->mem_reg_max /= openib_btl->local_procs;
|
2012-07-19 21:52:21 +04:00
|
|
|
|
2014-11-20 09:22:43 +03:00
|
|
|
return mca_btl_openib_size_queues(openib_btl, nprocs);
|
2006-07-31 21:24:39 +04:00
|
|
|
}
|
2006-07-30 04:58:40 +04:00
|
|
|
|

/*
 * delete the proc as reachable from this btl module
 */
int mca_btl_openib_del_procs(struct mca_btl_base_module_t* btl,
                             size_t nprocs,
                             struct opal_proc_t **procs,
                             struct mca_btl_base_endpoint_t ** peers)
{
    int i, ep_index;
    mca_btl_openib_module_t* openib_btl = (mca_btl_openib_module_t*) btl;
This commit brings in two major things:
 1. Galen's fine-grain control of queue pair resources in the openib BTL.
 2. Pasha's new implementation of asynchronous HCA event handling.
Pasha's new implementation doesn't take much explanation, but the new "multifrag" stuff does.
Note that "svn merge" was not used to bring this new code from the /tmp/ib_multifrag branch -- something Bad happened in the periodic trunk pulls on that branch, making an actual merge back to the trunk effectively impossible (i.e., lots and lots of arbitrary conflicts and artificial changes). :-(
== Fine-grain control of queue pair resources ==
Galen's fine-grain control of queue pair resources in the OpenIB BTL (thanks to Gleb for fixing broken code and providing additional functionality, Pasha for finding broken code, and Jeff for doing all the svn work and regression testing).
Prior to this commit, the OpenIB BTL created two queue pairs: one for eager-size fragments and one for max-send-size fragments. When use of the shared receive queue (SRQ) was specified (via "-mca btl_openib_use_srq 1"), these QPs would use a shared receive queue for receive buffers instead of the default per-peer (PP) receive queues and buffers. One consequence of this design is that receive buffer utilization (the size of the data received as a percentage of the receive buffer used for the data) was quite poor for a number of applications.
The new design allows multiple QPs to be specified at runtime. Each QP can be set up to use PP or SRQ receive buffers, as well as giving fine-grained control over receive buffer size, the number of receive buffers to post, when to replenish the receive queue (low water mark), and, for SRQ QPs, the number of outstanding sends. The following is an example of the syntax describing QPs to the OpenIB BTL using the new MCA parameter btl_openib_receive_queues:
{{{
-mca btl_openib_receive_queues \
"P,128,16,4;S,1024,256,128,32;S,4096,256,128,32;S,65536,256,128,32"
}}}
Each QP description is delimited by ";" (semicolon), with individual fields of the QP description delimited by "," (comma). The above example therefore describes 4 QPs.
The first QP is:
P,128,16,4
Per-peer receive buffer QPs are indicated by a starting field of "P"; the first QP (shown above) is therefore a per-peer QP. The second field indicates the size of the receive buffer in bytes (128 bytes). The third field indicates the number of receive buffers to allocate to the QP (16). The fourth field indicates the low watermark for receive buffers, at which point the BTL will repost receive buffers to the QP (4).
The second QP is:
S,1024,256,128,32
Shared-receive-queue QPs are indicated by a starting field of "S"; the second QP (shown above) is therefore a shared-receive-queue QP. The second, third, and fourth fields are the same as in the per-peer QP. The fifth field is the number of outstanding sends allowed at a given time on the QP (32). This provides a "good enough" flow-control mechanism for some regular communication patterns.
QPs MUST be specified in ascending receive buffer size order. This requirement may be removed prior to the 1.3 release.
This commit was SVN r15474.
    mca_btl_openib_endpoint_t* endpoint;

    for (i = 0; i < (int) nprocs; i++) {
        mca_btl_base_endpoint_t* del_endpoint = peers[i];
        for (ep_index = 0;
             ep_index < opal_pointer_array_get_size(openib_btl->device->endpoints);
             ep_index++) {
            endpoint = (mca_btl_openib_endpoint_t *)
                opal_pointer_array_get_item(openib_btl->device->endpoints,
                                            ep_index);
            if (!endpoint || endpoint->endpoint_btl != openib_btl) {
                continue;
            }
            if (endpoint == del_endpoint) {
                int j;
                BTL_VERBOSE(("in del_procs %d, setting another endpoint to null",
                             ep_index));
                /* remove the endpoint from eager_rdma_buffers */
                for (j = 0; j < openib_btl->device->eager_rdma_buffers_count; j++) {
                    if (openib_btl->device->eager_rdma_buffers[j] == endpoint) {
                        /* should it be obj_reference_count == 2 ? */
                        assert(((opal_object_t*)endpoint)->obj_reference_count > 1);
                        OBJ_RELEASE(endpoint);
                        openib_btl->device->eager_rdma_buffers[j] = NULL;
                        /* can we simply break and leave the for loop ? */
                    }
                }
                opal_pointer_array_set_item(openib_btl->device->endpoints,
                                            ep_index, NULL);
                assert(((opal_object_t*)endpoint)->obj_reference_count == 1);
                mca_btl_openib_proc_remove(procs[i], endpoint);
                OBJ_RELEASE(endpoint);
            }
        }
    }

    return OPAL_SUCCESS;
}
/*
|
2006-08-17 00:21:38 +04:00
|
|
|
*Register callback function for error handling..
|
2008-01-21 15:11:18 +03:00
|
|
|
*/
|
int mca_btl_openib_register_error_cb(
    struct mca_btl_base_module_t* btl,
    mca_btl_base_module_error_cb_fn_t cbfunc)
{
    mca_btl_openib_module_t* openib_btl = (mca_btl_openib_module_t*) btl;

    openib_btl->error_cb = cbfunc; /* stash for later */
    return OPAL_SUCCESS;
}

static inline mca_btl_base_descriptor_t *
ib_frag_alloc(mca_btl_openib_module_t *btl, size_t size, uint8_t order,
              uint32_t flags)
{
    int qp;
    ompi_free_list_item_t* item = NULL;

    for(qp = 0; qp < mca_btl_openib_component.num_qps; qp++) {
        if(mca_btl_openib_component.qp_infos[qp].size >= size) {
            OMPI_FREE_LIST_GET_MT(&btl->device->qps[qp].send_free, item);
            if(item)
                break;
        }
    }
    if(NULL == item)
        return NULL;

    /* not all upper layer users set this */
    to_base_frag(item)->segment.base.seg_len = size;
    to_base_frag(item)->base.order = order;
    to_base_frag(item)->base.des_flags = flags;

    assert(to_send_frag(item)->qp_idx <= order);

    return &to_base_frag(item)->base;
}

/* check if a pending fragment has enough space for coalescing */
static mca_btl_openib_send_frag_t *check_coalescing(opal_list_t *frag_list,
        opal_mutex_t *lock, struct mca_btl_base_endpoint_t *ep, size_t size,
        mca_btl_openib_coalesced_frag_t **cfrag)
{
    mca_btl_openib_send_frag_t *frag = NULL;

    if (opal_list_is_empty(frag_list))
        return NULL;

    OPAL_THREAD_LOCK(lock);
    if (!opal_list_is_empty(frag_list)) {
        int qp;
        size_t total_length;
        opal_list_item_t *i = opal_list_get_first(frag_list);

        frag = to_send_frag(i);
        if(to_com_frag(frag)->endpoint != ep ||
                MCA_BTL_OPENIB_FRAG_CONTROL == openib_frag_type(frag)) {
            OPAL_THREAD_UNLOCK(lock);
            return NULL;
        }

        total_length = size + frag->coalesced_length +
            to_base_frag(frag)->segment.base.seg_len +
            sizeof(mca_btl_openib_header_coalesced_t);

        qp = to_base_frag(frag)->base.order;

        if(total_length <= mca_btl_openib_component.qp_infos[qp].size) {
            /* make sure we can allocate a coalescing frag before returning success */
            *cfrag = alloc_coalesced_frag();
            if (OPAL_LIKELY(NULL != *cfrag)) {
                (*cfrag)->send_frag = frag;
                (*cfrag)->sent = false;

                opal_list_remove_first(frag_list);
            } else {
                frag = NULL;
            }
        } else {
            frag = NULL;
        }
    }
    OPAL_THREAD_UNLOCK(lock);

    return frag;
}

/**
 * Allocate a segment.
 *
 * @param btl (IN)  BTL module
 * @param size (IN) Size of segment to allocate
 *
 * When allocating a segment we pull a pre-allocated segment
 * from one of two free lists, an eager list and a max list
 */
mca_btl_base_descriptor_t* mca_btl_openib_alloc(
    struct mca_btl_base_module_t* btl,
    struct mca_btl_base_endpoint_t* ep,
    uint8_t order,
    size_t size,
    uint32_t flags)
{
    mca_btl_openib_module_t *obtl = (mca_btl_openib_module_t*)btl;
    int qp = frag_size_to_order(obtl, size);
    mca_btl_openib_send_frag_t *sfrag = NULL;
    mca_btl_openib_coalesced_frag_t *cfrag = NULL;

    assert(qp != MCA_BTL_NO_ORDER);

    if(mca_btl_openib_component.use_message_coalescing &&
       (flags & MCA_BTL_DES_FLAGS_BTL_OWNERSHIP)) {
        int prio = !(flags & MCA_BTL_DES_FLAGS_PRIORITY);

        sfrag = check_coalescing(&ep->qps[qp].no_wqe_pending_frags[prio],
                                 &ep->endpoint_lock, ep, size, &cfrag);

        if (NULL == sfrag) {
            if(BTL_OPENIB_QP_TYPE_PP(qp)) {
                sfrag = check_coalescing(&ep->qps[qp].no_credits_pending_frags[prio],
                                         &ep->endpoint_lock, ep, size, &cfrag);
            } else {
                sfrag = check_coalescing(
                        &obtl->qps[qp].u.srq_qp.pending_frags[prio],
                        &obtl->ib_lock, ep, size, &cfrag);
            }
        }
    }

    if (NULL == sfrag) {
        return ib_frag_alloc((mca_btl_openib_module_t*)btl, size, order, flags);
    }

    /* begin coalescing message */

    /* fix up the coalescing header if this is the first coalesced frag */
    if(sfrag->hdr != sfrag->chdr) {
        mca_btl_openib_control_header_t *ctrl_hdr;
        mca_btl_openib_header_coalesced_t *clsc_hdr;
        uint8_t org_tag;

        org_tag = sfrag->hdr->tag;
        sfrag->hdr = sfrag->chdr;
        ctrl_hdr = (mca_btl_openib_control_header_t*)(sfrag->hdr + 1);
        clsc_hdr = (mca_btl_openib_header_coalesced_t*)(ctrl_hdr + 1);
        sfrag->hdr->tag = MCA_BTL_TAG_IB;
        ctrl_hdr->type = MCA_BTL_OPENIB_CONTROL_COALESCED;
        clsc_hdr->tag = org_tag;
        clsc_hdr->size = to_base_frag(sfrag)->segment.base.seg_len;
        clsc_hdr->alloc_size = to_base_frag(sfrag)->segment.base.seg_len;
        if(ep->nbo)
            BTL_OPENIB_HEADER_COALESCED_HTON(*clsc_hdr);
        sfrag->coalesced_length = sizeof(mca_btl_openib_control_header_t) +
            sizeof(mca_btl_openib_header_coalesced_t);
        to_com_frag(sfrag)->sg_entry.addr = (uint64_t)(uintptr_t)sfrag->hdr;
    }

    cfrag->hdr = (mca_btl_openib_header_coalesced_t*)((unsigned char*)(sfrag->hdr + 1) +
            sfrag->coalesced_length +
            to_base_frag(sfrag)->segment.base.seg_len);
    cfrag->hdr = (mca_btl_openib_header_coalesced_t*)BTL_OPENIB_ALIGN_COALESCE_HDR(cfrag->hdr);
    cfrag->hdr->alloc_size = size;

    /* point the coalesced frag pointer into the data buffer */
    to_base_frag(cfrag)->segment.base.seg_addr.pval = cfrag->hdr + 1;
    to_base_frag(cfrag)->segment.base.seg_len = size;

    /* NTH: there is no reason to append the coalesced fragment here. No more
     * fragments will be added until either send or free has been called on
     * the coalesced frag. */

    to_base_frag(cfrag)->base.des_flags = flags;

    return &to_base_frag(cfrag)->base;
}

/**
 * Return a segment
 *
 * Return the segment to the appropriate
 * preallocated segment list
 */
int mca_btl_openib_free(
    struct mca_btl_base_module_t* btl,
    mca_btl_base_descriptor_t* des)
{
    /* is this fragment pointing at user memory? */
    if(MCA_BTL_OPENIB_FRAG_SEND_USER == openib_frag_type(des) ||
       MCA_BTL_OPENIB_FRAG_RECV_USER == openib_frag_type(des)) {
        mca_btl_openib_com_frag_t* frag = to_com_frag(des);

        if(frag->registration != NULL) {
            btl->btl_mpool->mpool_deregister(btl->btl_mpool,
                    (mca_mpool_base_registration_t*)frag->registration);
            frag->registration = NULL;
        }
    }

    /* reset these fields on free so we will not have to do it on alloc */
    to_base_frag(des)->base.des_flags = 0;
    switch(openib_frag_type(des)) {
        case MCA_BTL_OPENIB_FRAG_SEND:
            to_send_frag(des)->hdr = (mca_btl_openib_header_t*)
                (((unsigned char*)to_send_frag(des)->chdr) +
                 sizeof(mca_btl_openib_header_coalesced_t) +
                 sizeof(mca_btl_openib_control_header_t));
            to_com_frag(des)->sg_entry.addr =
                (uint64_t)(uintptr_t)to_send_frag(des)->hdr;
            to_send_frag(des)->coalesced_length = 0;
            to_base_frag(des)->segment.base.seg_addr.pval =
                to_send_frag(des)->hdr + 1;
            assert(!opal_list_get_size(&to_send_frag(des)->coalesced_frags));
            /* fall through */
        case MCA_BTL_OPENIB_FRAG_RECV:
        case MCA_BTL_OPENIB_FRAG_RECV_USER:
        case MCA_BTL_OPENIB_FRAG_SEND_USER:
            to_base_frag(des)->base.des_remote = NULL;
            to_base_frag(des)->base.des_remote_count = 0;
            break;
        default:
            break;
    }

    if (openib_frag_type(des) == MCA_BTL_OPENIB_FRAG_COALESCED && !to_coalesced_frag(des)->sent) {
        mca_btl_openib_send_frag_t *sfrag = to_coalesced_frag(des)->send_frag;

        /* the coalesced fragment would have sent the original fragment but that
         * will not happen so send the fragment now */
        mca_btl_openib_endpoint_send(to_com_frag(sfrag)->endpoint, sfrag);
    }

    MCA_BTL_IB_FRAG_RETURN(des);

    return OPAL_SUCCESS;
}

/**
 * register user buffer or pack
 * data into pre-registered buffer and return a
 * descriptor that can be
 * used for send/put.
 *
 * @param btl (IN)  BTL module
 * @param peer (IN) BTL peer addressing
 *
 * prepare source's behavior depends on the following:
 * Has a valid memory registration been passed to prepare_src?
 *    if so we attempt to use the pre-registered user-buffer; if the memory registration
 *    is too small (only a portion of the user buffer) then we must reregister the user buffer
 * Has the user requested the memory to be left pinned?
 *    if so we insert the memory registration into a memory tree for later lookup; we
 *    may also remove a previous registration if a MRU (most recently used) list of
 *    registrations is full, this prevents resources from being exhausted.
 * Is the requested size larger than the btl's max send size?
 *    if so and we aren't asked to leave the registration pinned, then we register the memory
 *    if the user's buffer is contiguous
 * Otherwise we choose from two free lists of pre-registered memory in which to pack the data.
 */
mca_btl_base_descriptor_t* mca_btl_openib_prepare_src(
    struct mca_btl_base_module_t* btl,
    struct mca_btl_base_endpoint_t* endpoint,
    mca_mpool_base_registration_t* registration,
    struct opal_convertor_t* convertor,
    uint8_t order,
    size_t reserve,
    size_t* size,
    uint32_t flags)
{
    mca_btl_openib_module_t *openib_btl;
    mca_btl_openib_reg_t *openib_reg;
    mca_btl_openib_com_frag_t *frag = NULL;
    struct iovec iov;
    uint32_t iov_count = 1;
    size_t max_data = *size;
    void *ptr;
    int rc;

    openib_btl = (mca_btl_openib_module_t*)btl;

#if OPAL_CUDA_GDR_SUPPORT
    if(opal_convertor_cuda_need_buffers(convertor) == false && 0 == reserve) {
#else
    if(opal_convertor_need_buffers(convertor) == false && 0 == reserve) {
#endif /* OPAL_CUDA_GDR_SUPPORT */
        /* GMS bloody HACK! */
        if(registration != NULL || max_data > btl->btl_max_send_size) {
            frag = alloc_send_user_frag();
            if(NULL == frag) {
                return NULL;
            }

            iov.iov_len = max_data;
            iov.iov_base = NULL;

            opal_convertor_pack(convertor, &iov, &iov_count, &max_data);

            *size = max_data;

            if(NULL == registration) {
                rc = btl->btl_mpool->mpool_register(btl->btl_mpool,
                        iov.iov_base, max_data, 0, &registration);
                if(OPAL_SUCCESS != rc || NULL == registration) {
                    MCA_BTL_IB_FRAG_RETURN(frag);
                    return NULL;
                }
                /* keep track of the registration we did */
                to_com_frag(frag)->registration =
                    (mca_btl_openib_reg_t*)registration;
            }
            openib_reg = (mca_btl_openib_reg_t*)registration;

            frag->sg_entry.length = max_data;
            frag->sg_entry.lkey = openib_reg->mr->lkey;
            frag->sg_entry.addr = (uint64_t)(uintptr_t)iov.iov_base;

            to_base_frag(frag)->base.order = order;
            to_base_frag(frag)->base.des_flags = flags;
            to_base_frag(frag)->segment.base.seg_len = max_data;
            to_base_frag(frag)->segment.base.seg_addr.lval = (uint64_t)(uintptr_t) iov.iov_base;
            to_base_frag(frag)->segment.key = frag->sg_entry.lkey;

            assert(MCA_BTL_NO_ORDER == order);

            BTL_VERBOSE(("frag->sg_entry.lkey = %" PRIu32 " .addr = %" PRIx64,
                         frag->sg_entry.lkey, frag->sg_entry.addr));

            return &to_base_frag(frag)->base;
        }
    }

    assert(MCA_BTL_NO_ORDER == order);

    if(max_data + reserve > btl->btl_max_send_size) {
        max_data = btl->btl_max_send_size - reserve;
    }

    if (OPAL_UNLIKELY(0 == reserve)) {
        frag = (mca_btl_openib_com_frag_t *) ib_frag_alloc(openib_btl, max_data, order, flags);
        if(NULL == frag)
            return NULL;

        /* NTH: this frag will be used for either a get or put so we need to set the lval to be
           consistent with the usage in get and put. the pval will be restored in mca_btl_openib_free */
        ptr = to_base_frag(frag)->segment.base.seg_addr.pval;
        to_base_frag(frag)->segment.base.seg_addr.lval =
            (uint64_t)(uintptr_t) ptr;
    } else {
        frag =
            (mca_btl_openib_com_frag_t *) mca_btl_openib_alloc(btl, endpoint, order,
                    max_data + reserve, flags);
        if(NULL == frag)
            return NULL;

        ptr = to_base_frag(frag)->segment.base.seg_addr.pval;
    }

    iov.iov_len = max_data;
    iov.iov_base = (IOVBASE_TYPE *) ( (unsigned char*) ptr + reserve );

    rc = opal_convertor_pack(convertor, &iov, &iov_count, &max_data);

#if OPAL_CUDA_SUPPORT /* CUDA_ASYNC_SEND */
    /* If the convertor is copying the data asynchronously, then record an event
     * that will trigger the callback when it completes. Mark descriptor as async.
     * No need for this in the case we are not sending any GPU data. */
    if ((convertor->flags & CONVERTOR_CUDA_ASYNC) && (0 != max_data)) {
        mca_common_cuda_record_dtoh_event("btl_openib", (mca_btl_base_descriptor_t *)frag);
        to_base_frag(frag)->base.des_flags = flags | MCA_BTL_DES_FLAGS_CUDA_COPY_ASYNC;
    }
#endif /* OPAL_CUDA_SUPPORT */

    *size = max_data;

    /* not all upper layer users set this */
    to_base_frag(frag)->segment.base.seg_len = max_data + reserve;

    return &to_base_frag(frag)->base;
}

/**
 * Prepare the dst buffer
 *
 * @param btl (IN)  BTL module
 * @param peer (IN) BTL peer addressing
 *
 * prepare dest's behavior depends on the following:
 * Has a valid memory registration been passed to prepare_dst?
 *    if so we attempt to use the pre-registered user-buffer; if the memory registration
 *    is too small (only a portion of the user buffer) then we must reregister the user buffer
 * Has the user requested the memory to be left pinned?
 *    if so we insert the memory registration into a memory tree for later lookup; we
 *    may also remove a previous registration if a MRU (most recently used) list of
 *    registrations is full, this prevents resources from being exhausted.
 */
mca_btl_base_descriptor_t* mca_btl_openib_prepare_dst(
    struct mca_btl_base_module_t* btl,
    struct mca_btl_base_endpoint_t* endpoint,
    mca_mpool_base_registration_t* registration,
    struct opal_convertor_t* convertor,
    uint8_t order,
    size_t reserve,
    size_t* size,
    uint32_t flags)
{
    mca_btl_openib_module_t *openib_btl;
    mca_btl_openib_component_t *openib_component;
    mca_btl_openib_com_frag_t *frag;
    mca_btl_openib_reg_t *openib_reg;
    uint32_t max_msg_sz;
    int rc;
    void *buffer;

    openib_btl = (mca_btl_openib_module_t*)btl;
    openib_component = (mca_btl_openib_component_t*)btl->btl_component;

    frag = alloc_recv_user_frag();
    if(NULL == frag) {
        return NULL;
    }

    /* max_msg_sz is the maximum message size of the HCA (hw limitation);
       set the minimum between the local max_msg_sz and the remote */
    max_msg_sz = MIN(openib_btl->ib_port_attr.max_msg_sz,
            endpoint->endpoint_btl->ib_port_attr.max_msg_sz);

    /* check if user has explicitly limited the max message size */
    if (openib_component->max_hw_msg_size > 0 &&
            max_msg_sz > (size_t)openib_component->max_hw_msg_size) {
        max_msg_sz = openib_component->max_hw_msg_size;
    }

    /* limit the message to max_msg_sz */
    if (*size > (size_t)max_msg_sz) {
        *size = (size_t)max_msg_sz;
        BTL_VERBOSE(("message size limited to %" PRIsize_t "\n", *size));
    }

    opal_convertor_get_current_pointer(convertor, &buffer);

    if(NULL == registration){
        /* we didn't get a memory registration passed in, so we have to
         * register the region ourselves
         */
        uint32_t mflags = 0;
#if OPAL_CUDA_GDR_SUPPORT
        if (convertor->flags & CONVERTOR_CUDA) {
            mflags |= MCA_MPOOL_FLAGS_CUDA_GPU_MEM;
        }
#endif /* OPAL_CUDA_GDR_SUPPORT */
        rc = btl->btl_mpool->mpool_register(btl->btl_mpool, buffer, *size, mflags,
                &registration);
        if(OPAL_SUCCESS != rc || NULL == registration) {
            MCA_BTL_IB_FRAG_RETURN(frag);
            return NULL;
        }
        /* keep track of the registration we did */
        frag->registration = (mca_btl_openib_reg_t*)registration;
    }
    openib_reg = (mca_btl_openib_reg_t*)registration;

    frag->sg_entry.length = *size;
    frag->sg_entry.lkey = openib_reg->mr->lkey;
    frag->sg_entry.addr = (uint64_t)(uintptr_t)buffer;

    to_base_frag(frag)->segment.base.seg_addr.lval = (uint64_t)(uintptr_t) buffer;
    to_base_frag(frag)->segment.base.seg_len = *size;
    to_base_frag(frag)->segment.key = openib_reg->mr->rkey;
    to_base_frag(frag)->base.order = order;
    to_base_frag(frag)->base.des_flags = flags;

    BTL_VERBOSE(("frag->sg_entry.lkey = %" PRIu32 " .addr = %" PRIx64 " "
                 "rkey = %" PRIu32, frag->sg_entry.lkey, frag->sg_entry.addr,
                 openib_reg->mr->rkey));

    return &to_base_frag(frag)->base;
}

static int mca_btl_openib_finalize_resources(struct mca_btl_base_module_t* btl) {
    mca_btl_openib_module_t* openib_btl;
    mca_btl_openib_endpoint_t* endpoint;
    int ep_index, i;
    int qp, rc = OPAL_SUCCESS;

    openib_btl = (mca_btl_openib_module_t*) btl;

    /* Sanity check */
    if( mca_btl_openib_component.ib_num_btls <= 0 ) {
        return OPAL_SUCCESS;
    }

    /* Release all QPs */
    if (NULL != openib_btl->device->endpoints) {
        for (ep_index=0;
             ep_index < opal_pointer_array_get_size(openib_btl->device->endpoints);
             ep_index++) {
            endpoint=(mca_btl_openib_endpoint_t *)opal_pointer_array_get_item(openib_btl->device->endpoints,
                                                                              ep_index);
            if(!endpoint) {
                BTL_VERBOSE(("In finalize, got another null endpoint"));
                continue;
            }
            if(endpoint->endpoint_btl != openib_btl) {
                continue;
            }
            for(i = 0; i < openib_btl->device->eager_rdma_buffers_count; i++) {
                if(openib_btl->device->eager_rdma_buffers[i] == endpoint) {
                    openib_btl->device->eager_rdma_buffers[i] = NULL;
                    OBJ_RELEASE(endpoint);
                }
            }
            opal_pointer_array_set_item(openib_btl->device->endpoints,
                                        ep_index, NULL);
            assert(((opal_object_t*)endpoint)->obj_reference_count == 1);
            OBJ_RELEASE(endpoint);
        }
    }

    /* Release SRQ resources */
    for(qp = 0; qp < mca_btl_openib_component.num_qps; qp++) {
        if(!BTL_OPENIB_QP_TYPE_PP(qp)) {
            MCA_BTL_OPENIB_CLEAN_PENDING_FRAGS(
                    &openib_btl->qps[qp].u.srq_qp.pending_frags[0]);
            MCA_BTL_OPENIB_CLEAN_PENDING_FRAGS(
                    &openib_btl->qps[qp].u.srq_qp.pending_frags[1]);
            if (NULL != openib_btl->qps[qp].u.srq_qp.srq) {
                opal_mutex_t *lock =
                    &mca_btl_openib_component.srq_manager.lock;

                opal_hash_table_t *srq_addr_table =
                    &mca_btl_openib_component.srq_manager.srq_addr_table;

                opal_mutex_lock(lock);
                if (OPAL_SUCCESS !=
                        opal_hash_table_remove_value_ptr(srq_addr_table,
                                &openib_btl->qps[qp].u.srq_qp.srq,
                                sizeof(struct ibv_srq *))) {
                    BTL_VERBOSE(("Failed to remove SRQ %d entry from hash table.", qp));
                    rc = OPAL_ERROR;
                }
                opal_mutex_unlock(lock);
                if (0 != ibv_destroy_srq(openib_btl->qps[qp].u.srq_qp.srq)) {
                    BTL_VERBOSE(("Failed to close SRQ %d", qp));
                    rc = OPAL_ERROR;
                }
            }

            OBJ_DESTRUCT(&openib_btl->qps[qp].u.srq_qp.pending_frags[0]);
            OBJ_DESTRUCT(&openib_btl->qps[qp].u.srq_qp.pending_frags[1]);
        }
    }

    /* Finalize the CPC modules on this openib module */
    for (i = 0; i < openib_btl->num_cpcs; ++i) {
        if (NULL != openib_btl->cpcs[i]->cbm_finalize) {
            openib_btl->cpcs[i]->cbm_finalize(openib_btl, openib_btl->cpcs[i]);
        }
        free(openib_btl->cpcs[i]);
    }
    free(openib_btl->cpcs);

    /* Release device if there are no more users */
    if(!(--openib_btl->device->btls)) {
        OBJ_RELEASE(openib_btl->device);
    }

    if (NULL != openib_btl->qps) {
        free(openib_btl->qps);
    }

    return rc;
}


int mca_btl_openib_finalize(struct mca_btl_base_module_t* btl)
{
    mca_btl_openib_module_t* openib_btl;
    int i, rc = OPAL_SUCCESS;

    openib_btl = (mca_btl_openib_module_t*) btl;

    /* Sanity check */
    if( mca_btl_openib_component.ib_num_btls <= 0 ) {
        return 0;
    }

    /* Remove the btl from component list */
    if ( mca_btl_openib_component.ib_num_btls > 0 ) {
        for(i = 0; i < mca_btl_openib_component.ib_num_btls; i++){
            if (mca_btl_openib_component.openib_btls[i] == openib_btl){
                if( OPAL_SUCCESS != (rc = mca_btl_openib_finalize_resources(btl) ) ) {
                    BTL_VERBOSE(("Failed to finalize resources"));
                }
                mca_btl_openib_component.openib_btls[i] =
                    mca_btl_openib_component.openib_btls[mca_btl_openib_component.ib_num_btls-1];
                break;
            }
        }
    }

    mca_btl_openib_component.ib_num_btls--;

    OBJ_DESTRUCT(&openib_btl->ib_lock);
    free(openib_btl);

    BTL_VERBOSE(("Success in closing BTL resources"));

    return rc;
}


/*
 * Send immediate: minimum function calls, minimum checks; send the data ASAP.
 * If the BTL cannot send the message immediately, it creates a message
 * descriptor and returns it to the PML.
 */
int mca_btl_openib_sendi( struct mca_btl_base_module_t* btl,
                          struct mca_btl_base_endpoint_t* ep,
                          struct opal_convertor_t* convertor,
                          void* header,
                          size_t header_size,
                          size_t payload_size,
                          uint8_t order,
                          uint32_t flags,
                          mca_btl_base_tag_t tag,
                          mca_btl_base_descriptor_t** descriptor)
{
    mca_btl_openib_module_t *obtl = (mca_btl_openib_module_t*)btl;
    size_t size = payload_size + header_size;
    int qp = frag_size_to_order(obtl, size),
        prio = !(flags & MCA_BTL_DES_FLAGS_PRIORITY),
        ib_rc;
    bool do_rdma = false;
    ompi_free_list_item_t* item = NULL;
    mca_btl_openib_frag_t *frag;
    mca_btl_openib_header_t *hdr;
    int send_signaled;
    int rc;
2009-03-25 19:53:26 +03:00
|
|
|
|
|
|
|
OPAL_THREAD_LOCK(&ep->endpoint_lock);
|
|
|
|
|
|
|
|
if (OPAL_UNLIKELY(MCA_BTL_IB_CONNECTED != ep->endpoint_state)) {
|
|
|
|
goto cant_send;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* If it is pending messages on the qp - we can not send */
|
|
|
|
if(OPAL_UNLIKELY(!opal_list_is_empty(&ep->qps[qp].no_wqe_pending_frags[prio]))) {
|
|
|
|
goto cant_send;
|
|
|
|
}
|
|
|
|
|
2014-10-25 00:35:01 +04:00
|
|
|
#if OPAL_CUDA_GDR_SUPPORT
|
|
|
|
/* We do not want to use this path when we have GDR support */
|
|
|
|
if (convertor->flags & CONVERTOR_CUDA) {
|
|
|
|
goto cant_send;
|
|
|
|
}
|
|
|
|
#endif /* OPAL_CUDA_GDR_SUPPORT */
|
|
|
|
|
2009-03-25 19:53:26 +03:00
|
|
|
/* Allocate WQE */
|
|
|
|
if(OPAL_UNLIKELY(qp_get_wqe(ep, qp) < 0)) {
|
2015-01-06 18:47:07 +03:00
|
|
|
goto cant_send_wqe;
|
2009-03-25 19:53:26 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Allocate fragment */
|
2013-07-09 02:07:52 +04:00
|
|
|
OMPI_FREE_LIST_GET_MT(&obtl->device->qps[qp].send_free, item);
|
2009-03-25 19:53:26 +03:00
|
|
|
if(OPAL_UNLIKELY(NULL == item)) {
|
|
|
|
/* we don't return NULL because maybe later we will try to coalesce */
|
2015-01-06 18:47:07 +03:00
|
|
|
goto cant_send_wqe;
|
2009-03-25 19:53:26 +03:00
|
|
|
}
|
|
|
|
frag = to_base_frag(item);
|
|
|
|
hdr = to_send_frag(item)->hdr;
|
2015-01-06 18:47:07 +03:00
|
|
|
|
|
|
|
/* eager rdma or send ? Check eager rdma credits */
|
|
|
|
/* Note: Maybe we want to implement isend only for eager rdma ?*/
|
|
|
|
rc = mca_btl_openib_endpoint_credit_acquire (ep, qp, prio, size, &do_rdma,
|
|
|
|
to_send_frag(frag), false);
|
|
|
|
if (OPAL_UNLIKELY(OPAL_SUCCESS != rc)) {
|
|
|
|
goto cant_send_frag;
|
|
|
|
}
|
|
|
|
|
2012-06-21 21:09:12 +04:00
|
|
|
frag->segment.base.seg_len = size;
|
2009-03-25 19:53:26 +03:00
|
|
|
frag->base.order = qp;
|
|
|
|
frag->base.des_flags = flags;
|
|
|
|
hdr->tag = tag;
|
|
|
|
to_com_frag(item)->endpoint = ep;
|
|
|
|
|
|
|
|
/* put match header */
|
2012-06-21 21:09:12 +04:00
|
|
|
memcpy(frag->segment.base.seg_addr.pval, header, header_size);
|
2009-03-25 19:53:26 +03:00
|
|
|
|
|
|
|
/* Pack data */
|
|
|
|
if(payload_size) {
|
|
|
|
size_t max_data;
|
|
|
|
struct iovec iov;
|
|
|
|
uint32_t iov_count;
|
|
|
|
/* pack the data into the supplied buffer */
|
2012-06-21 21:09:12 +04:00
|
|
|
iov.iov_base = (IOVBASE_TYPE*)((unsigned char*)frag->segment.base.seg_addr.pval + header_size);
|
2009-03-25 19:53:26 +03:00
|
|
|
iov.iov_len = max_data = payload_size;
|
|
|
|
iov_count = 1;
|
|
|
|
|
- Split the datatype engine into two parts: an MPI specific part in
OMPI
and a language agnostic part in OPAL. The convertor is completely
moved into OPAL. This offers several benefits as described in RFC
http://www.open-mpi.org/community/lists/devel/2009/07/6387.php
namely:
- Fewer basic types (int* and float* types, boolean and wchar
- Fixing naming scheme to ompi-nomenclature.
- Usability outside of the ompi-layer.
- Due to the fixed nature of simple opal types, their information is
completely
known at compile time and therefore constified
- With fewer datatypes (22), the actual sizes of bit-field types may be
reduced
from 64 to 32 bits, allowing reorganizing the opal_datatype
structure, eliminating holes and keeping data required in convertor
(upon send/recv) in one cacheline...
This has implications to the convertor-datastructure and other parts
of the code.
- Several performance tests have been run, the netpipe latency does not
change with
this patch on Linux/x86-64 on the smoky cluster.
- Extensive tests have been done to verify correctness (no new
regressions) using:
1. mpi_test_suite on linux/x86-64 using clean ompi-trunk and
ompi-ddt:
a. running both trunk and ompi-ddt resulted in no differences
(except for MPI_SHORT_INT and MPI_TYPE_MIX_LB_UB do now run
correctly).
b. with --enable-memchecker and running under valgrind (one buglet
when run with static found in test-suite, commited)
2. ibm testsuite on linux/x86-64 using clean ompi-trunk and ompi-ddt:
all passed (except for the dynamic/ tests failed!! as trunk/MTT)
3. compilation and usage of HDF5 tests on Jaguar using PGI and
PathScale compilers.
4. compilation and usage on Scicortex.
- Please note, that for the heterogeneous case, (-m32 compiled
binaries/ompi), neither
ompi-trunk, nor ompi-ddt branch would successfully launch.
This commit was SVN r21641.
2009-07-13 08:56:31 +04:00
|
|
|
(void)opal_convertor_pack( convertor, &iov, &iov_count, &max_data);
|
2009-03-25 19:53:26 +03:00
|
|
|
|
|
|
|
assert(max_data == payload_size);
|
|
|
|
}
|
|
|
|
|
2012-12-26 14:19:12 +04:00
|
|
|
#if BTL_OPENIB_FAILOVER_ENABLED
|
2013-01-13 14:11:03 +04:00
|
|
|
send_signaled = 1;
|
2012-12-26 14:19:12 +04:00
|
|
|
#else
|
|
|
|
send_signaled = qp_need_signal(ep, qp, payload_size + header_size, do_rdma);
|
|
|
|
#endif
|
|
|
|
ib_rc = post_send(ep, to_send_frag(item), do_rdma, send_signaled);
|
2009-03-25 19:53:26 +03:00
|
|
|
|
2015-01-06 18:47:07 +03:00
|
|
|
if (!ib_rc) {
|
2012-12-26 14:19:12 +04:00
|
|
|
if (0 == send_signaled) {
|
|
|
|
MCA_BTL_IB_FRAG_RETURN(frag);
|
|
|
|
}
|
2011-01-19 23:58:22 +03:00
|
|
|
#if BTL_OPENIB_FAILOVER_ENABLED
|
2012-12-26 14:19:12 +04:00
|
|
|
else {
|
|
|
|
/* Return up in case needed for failover */
|
|
|
|
*descriptor = (struct mca_btl_base_descriptor_t *) frag;
|
|
|
|
}
|
2010-07-14 14:08:19 +04:00
|
|
|
#endif
|
2009-03-25 19:53:26 +03:00
|
|
|
OPAL_THREAD_UNLOCK(&ep->endpoint_lock);
|
2015-01-06 18:47:07 +03:00
|
|
|
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down into OPAL.
All the components required for inter-process communication are currently deeply integrated into the OMPI layer. Several groups/institutions have expressed interest in a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purposes. UTK, with support from Sandia, developed a version of Open MPI where the entire communication infrastructure has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with a few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
        return OPAL_SUCCESS;
    }

    /* Failed to send, so clean up all allocated resources */
    if (ep->nbo) {
        BTL_OPENIB_HEADER_NTOH(*hdr);
    }

    mca_btl_openib_endpoint_credit_release (ep, qp, do_rdma, to_send_frag(frag));

 cant_send_frag:
    MCA_BTL_IB_FRAG_RETURN(frag);
 cant_send_wqe:
    qp_put_wqe (ep, qp);
 cant_send:
    OPAL_THREAD_UNLOCK(&ep->endpoint_lock);
    /* We can not send the data directly, so we just return a descriptor */
    *descriptor = mca_btl_openib_alloc(btl, ep, order, size, flags);

    return OPAL_ERR_RESOURCE_BUSY;
}
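The `qp_need_signal()` call above decides whether a posted send carries a completion signal; unsignaled sends avoid completion-queue processing, with a signaled send forced periodically so the provider can reclaim work-queue entries. A minimal standalone sketch of that amortization idea, with hypothetical names (`sketch_qp`, `sketch_need_signal`, `SIGNAL_INTERVAL`) that are not OMPI's:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative only: request a signaled completion every
 * SIGNAL_INTERVAL-th send so unsignaled work requests skip
 * CQ-processing overhead but WQEs still get reclaimed. */
#define SIGNAL_INTERVAL 64

struct sketch_qp {
    int unsignaled;   /* sends posted since the last signaled one */
};

static bool sketch_need_signal(struct sketch_qp *qp)
{
    if (++qp->unsignaled >= SIGNAL_INTERVAL) {
        qp->unsignaled = 0;   /* this send would carry IBV_SEND_SIGNALED */
        return true;
    }
    return false;
}
```

The real `qp_need_signal()` also accounts for message size and RDMA credits; this sketch shows only the counter-based part of the decision.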
/*
 * Initiate a send.
 */

int mca_btl_openib_send(
    struct mca_btl_base_module_t* btl,
    struct mca_btl_base_endpoint_t* ep,
    struct mca_btl_base_descriptor_t* des,
    mca_btl_base_tag_t tag)
{
    mca_btl_openib_send_frag_t *frag;

    assert(openib_frag_type(des) == MCA_BTL_OPENIB_FRAG_SEND ||
           openib_frag_type(des) == MCA_BTL_OPENIB_FRAG_COALESCED);

    if(openib_frag_type(des) == MCA_BTL_OPENIB_FRAG_COALESCED) {
        frag = to_coalesced_frag(des)->send_frag;

        /* save coalesced fragment on a main fragment; we will need it after send
         * completion to free it and to call upper layer callback */
        opal_list_append(&frag->coalesced_frags, (opal_list_item_t*) des);
        frag->coalesced_length += to_coalesced_frag(des)->hdr->alloc_size +
                                  sizeof(mca_btl_openib_header_coalesced_t);

        to_coalesced_frag(des)->sent = true;
        to_coalesced_frag(des)->hdr->tag = tag;
        to_coalesced_frag(des)->hdr->size = des->des_local->seg_len;
        if(ep->nbo)
            BTL_OPENIB_HEADER_COALESCED_HTON(*to_coalesced_frag(des)->hdr);
    } else {
        frag = to_send_frag(des);
        to_com_frag(des)->endpoint = ep;
        frag->hdr->tag = tag;
    }

    des->des_flags |= MCA_BTL_DES_SEND_ALWAYS_CALLBACK;

    return mca_btl_openib_endpoint_send(ep, frag);
}
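The coalesced branch above charges each small fragment's payload plus its coalescing header to the parent fragment, so send completion knows how much to release and which callbacks to fire. A simplified sketch of that bookkeeping, using hypothetical stand-in types (`coalesced_hdr`, `parent_frag`) rather than OMPI's:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: every coalesced fragment contributes its payload
 * allocation plus one coalescing header to the parent's total. */
struct coalesced_hdr { uint32_t alloc_size; };

struct parent_frag {
    size_t coalesced_length;   /* bytes to account for at completion */
    int    coalesced_count;    /* fragments riding on this send */
};

static void charge_coalesced(struct parent_frag *parent,
                             const struct coalesced_hdr *hdr)
{
    parent->coalesced_length += hdr->alloc_size + sizeof(struct coalesced_hdr);
    parent->coalesced_count += 1;
}
```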
/*
 * RDMA WRITE local buffer to remote buffer address.
 */

int mca_btl_openib_put( mca_btl_base_module_t* btl,
                        mca_btl_base_endpoint_t* ep,
                        mca_btl_base_descriptor_t* descriptor)
{
    mca_btl_openib_segment_t *src_seg = (mca_btl_openib_segment_t *) descriptor->des_local;
    mca_btl_openib_segment_t *dst_seg = (mca_btl_openib_segment_t *) descriptor->des_remote;
    struct ibv_send_wr* bad_wr;
    mca_btl_openib_out_frag_t* frag = to_out_frag(descriptor);
    int qp = descriptor->order;
    uint64_t rem_addr = dst_seg->base.seg_addr.lval;
    uint32_t rkey = dst_seg->key;

    assert(openib_frag_type(frag) == MCA_BTL_OPENIB_FRAG_SEND_USER ||
           openib_frag_type(frag) == MCA_BTL_OPENIB_FRAG_SEND);

    descriptor->des_flags |= MCA_BTL_DES_SEND_ALWAYS_CALLBACK;

    if(ep->endpoint_state != MCA_BTL_IB_CONNECTED) {
        int rc;
        OPAL_THREAD_LOCK(&ep->endpoint_lock);
        rc = check_endpoint_state(ep, descriptor, &ep->pending_put_frags);
        OPAL_THREAD_UNLOCK(&ep->endpoint_lock);
        if(OPAL_ERR_RESOURCE_BUSY == rc)
            return OPAL_SUCCESS;
        if(OPAL_SUCCESS != rc)
            return rc;
    }

    if(MCA_BTL_NO_ORDER == qp)
        qp = mca_btl_openib_component.rdma_qp;

    /* check for a send wqe */
    if (qp_get_wqe(ep, qp) < 0) {
        qp_put_wqe(ep, qp);
        OPAL_THREAD_LOCK(&ep->endpoint_lock);
        opal_list_append(&ep->pending_put_frags, (opal_list_item_t*)frag);
        OPAL_THREAD_UNLOCK(&ep->endpoint_lock);
        return OPAL_SUCCESS;
    }

    /* post descriptor */
#if OPAL_ENABLE_HETEROGENEOUS_SUPPORT
    if((ep->endpoint_proc->proc_opal->proc_arch & OPAL_ARCH_ISBIGENDIAN)
            != (opal_proc_local_get()->proc_arch & OPAL_ARCH_ISBIGENDIAN)) {
        rem_addr = opal_swap_bytes8(rem_addr);
        rkey = opal_swap_bytes4(rkey);
    }
#endif
    frag->sr_desc.wr.rdma.remote_addr = rem_addr;
    frag->sr_desc.wr.rdma.rkey = rkey;

    to_com_frag(frag)->sg_entry.addr = src_seg->base.seg_addr.lval;
    to_com_frag(frag)->sg_entry.length = src_seg->base.seg_len;
    to_com_frag(frag)->endpoint = ep;

#if HAVE_XRC
    if (MCA_BTL_XRC_ENABLED && BTL_OPENIB_QP_TYPE_XRC(qp))
#if OPAL_HAVE_CONNECTX_XRC_DOMAINS
        frag->sr_desc.qp_type.xrc.remote_srqn = ep->rem_info.rem_srqs[qp].rem_srq_num;
#else
        frag->sr_desc.xrc_remote_srq_num = ep->rem_info.rem_srqs[qp].rem_srq_num;
#endif
#endif

    descriptor->order = qp;
    /* Setting opcode on a frag constructor isn't enough since prepare_src
     * may return send_frag instead of put_frag */
    frag->sr_desc.opcode = IBV_WR_RDMA_WRITE;
    frag->sr_desc.send_flags = ib_send_flags(descriptor->des_local->seg_len, &(ep->qps[qp]), 1);

    qp_inflight_wqe_to_frag(ep, qp, to_com_frag(frag));
    qp_reset_signal_count(ep, qp);

    if(ibv_post_send(ep->qps[qp].qp->lcl_qp, &frag->sr_desc, &bad_wr))
        return OPAL_ERROR;

    return OPAL_SUCCESS;
}
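Under OPAL_ENABLE_HETEROGENEOUS_SUPPORT, the remote virtual address and rkey are byte-swapped before being placed in the work request whenever the two peers disagree on endianness, since the remote side advertised them in its native byte order. A standalone sketch of what `opal_swap_bytes8`/`opal_swap_bytes4` compute, with hypothetical local names so it needs no OPAL headers:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: full byte reversal of a 64-bit value by swapping
 * adjacent bytes, then 16-bit halves, then 32-bit halves. */
static uint64_t swap_bytes8(uint64_t v)
{
    v = ((v & 0x00ff00ff00ff00ffULL) << 8)  | ((v >> 8)  & 0x00ff00ff00ff00ffULL);
    v = ((v & 0x0000ffff0000ffffULL) << 16) | ((v >> 16) & 0x0000ffff0000ffffULL);
    return (v << 32) | (v >> 32);
}

/* Same idea for a 32-bit rkey. */
static uint32_t swap_bytes4(uint32_t v)
{
    v = ((v & 0x00ff00ffU) << 8) | ((v >> 8) & 0x00ff00ffU);
    return (v << 16) | (v >> 16);
}
```

Only the address and rkey need fixing here; the RDMA payload itself is opaque to the transport and is converted, if at all, by the datatype engine.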
/*
 * RDMA READ remote buffer to local buffer address.
 */

int mca_btl_openib_get(mca_btl_base_module_t* btl,
                       mca_btl_base_endpoint_t* ep,
                       mca_btl_base_descriptor_t* descriptor)
{
    mca_btl_openib_segment_t *src_seg = (mca_btl_openib_segment_t *) descriptor->des_remote;
    mca_btl_openib_segment_t *dst_seg = (mca_btl_openib_segment_t *) descriptor->des_local;
    struct ibv_send_wr* bad_wr;
    mca_btl_openib_get_frag_t* frag = to_get_frag(descriptor);
    int qp = descriptor->order;
    uint64_t rem_addr = src_seg->base.seg_addr.lval;
    uint32_t rkey = src_seg->key;

    assert(openib_frag_type(frag) == MCA_BTL_OPENIB_FRAG_RECV_USER);

    descriptor->des_flags |= MCA_BTL_DES_SEND_ALWAYS_CALLBACK;

    if(ep->endpoint_state != MCA_BTL_IB_CONNECTED) {
        int rc;
        OPAL_THREAD_LOCK(&ep->endpoint_lock);
        rc = check_endpoint_state(ep, descriptor, &ep->pending_get_frags);
        OPAL_THREAD_UNLOCK(&ep->endpoint_lock);
        if(OPAL_ERR_RESOURCE_BUSY == rc)
            return OPAL_SUCCESS;
        if(OPAL_SUCCESS != rc)
            return rc;
    }

    if(MCA_BTL_NO_ORDER == qp)
        qp = mca_btl_openib_component.rdma_qp;

    /* check for a send wqe */
    if (qp_get_wqe(ep, qp) < 0) {
        qp_put_wqe(ep, qp);
        OPAL_THREAD_LOCK(&ep->endpoint_lock);
        opal_list_append(&ep->pending_get_frags, (opal_list_item_t*)frag);
        OPAL_THREAD_UNLOCK(&ep->endpoint_lock);
        return OPAL_SUCCESS;
    }

    /* check for a get token */
    if(OPAL_THREAD_ADD32(&ep->get_tokens,-1) < 0) {
        qp_put_wqe(ep, qp);
        OPAL_THREAD_ADD32(&ep->get_tokens,1);
        OPAL_THREAD_LOCK(&ep->endpoint_lock);
        opal_list_append(&ep->pending_get_frags, (opal_list_item_t*)frag);
        OPAL_THREAD_UNLOCK(&ep->endpoint_lock);
        return OPAL_SUCCESS;
    }

#if OPAL_ENABLE_HETEROGENEOUS_SUPPORT
    if((ep->endpoint_proc->proc_opal->proc_arch & OPAL_ARCH_ISBIGENDIAN)
            != (opal_proc_local_get()->proc_arch & OPAL_ARCH_ISBIGENDIAN)) {
        rem_addr = opal_swap_bytes8(rem_addr);
        rkey = opal_swap_bytes4(rkey);
    }
#endif
    frag->sr_desc.wr.rdma.remote_addr = rem_addr;
    frag->sr_desc.wr.rdma.rkey = rkey;

    to_com_frag(frag)->sg_entry.addr = dst_seg->base.seg_addr.lval;
    to_com_frag(frag)->sg_entry.length = dst_seg->base.seg_len;
    to_com_frag(frag)->endpoint = ep;

#if HAVE_XRC
    if (MCA_BTL_XRC_ENABLED && BTL_OPENIB_QP_TYPE_XRC(qp))
#if OPAL_HAVE_CONNECTX_XRC_DOMAINS
        frag->sr_desc.qp_type.xrc.remote_srqn = ep->rem_info.rem_srqs[qp].rem_srq_num;
#else
        frag->sr_desc.xrc_remote_srq_num = ep->rem_info.rem_srqs[qp].rem_srq_num;
#endif
#endif

    descriptor->order = qp;

    qp_inflight_wqe_to_frag(ep, qp, to_com_frag(frag));
    qp_reset_signal_count(ep, qp);

    if(ibv_post_send(ep->qps[qp].qp->lcl_qp, &frag->sr_desc, &bad_wr))
        return OPAL_ERROR;

    return OPAL_SUCCESS;
}
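The get-token check above follows a common lock-free flow-control pattern: optimistically decrement a shared counter, and if the result is negative, roll the decrement back and park the fragment on the pending list instead of posting it. A single-threaded sketch of that pattern with hypothetical names (`sketch_ep`, `try_acquire_get_token`); OMPI uses the atomic `OPAL_THREAD_ADD32` where plain arithmetic appears here:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative only: acquire one RDMA-read token or queue the request. */
struct sketch_ep {
    int get_tokens;   /* outstanding-read budget for this endpoint */
    int pending;      /* requests parked for later retry */
};

static bool try_acquire_get_token(struct sketch_ep *ep)
{
    if (--ep->get_tokens < 0) {
        ++ep->get_tokens;   /* roll back so the counter stays consistent */
        ++ep->pending;      /* park the fragment; it is reposted on completion */
        return false;
    }
    return true;
}
```

The rollback matters: with concurrent decrements the counter can legitimately dip negative, so each loser must restore its own decrement before queuing.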

#if OPAL_ENABLE_FT_CR == 0
int mca_btl_openib_ft_event(int state) {
    return OPAL_SUCCESS;
}
#else
int mca_btl_openib_ft_event(int state) {
    int i;

    if(OPAL_CRS_CHECKPOINT == state) {
        /* Continue must reconstruct the routes (including modex), since we
         * have to tear down the devices completely. */
        orte_cr_continue_like_restart = true;

        /*
         * To keep the node from crashing we need to call ibv_close_device
         * before the checkpoint is taken. To do this we need to tear
         * everything down, and rebuild it all on continue/restart. :(
         */

        /* Shutdown all modules
         * - Do this backwards since the openib_finalize function also loops
         *   over this variable.
         */
        for (i = 0; i < mca_btl_openib_component.ib_num_btls; ++i ) {
            mca_btl_openib_finalize_resources( &(mca_btl_openib_component.openib_btls[i])->super);
        }

        mca_btl_openib_component.devices_count = 0;
        mca_btl_openib_component.ib_num_btls = 0;
        OBJ_DESTRUCT(&mca_btl_openib_component.ib_procs);

        opal_btl_openib_connect_base_finalize();
    }
    else if(OPAL_CRS_CONTINUE == state) {
        ; /* Cleared by forcing the modex, no work needed */
    }
    else if(OPAL_CRS_RESTART == state) {
        ;
    }
    else if(OPAL_CRS_TERM == state ) {
        ;
    }
    else {
        ;
    }

    return OPAL_SUCCESS;
}

#endif /* OPAL_ENABLE_FT_CR */