/*
 * Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation.  All rights reserved.
 * Copyright (c) 2004-2013 The University of Tennessee and The University
 *                         of Tennessee Research Foundation.  All rights
 *                         reserved.
 * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart.  All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * Copyright (c) 2009      IBM Corporation.  All rights reserved.
 * Copyright (c) 2006-2009 Los Alamos National Security, LLC.  All rights
 *                         reserved.
 * Copyright (c) 2006-2007 Voltaire All rights reserved.
 * Copyright (c) 2010-2012 Oracle and/or its affiliates.  All rights reserved.
 *
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

#ifndef MCA_BTL_IB_FRAG_H
#define MCA_BTL_IB_FRAG_H

#include "opal_config.h"

#include "opal/align.h"
#include "opal/mca/btl/btl.h"

#include <infiniband/verbs.h>

BEGIN_C_DECLS

struct mca_btl_openib_reg_t;

struct mca_btl_openib_header_t {
    mca_btl_base_tag_t tag;
    uint8_t cm_seen;
    uint16_t credits;
#if OPAL_OPENIB_PAD_HDR
    uint8_t padding[4];
#endif
};
typedef struct mca_btl_openib_header_t mca_btl_openib_header_t;

#define BTL_OPENIB_RDMA_CREDITS_FLAG (1<<15)
#define BTL_OPENIB_IS_RDMA_CREDITS(I) ((I)&BTL_OPENIB_RDMA_CREDITS_FLAG)
#define BTL_OPENIB_CREDITS(I) ((I)&~BTL_OPENIB_RDMA_CREDITS_FLAG)

#define BTL_OPENIB_HEADER_HTON(h)         \
    do {                                  \
        (h).credits = htons((h).credits); \
    } while (0)

#define BTL_OPENIB_HEADER_NTOH(h)         \
    do {                                  \
        (h).credits = ntohs((h).credits); \
    } while (0)
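
/* Usage sketch (illustrative only, not code from this BTL): the credits
 * field carries either send credits or RDMA credits, with the high bit
 * telling the two apart.  Advertising n RDMA credits and decoding them on
 * receipt would look roughly like:
 *
 *     hdr->credits = n | BTL_OPENIB_RDMA_CREDITS_FLAG;
 *     BTL_OPENIB_HEADER_HTON(*hdr);              // before posting the send
 *
 *     BTL_OPENIB_HEADER_NTOH(*hdr);              // after receipt
 *     if (BTL_OPENIB_IS_RDMA_CREDITS(hdr->credits))
 *         n = BTL_OPENIB_CREDITS(hdr->credits);  // strips the flag bit
 */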

typedef struct mca_btl_openib_header_coalesced_t {
    mca_btl_base_tag_t tag;
    uint32_t size;
    uint32_t alloc_size;
#if OPAL_OPENIB_PAD_HDR
    uint8_t padding[4];
#endif
} mca_btl_openib_header_coalesced_t;

#define BTL_OPENIB_HEADER_COALESCED_NTOH(h)     \
    do {                                        \
        (h).size = ntohl((h).size);             \
        (h).alloc_size = ntohl((h).alloc_size); \
    } while(0)

#define BTL_OPENIB_HEADER_COALESCED_HTON(h)     \
    do {                                        \
        (h).size = htonl((h).size);             \
        (h).alloc_size = htonl((h).alloc_size); \
    } while(0)

#if OPAL_OPENIB_PAD_HDR
/* BTL_OPENIB_FTR_PADDING
 * This macro is used to keep the pointer to openib footers aligned for
 * systems like SPARC64 that take a big performance hit when addresses
 * are not aligned (and by default raise SIGBUS instead of coercing the
 * type on an unaligned address).
 *
 * We assure alignment of a packet's structures when OPAL_OPENIB_PAD_HDR
 * is set to 1.  When this is the case, several structures are padded to
 * assure alignment, and the mca_btl_openib_footer_t structure itself
 * uses the BTL_OPENIB_FTR_PADDING macro to shift the location of the
 * pointer to assure proper alignment after the PML header and data.
 * For example, when sending a 1-byte data packet, the memory layout
 * without footer alignment would look something like the following:
 *
 * 0x00 : mca_btl_openib_coalesced_header_t (12 bytes + 4 byte pad)
 * 0x10 : mca_btl_openib_control_header_t (1 byte + 7 byte pad)
 * 0x18 : mca_btl_openib_header_t (4 bytes + 4 byte pad)
 * 0x20 : PML header and data (16 bytes PML + 1 byte data)
 * 0x29 : mca_btl_openib_footer_t (4 bytes + 4 byte pad)
 * 0x31 : end of packet
 *
 * By applying BTL_OPENIB_FTR_PADDING() in the progress_one_device
 * and post_send routines, we adjust the pointer to mca_btl_openib_footer_t
 * from 0x29 to 0x2C, thus correctly aligning the start of the footer
 * pointer.  This adjustment causes the padding field of
 * mca_btl_openib_footer_t to overlap with the neighboring memory, but
 * since we never use the padding we do not inadvertently overwrite
 * memory that does not belong to the fragment.
 */
#define BTL_OPENIB_FTR_PADDING(size) \
    OPAL_ALIGN_PAD_AMOUNT(size, sizeof(uint64_t))

/* BTL_OPENIB_ALIGN_COALESCE_HDR
 * This macro is used in btl_openib.c, while creating a coalesce fragment,
 * to align the coalesce headers.
 */
#define BTL_OPENIB_ALIGN_COALESCE_HDR(ptr) \
    OPAL_ALIGN_PTR(ptr, sizeof(uint32_t), unsigned char*)

/* BTL_OPENIB_COALESCE_HDR_PADDING
 * This macro is used in btl_openib_component.c, while parsing an incoming
 * coalesce fragment, to determine the padding amount used to align the
 * mca_btl_openib_coalesce_hdr_t.
 */
#define BTL_OPENIB_COALESCE_HDR_PADDING(ptr) \
    OPAL_ALIGN_PAD_AMOUNT(ptr, sizeof(uint32_t))
#else
#define BTL_OPENIB_FTR_PADDING(size) 0
#define BTL_OPENIB_ALIGN_COALESCE_HDR(ptr) ptr
#define BTL_OPENIB_COALESCE_HDR_PADDING(ptr) 0
#endif
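
/* Illustrative sketch (an assumption about usage, not code from this BTL):
 * given the layout above, a routine that knows the address of the first
 * payload byte ("payload") and the payload length ("len") could locate the
 * aligned footer as:
 *
 *     size_t pad = BTL_OPENIB_FTR_PADDING(len);
 *     mca_btl_openib_footer_t *ftr = (mca_btl_openib_footer_t *)
 *         ((unsigned char *) payload + len + pad);
 *
 * When OPAL_OPENIB_PAD_HDR is 0, the padding amount is 0 and the footer
 * follows the payload directly.
 */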

struct mca_btl_openib_footer_t {
#if OPAL_ENABLE_DEBUG
    uint32_t seq;
#endif
    union {
        uint32_t size;
        uint8_t buf[4];
    } u;
#if OPAL_OPENIB_PAD_HDR
#if OPAL_ENABLE_DEBUG
    /* The footer needs to be a multiple of 8 bytes; adding the seq field
     * throws that off, and the padding cannot simply be dropped because
     * it is what adjusts the alignment and keeps us from overwriting
     * other packets.
     */
    uint8_t padding[12];
#else
    uint8_t padding[8];
#endif
#endif
};
typedef struct mca_btl_openib_footer_t mca_btl_openib_footer_t;

#ifdef WORDS_BIGENDIAN
#define MCA_BTL_OPENIB_FTR_SIZE_REVERSE(ftr)
#else
#define MCA_BTL_OPENIB_FTR_SIZE_REVERSE(ftr)    \
    do {                                        \
        uint8_t tmp = (ftr).u.buf[0];           \
        (ftr).u.buf[0] = (ftr).u.buf[2];        \
        (ftr).u.buf[2] = tmp;                   \
    } while (0)
#endif

#if OPAL_ENABLE_DEBUG
#define BTL_OPENIB_FOOTER_SEQ_HTON(h) ((h).seq = htonl((h).seq))
#define BTL_OPENIB_FOOTER_SEQ_NTOH(h) ((h).seq = ntohl((h).seq))
#else
#define BTL_OPENIB_FOOTER_SEQ_HTON(h)
#define BTL_OPENIB_FOOTER_SEQ_NTOH(h)
#endif

#define BTL_OPENIB_FOOTER_HTON(h)               \
    do {                                        \
        BTL_OPENIB_FOOTER_SEQ_HTON(h);          \
        MCA_BTL_OPENIB_FTR_SIZE_REVERSE(h);     \
    } while (0)

#define BTL_OPENIB_FOOTER_NTOH(h)               \
    do {                                        \
        BTL_OPENIB_FOOTER_SEQ_NTOH(h);          \
        MCA_BTL_OPENIB_FTR_SIZE_REVERSE(h);     \
    } while (0)
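
/* Sketch of the intended flow (illustrative; the surrounding control logic
 * is assumed): the sender fills in u.size and converts the footer before
 * the transfer, and the receiver converts it back:
 *
 *     ftr->u.size = payload_len;
 *     BTL_OPENIB_FOOTER_HTON(*ftr);   // sender side
 *     ...
 *     BTL_OPENIB_FOOTER_NTOH(*ftr);   // receiver side
 *
 * Note that MCA_BTL_OPENIB_FTR_SIZE_REVERSE exchanges only buf[0] and
 * buf[2]: it reverses a 3-byte size value in place and leaves buf[3]
 * where it is, rather than doing a full 32-bit ntohl.
 */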

#define MCA_BTL_OPENIB_CONTROL_CREDITS 0
#define MCA_BTL_OPENIB_CONTROL_RDMA 1
#define MCA_BTL_OPENIB_CONTROL_COALESCED 2
#define MCA_BTL_OPENIB_CONTROL_CTS 3
#if BTL_OPENIB_FAILOVER_ENABLED
#define MCA_BTL_OPENIB_CONTROL_EP_BROKEN 4
#define MCA_BTL_OPENIB_CONTROL_EP_EAGER_RDMA_ERROR 5
#endif

struct mca_btl_openib_control_header_t {
    uint8_t type;
#if OPAL_OPENIB_PAD_HDR
    uint8_t padding[7];
#endif
};
typedef struct mca_btl_openib_control_header_t mca_btl_openib_control_header_t;

struct mca_btl_openib_eager_rdma_header_t {
    mca_btl_openib_control_header_t control;
    uint32_t rkey;
    opal_ptr_t rdma_start;
};
typedef struct mca_btl_openib_eager_rdma_header_t mca_btl_openib_eager_rdma_header_t;

#define BTL_OPENIB_EAGER_RDMA_CONTROL_HEADER_HTON(h)       \
    do {                                                   \
        (h).rkey = htonl((h).rkey);                        \
        (h).rdma_start.lval = hton64((h).rdma_start.lval); \
    } while (0)

#define BTL_OPENIB_EAGER_RDMA_CONTROL_HEADER_NTOH(h)       \
    do {                                                   \
        (h).rkey = ntohl((h).rkey);                        \
        (h).rdma_start.lval = ntoh64((h).rdma_start.lval); \
    } while (0)
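
/* Sketch (illustrative; "mr" and "eager_rdma_buffer" are assumed local
 * names): a receiver advertises its eager-RDMA region by filling this
 * header and sending it as a MCA_BTL_OPENIB_CONTROL_RDMA message, after
 * which the peer can RDMA-write eager fragments directly into the region:
 *
 *     hdr->control.type = MCA_BTL_OPENIB_CONTROL_RDMA;
 *     hdr->rkey = mr->rkey;                      // from ibv_reg_mr()
 *     hdr->rdma_start.pval = eager_rdma_buffer;
 *     BTL_OPENIB_EAGER_RDMA_CONTROL_HEADER_HTON(*hdr);
 */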

struct mca_btl_openib_rdma_credits_header_t {
    mca_btl_openib_control_header_t control;
#if OPAL_OPENIB_PAD_HDR
    uint8_t padding[1];
#endif
    uint8_t qpn;
    uint16_t rdma_credits;
};
typedef struct mca_btl_openib_rdma_credits_header_t mca_btl_openib_rdma_credits_header_t;

#define BTL_OPENIB_RDMA_CREDITS_HEADER_HTON(h)          \
    do {                                                \
        (h).rdma_credits = htons((h).rdma_credits);     \
    } while (0)

#define BTL_OPENIB_RDMA_CREDITS_HEADER_NTOH(h)          \
    do {                                                \
        (h).rdma_credits = ntohs((h).rdma_credits);     \
    } while (0)

#if BTL_OPENIB_FAILOVER_ENABLED
struct mca_btl_openib_broken_connection_header_t {
    mca_btl_openib_control_header_t control;
    uint32_t lid;
    uint64_t subnet_id;
    uint32_t vpid;
    uint32_t index; /* for eager RDMA only */
};
typedef struct mca_btl_openib_broken_connection_header_t mca_btl_openib_broken_connection_header_t;

#define BTL_OPENIB_BROKEN_CONNECTION_HEADER_HTON(h)     \
    do {                                                \
        (h).lid = htonl((h).lid);                       \
        (h).subnet_id = hton64((h).subnet_id);          \
        (h).vpid = htonl((h).vpid);                     \
        (h).index = htonl((h).index);                   \
    } while (0)

#define BTL_OPENIB_BROKEN_CONNECTION_HEADER_NTOH(h)     \
    do {                                                \
        (h).lid = ntohl((h).lid);                       \
        (h).subnet_id = ntoh64((h).subnet_id);          \
        (h).vpid = ntohl((h).vpid);                     \
        (h).index = ntohl((h).index);                   \
    } while (0)
#endif

enum mca_btl_openib_frag_type_t {
    MCA_BTL_OPENIB_FRAG_RECV,
    MCA_BTL_OPENIB_FRAG_RECV_USER,
    MCA_BTL_OPENIB_FRAG_SEND,
    MCA_BTL_OPENIB_FRAG_SEND_USER,
    MCA_BTL_OPENIB_FRAG_EAGER_RDMA,
    MCA_BTL_OPENIB_FRAG_CONTROL,
    MCA_BTL_OPENIB_FRAG_COALESCED
};
typedef enum mca_btl_openib_frag_type_t mca_btl_openib_frag_type_t;

#define openib_frag_type(f) (to_base_frag(f)->type)

/**
 * IB fragment derived type.
 */

typedef struct mca_btl_openib_segment_t {
    mca_btl_base_segment_t base;
    uint32_t key;
} mca_btl_openib_segment_t;

/* base openib frag */
typedef struct mca_btl_openib_frag_t {
    mca_btl_base_descriptor_t base;
    mca_btl_openib_segment_t segment;
    mca_btl_openib_frag_type_t type;
    ompi_free_list_t* list;
} mca_btl_openib_frag_t;
OBJ_CLASS_DECLARATION(mca_btl_openib_frag_t);

#define to_base_frag(f) ((mca_btl_openib_frag_t*)(f))
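
/* The frag types below form a single-inheritance hierarchy in plain C:
 * each derived frag embeds its parent as its first member ("super"), so a
 * pointer to any frag may be cast to any of its ancestor types, and the
 * to_*_frag() macros are exactly those casts.  For example (illustrative):
 *
 *     mca_btl_openib_send_frag_t *sfrag = ...;
 *     mca_btl_openib_frag_t *base = to_base_frag(sfrag);
 *     assert(openib_frag_type(sfrag) == MCA_BTL_OPENIB_FRAG_SEND);
 */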

/* frag used for communication */
typedef struct mca_btl_openib_com_frag_t {
    mca_btl_openib_frag_t super;
    struct ibv_sge sg_entry;
    struct mca_btl_openib_reg_t *registration;
    struct mca_btl_base_endpoint_t *endpoint;
    /* number of unsignaled frags sent before this frag. */
    uint32_t n_wqes_inflight;
} mca_btl_openib_com_frag_t;
OBJ_CLASS_DECLARATION(mca_btl_openib_com_frag_t);

#define to_com_frag(f) ((mca_btl_openib_com_frag_t*)(f))

typedef struct mca_btl_openib_out_frag_t {
    mca_btl_openib_com_frag_t super;
    struct ibv_send_wr sr_desc;
} mca_btl_openib_out_frag_t;
OBJ_CLASS_DECLARATION(mca_btl_openib_out_frag_t);

#define to_out_frag(f) ((mca_btl_openib_out_frag_t*)(f))

typedef struct mca_btl_openib_com_frag_t mca_btl_openib_in_frag_t;
OBJ_CLASS_DECLARATION(mca_btl_openib_in_frag_t);

#define to_in_frag(f) ((mca_btl_openib_in_frag_t*)(f))

typedef struct mca_btl_openib_send_frag_t {
    mca_btl_openib_out_frag_t super;
    mca_btl_openib_header_t *hdr, *chdr;
    mca_btl_openib_footer_t *ftr;
    uint8_t qp_idx;
    uint32_t coalesced_length;
    opal_list_t coalesced_frags;
} mca_btl_openib_send_frag_t;
OBJ_CLASS_DECLARATION(mca_btl_openib_send_frag_t);

#define to_send_frag(f) ((mca_btl_openib_send_frag_t*)(f))

typedef struct mca_btl_openib_recv_frag_t {
    mca_btl_openib_in_frag_t super;
    mca_btl_openib_header_t *hdr;
    mca_btl_openib_footer_t *ftr;
    struct ibv_recv_wr rd_desc;
    uint8_t qp_idx;
} mca_btl_openib_recv_frag_t;
OBJ_CLASS_DECLARATION(mca_btl_openib_recv_frag_t);

#define to_recv_frag(f) ((mca_btl_openib_recv_frag_t*)(f))

typedef struct mca_btl_openib_put_frag_t {
    mca_btl_openib_out_frag_t super;
    struct {
        mca_btl_base_rdma_completion_fn_t func;
        mca_btl_base_registration_handle_t *local_handle;
        void *context;
        void *data;
    } cb;
} mca_btl_openib_put_frag_t;
OBJ_CLASS_DECLARATION(mca_btl_openib_put_frag_t);

#define to_put_frag(f) ((mca_btl_openib_put_frag_t*)(f))

typedef struct mca_btl_openib_get_frag_t {
    mca_btl_openib_in_frag_t super;
    struct ibv_send_wr sr_desc;
    struct {
        mca_btl_base_rdma_completion_fn_t func;
        mca_btl_base_registration_handle_t *local_handle;
        void *context;
        void *data;
    } cb;
} mca_btl_openib_get_frag_t;
OBJ_CLASS_DECLARATION(mca_btl_openib_get_frag_t);

#define to_get_frag(f) ((mca_btl_openib_get_frag_t*)(f))

typedef struct mca_btl_openib_send_frag_t mca_btl_openib_send_control_frag_t;
OBJ_CLASS_DECLARATION(mca_btl_openib_send_control_frag_t);

#define to_send_control_frag(f) ((mca_btl_openib_send_control_frag_t*)(f))

typedef struct mca_btl_openib_coalesced_frag_t {
    mca_btl_openib_frag_t super;
    mca_btl_openib_send_frag_t *send_frag;
    mca_btl_openib_header_coalesced_t *hdr;
    bool sent;
} mca_btl_openib_coalesced_frag_t;
OBJ_CLASS_DECLARATION(mca_btl_openib_coalesced_frag_t);

#define to_coalesced_frag(f) ((mca_btl_openib_coalesced_frag_t*)(f))

/*
 * Allocate an IB send descriptor
 */
static inline mca_btl_openib_send_control_frag_t *
alloc_control_frag(mca_btl_openib_module_t *btl)
{
    ompi_free_list_item_t *item;

    OMPI_FREE_LIST_WAIT_MT(&btl->device->send_free_control, item);

    return to_send_control_frag(item);
}

static inline uint8_t frag_size_to_order(mca_btl_openib_module_t* btl,
                                         size_t size)
{
    int qp;
    for(qp = 0; qp < mca_btl_openib_component.num_qps; qp++)
        if(mca_btl_openib_component.qp_infos[qp].size >= size)
            return qp;

    return MCA_BTL_NO_ORDER;
}
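
/* Because the QPs in qp_infos are configured in ascending receive-buffer
 * size order (the btl_openib_receive_queues convention), the loop above
 * returns the smallest QP whose buffers can hold "size" bytes.
 * Illustrative use:
 *
 *     uint8_t order = frag_size_to_order(btl, 2048);
 *     if (MCA_BTL_NO_ORDER == order) {
 *         // no QP has buffers large enough for this fragment
 *     }
 */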

static inline mca_btl_openib_com_frag_t *alloc_send_user_frag(void)
{
    ompi_free_list_item_t *item;

    OMPI_FREE_LIST_GET_MT(&mca_btl_openib_component.send_user_free, item);

    return to_com_frag(item);
}

static inline mca_btl_openib_com_frag_t *alloc_recv_user_frag(void)
{
    ompi_free_list_item_t *item;

    OMPI_FREE_LIST_GET_MT(&mca_btl_openib_component.recv_user_free, item);

    return to_com_frag(item);
}

static inline mca_btl_openib_coalesced_frag_t *alloc_coalesced_frag(void)
{
    ompi_free_list_item_t *item;

    OMPI_FREE_LIST_GET_MT(&mca_btl_openib_component.send_free_coalesced, item);

    return to_coalesced_frag(item);
}

#define MCA_BTL_IB_FRAG_RETURN(frag)                        \
    do {                                                    \
        OMPI_FREE_LIST_RETURN_MT(to_base_frag(frag)->list,  \
                (ompi_free_list_item_t*)(frag));            \
    } while(0)

#define MCA_BTL_OPENIB_CLEAN_PENDING_FRAGS(list)            \
    while(!opal_list_is_empty(list)){                       \
        opal_list_item_t *frag_item;                        \
        frag_item = opal_list_remove_first(list);           \
        MCA_BTL_IB_FRAG_RETURN(frag_item);                  \
    }
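
/* Usage sketch ("pending_frags" is an assumed, illustrative list name):
 * endpoint teardown drains each pending list back to the free lists:
 *
 *     MCA_BTL_OPENIB_CLEAN_PENDING_FRAGS(&endpoint->pending_frags);
 */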

struct mca_btl_openib_module_t;

struct mca_btl_openib_frag_init_data_t {
    uint8_t order;
    ompi_free_list_t* list;
};
typedef struct mca_btl_openib_frag_init_data_t mca_btl_openib_frag_init_data_t;

void mca_btl_openib_frag_init(ompi_free_list_item_t* item, void* ctx);

END_C_DECLS
#endif