/* -*- Mode: C; c-basic-offset:4 ; indent-tabs-mode:nil -*- */
/*
 * Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation.  All rights reserved.
 * Copyright (c) 2004-2013 The University of Tennessee and The University
 *                         of Tennessee Research Foundation.  All rights
 *                         reserved.
 * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart.  All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * Copyright (c) 2007-2009 Cisco Systems, Inc.  All rights reserved.
 * Copyright (c) 2006-2014 Los Alamos National Security, LLC.  All rights
 *                         reserved.
 * Copyright (c) 2006-2007 Voltaire All rights reserved.
 * Copyright (c) 2007-2009 Mellanox Technologies.  All rights reserved.
 * Copyright (c) 2010-2012 Oracle and/or its affiliates.  All rights reserved.
 * Copyright (c) 2014      Bull SAS.  All rights reserved.
 * Copyright (c) 2015      Research Organization for Information Science
 *                         and Technology (RIST). All rights reserved.
 * Copyright (c) 2015      NVIDIA Corporation.  All rights reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

#ifndef MCA_BTL_IB_ENDPOINT_H
#define MCA_BTL_IB_ENDPOINT_H

#include <errno.h>
#include <string.h>

#include "opal/class/opal_list.h"
#include "opal/mca/event/event.h"
#include "opal/util/output.h"
#include "opal/mca/btl/btl.h"
#include "opal/mca/btl/base/btl_base_error.h"

#include "btl_openib.h"
#include "btl_openib_frag.h"
#include "btl_openib_eager_rdma.h"
#include "connect/base.h"

#define QP_TX_BATCH_COUNT 64

BEGIN_C_DECLS

struct mca_btl_openib_frag_t;
struct mca_btl_openib_proc_modex_t;

/**
 * State of IB endpoint connection.
 */
typedef enum {
    /* Defines the state in which this BTL instance
     * has started the process of connection */
    MCA_BTL_IB_CONNECTING,

    /* Waiting for ack from endpoint */
    MCA_BTL_IB_CONNECT_ACK,

    /* Waiting for final connection ACK from endpoint */
    MCA_BTL_IB_WAITING_ACK,

    /* Connected ... both sender & receiver have
     * buffers associated with this connection */
    MCA_BTL_IB_CONNECTED,

    /* Connection is closed, there are no resources
     * associated with this */
    MCA_BTL_IB_CLOSED,

    /* Maximum number of retries have been used.
     * Report failure on send to upper layer */
    MCA_BTL_IB_FAILED
} mca_btl_openib_endpoint_state_t;
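
/*
 * Illustrative sketch (not part of the original header): one plausible way a
 * caller could interpret the state above before posting a send -- proceed only
 * once the connection is fully established, fail permanently on CLOSED/FAILED,
 * and queue the fragment otherwise.  The helper name below is hypothetical.
 */
static inline int example_state_allows_send(mca_btl_openib_endpoint_state_t state)
{
    switch (state) {
    case MCA_BTL_IB_CONNECTED:
        return 1;       /* QPs are up on both sides; sends may be posted */
    case MCA_BTL_IB_CLOSED:
    case MCA_BTL_IB_FAILED:
        return -1;      /* report the failure to the upper layer */
    default:
        return 0;       /* connection still being wired up; queue the fragment */
    }
}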

typedef struct mca_btl_openib_rem_qp_info_t {
    uint32_t rem_qp_num;    /* Remote QP number */
    uint32_t rem_psn;       /* Remote process's port sequence number */
} mca_btl_openib_rem_qp_info_t;

typedef struct mca_btl_openib_rem_srq_info_t {
    uint32_t rem_srq_num;   /* Remote SRQ number */
} mca_btl_openib_rem_srq_info_t;

typedef struct mca_btl_openib_rem_info_t {
    /* Local identifier of the remote process */
    uint16_t rem_lid;
    /* Subnet id of the remote process */
    uint64_t rem_subnet_id;
    /* MTU of the remote process */
    uint32_t rem_mtu;
    /* Index of the remote endpoint in the endpoint array */
    uint32_t rem_index;
    /* Remote QPs */
    mca_btl_openib_rem_qp_info_t *rem_qps;
    /* Remote xrc_srq info, used only with XRC connections */
    mca_btl_openib_rem_srq_info_t *rem_srqs;
    /* Vendor id of the remote HCA */
    uint32_t rem_vendor_id;
    /* Vendor part id of the remote HCA */
    uint32_t rem_vendor_part_id;
    /* Transport type of the remote port */
    mca_btl_openib_transport_type_t rem_transport_type;
} mca_btl_openib_rem_info_t;
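
/*
 * Illustrative sketch (an assumption, not taken from this header): the remote
 * info above is exchanged out of band, and the effective path MTU used when
 * bringing a QP up is commonly the minimum of the local value and the peer's
 * advertised rem_mtu.  The helper name below is hypothetical.
 */
static inline uint32_t example_effective_mtu(uint32_t local_mtu,
                                             const mca_btl_openib_rem_info_t *rem_info)
{
    /* never exceed what the remote side advertised */
    return (rem_info->rem_mtu < local_mtu) ? rem_info->rem_mtu : local_mtu;
}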

/**
 * Aggregates all per-peer QP info for an endpoint
 */
typedef struct mca_btl_openib_endpoint_pp_qp_t {
    int32_t sd_credits;    /**< this rank's view of the credits
                            *   available for sending:
                            *   these are the credits granted by the
                            *   remote peer, which have some relation to the
                            *   number of receive buffers posted remotely
                            */
    int32_t rd_posted;     /**< number of descriptors posted to the NIC */
    int32_t rd_credits;    /**< number of credits to return to the peer */
    int32_t cm_received;   /**< credit messages received */
    int32_t cm_return;     /**< how many credits to return */
    int32_t cm_sent;       /**< outstanding number of credit messages */
} mca_btl_openib_endpoint_pp_qp_t;
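
/*
 * Illustrative sketch (hypothetical, not part of the original header): the
 * fields above implement per-peer credit-based flow control.  A send consumes
 * one of the credits granted by the peer (sd_credits); locally posted receive
 * buffers earn credits that must eventually be returned (rd_credits), either
 * piggybacked on a regular send or via an explicit credit message (cm_*).
 */
static inline int example_pp_qp_take_send_credit(mca_btl_openib_endpoint_pp_qp_t *pp_qp)
{
    if (pp_qp->sd_credits <= 0) {
        return 0;            /* no credits granted by the peer: caller must queue */
    }
    pp_qp->sd_credits--;     /* consume one remotely granted credit */
    return 1;
}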

/**
 * Aggregates all SRQ QP info for an endpoint
 */
typedef struct mca_btl_openib_endpoint_srq_qp_t {
    int32_t dummy;
} mca_btl_openib_endpoint_srq_qp_t;

typedef struct mca_btl_openib_qp_t {
    struct ibv_qp *lcl_qp;
    uint32_t lcl_psn;
    int32_t sd_wqe;            /**< number of available send WQE entries */
    int32_t sd_wqe_inflight;
    int wqe_count;
    int users;
    opal_mutex_t lock;
} mca_btl_openib_qp_t;
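
/*
 * Illustrative sketch (hypothetical): sd_wqe above tracks how many send work
 * queue entries remain free on the local QP.  A common pattern is to reserve
 * one when posting a send and to give it back when the corresponding
 * completion is polled from the CQ.  The helper name below is hypothetical.
 */
static inline int example_qp_reserve_wqe(mca_btl_openib_qp_t *qp)
{
    if (qp->sd_wqe <= 0) {
        return 0;       /* send queue is full: fragment must wait for a completion */
    }
    qp->sd_wqe--;       /* one fewer free send WQE until a completion returns it */
    return 1;
}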

typedef struct mca_btl_openib_endpoint_qp_t {
    mca_btl_openib_qp_t *qp;
    opal_list_t no_credits_pending_frags[2];  /**< put fragments here if there are no
                                               *   credits available */
    opal_list_t no_wqe_pending_frags[2];      /**< put fragments here if there is no
                                               *   WQE available */
    int32_t rd_credit_send_lock;              /**< lock credit send fragment */
    mca_btl_openib_send_control_frag_t *credit_frag;
    size_t ib_inline_max;                     /**< max size of inline send */
    union {
        mca_btl_openib_endpoint_srq_qp_t srq_qp;
        mca_btl_openib_endpoint_pp_qp_t pp_qp;
    } u;
} mca_btl_openib_endpoint_qp_t;
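
/*
 * Illustrative sketch (hypothetical): when a send cannot go out immediately --
 * e.g. the credit or WQE checks sketched above fail -- the fragment is parked
 * on the matching pending list of this structure and retried later.  The
 * index selects one of the two priority lists declared above; the helper
 * name below is hypothetical.
 */
static inline void example_queue_no_credit_frag(mca_btl_openib_endpoint_qp_t *ep_qp,
                                                int prio, opal_list_item_t *frag_item)
{
    /* park the fragment until the peer grants more credits */
    opal_list_append(&ep_qp->no_credits_pending_frags[prio], frag_item);
}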

/**
 * An abstraction that represents a connection to an endpoint process.
 * An instance of mca_btl_base_endpoint_t is associated with each process
 * and BTL pair at startup.  However, connections to the endpoint
 * are established dynamically on an as-needed basis:
 */
struct mca_btl_base_endpoint_t {
|
2008-04-29 17:22:40 +04:00
|
|
|
opal_list_item_t super;
|
2005-07-01 01:28:35 +04:00
|
|
|
|
2008-05-02 15:52:33 +04:00
|
|
|
/** BTL module that created this connection */
|
2005-07-01 01:28:35 +04:00
|
|
|
struct mca_btl_openib_module_t* endpoint_btl;
|
|
|
|
|
2008-05-02 15:52:33 +04:00
|
|
|
/** proc structure corresponding to endpoint */
|
2005-07-01 01:28:35 +04:00
|
|
|
struct mca_btl_openib_proc_t* endpoint_proc;
|
|
|
|
|
2008-05-02 15:52:33 +04:00
|
|
|
/** local CPC to connect to this endpoint */
|
    opal_btl_openib_connect_base_module_t *endpoint_local_cpc;

    /** hook for local CPC to hang endpoint-specific data */
    void *endpoint_local_cpc_data;

    /** If endpoint_local_cpc->cbm_uses_cts is true and this endpoint
        is iWARP, then endpoint_initiator must be true on the side
        that actually initiates the QP, false on the other side. This
        bool is used to know which way to send the first CTS
        message. */
    bool endpoint_initiator;

    /** pointer to remote proc's CPC data (essentially its CPC modex
        message) */
    opal_btl_openib_connect_base_module_data_t *endpoint_remote_cpc_data;

    /** current state of the connection */
    mca_btl_openib_endpoint_state_t endpoint_state;

    /** number of connection retries attempted */
    size_t endpoint_retries;

    /** timestamp of when the first connection was attempted */
    double endpoint_tstamp;

    /** lock for concurrent access to endpoint state */
    opal_mutex_t endpoint_lock;

    /** list of pending frags due to lazy connection establishment
        for this endpoint */
    opal_list_t pending_lazy_frags;

    mca_btl_openib_endpoint_qp_t *qps;

#if OPAL_HAVE_CONNECTX_XRC_DOMAINS
    struct ibv_qp *xrc_recv_qp;
#else
    uint32_t xrc_recv_qp_num; /* in xrc we will use it as recv qp */
#endif
    uint32_t xrc_recv_psn;

    /** list of pending rget ops */
    opal_list_t pending_get_frags;
    /** list of pending rput ops */
    opal_list_t pending_put_frags;

    /** number of available get tokens */
    int32_t get_tokens;

    /** subnet id of this endpoint */
    uint64_t subnet_id;
    /** used only for xrc; pointer to struct that keeps remote port
        info */
    struct ib_address_t *ib_addr;

    /** number of eager frags received */
    int32_t eager_recv_count;
    /** info about remote RDMA buffer */
    mca_btl_openib_eager_rdma_remote_t eager_rdma_remote;
    /** info about local RDMA buffer */
    mca_btl_openib_eager_rdma_local_t eager_rdma_local;
    /** index of the endpoint in endpoints array */
    int32_t index;

    /** does the endpoint require network byte ordering? */
    bool nbo;
    /** use eager rdma for this peer? */
    bool use_eager_rdma;

    /** information about the remote port */
    mca_btl_openib_rem_info_t rem_info;

    /** Frag for initial wireup CTS protocol; will be NULL if CPC
        indicates that it does not want to use CTS */
    mca_btl_openib_recv_frag_t endpoint_cts_frag;
    /** Memory registration info for the CTS frag */
    struct ibv_mr *endpoint_cts_mr;

    /** Whether we've posted receives on this EP or not (only used in
        CTS protocol) */
    bool endpoint_posted_recvs;

    /** Whether we've received the CTS from the peer or not (only used
        in CTS protocol) */
    bool endpoint_cts_received;

    /** Whether we've sent our CTS to the peer or not (only used in
        CTS protocol) */
    bool endpoint_cts_sent;
};

typedef struct mca_btl_base_endpoint_t mca_btl_base_endpoint_t;
typedef mca_btl_base_endpoint_t mca_btl_openib_endpoint_t;

OBJ_CLASS_DECLARATION(mca_btl_openib_endpoint_t);

static inline int32_t qp_get_wqe(mca_btl_openib_endpoint_t *ep, const int qp)
{
    return OPAL_THREAD_ADD32(&ep->qps[qp].qp->sd_wqe, -1);
}

static inline int32_t qp_put_wqe(mca_btl_openib_endpoint_t *ep, const int qp)
{
    return OPAL_THREAD_ADD32(&ep->qps[qp].qp->sd_wqe, 1);
}

static inline int32_t qp_inc_inflight_wqe(mca_btl_openib_endpoint_t *ep, const int qp, mca_btl_openib_com_frag_t *frag)
{
    frag->n_wqes_inflight = 0;
    return OPAL_THREAD_ADD32(&ep->qps[qp].qp->sd_wqe_inflight, 1);
}

static inline void qp_inflight_wqe_to_frag(mca_btl_openib_endpoint_t *ep, const int qp, mca_btl_openib_com_frag_t *frag)
{
    frag->n_wqes_inflight = ep->qps[qp].qp->sd_wqe_inflight;
    ep->qps[qp].qp->sd_wqe_inflight = 0;
}

static inline int qp_frag_to_wqe(mca_btl_openib_endpoint_t *ep, const int qp, mca_btl_openib_com_frag_t *frag)
{
    int n;
    n = frag->n_wqes_inflight;
    OPAL_THREAD_ADD32(&ep->qps[qp].qp->sd_wqe, n);
    frag->n_wqes_inflight = 0;

    return n;
}

static inline int qp_need_signal(mca_btl_openib_endpoint_t *ep, const int qp, size_t size, int rdma)
{
    /* note that size here is payload only */
    if (ep->qps[qp].qp->sd_wqe <= 0 ||
            size + sizeof(mca_btl_openib_header_t) + (rdma ? sizeof(mca_btl_openib_footer_t) : 0) > ep->qps[qp].ib_inline_max ||
            (!BTL_OPENIB_QP_TYPE_PP(qp) && ep->endpoint_btl->qps[qp].u.srq_qp.sd_credits <= 0)) {
        ep->qps[qp].qp->wqe_count = QP_TX_BATCH_COUNT;
        return 1;
    }

    if (0 < --ep->qps[qp].qp->wqe_count) {
        return 0;
    }

    ep->qps[qp].qp->wqe_count = QP_TX_BATCH_COUNT;
    return 1;
}

static inline void qp_reset_signal_count(mca_btl_openib_endpoint_t *ep, const int qp)
{
    ep->qps[qp].qp->wqe_count = QP_TX_BATCH_COUNT;
}

int mca_btl_openib_endpoint_send(mca_btl_base_endpoint_t*,
                                 mca_btl_openib_send_frag_t*);
int mca_btl_openib_endpoint_post_send(mca_btl_openib_endpoint_t*,
                                      mca_btl_openib_send_frag_t*);
void mca_btl_openib_endpoint_send_credits(mca_btl_base_endpoint_t*, const int);
void mca_btl_openib_endpoint_connect_eager_rdma(mca_btl_openib_endpoint_t*);
int mca_btl_openib_endpoint_post_recvs(mca_btl_openib_endpoint_t*);
void mca_btl_openib_endpoint_send_cts(mca_btl_openib_endpoint_t *endpoint);
void mca_btl_openib_endpoint_cpc_complete(mca_btl_openib_endpoint_t*);
void mca_btl_openib_endpoint_connected(mca_btl_openib_endpoint_t*);
void mca_btl_openib_endpoint_init(mca_btl_openib_module_t*,
                                  mca_btl_base_endpoint_t*,
                                  opal_btl_openib_connect_base_module_t *local_cpc,
                                  struct mca_btl_openib_proc_modex_t *remote_proc_info,
                                  opal_btl_openib_connect_base_module_data_t *remote_cpc_data);

/*
 * Invoke an error on the btl associated with an endpoint. If we
 * don't have an endpoint, then just use the first one on the
 * component list of BTLs.
 */
void *mca_btl_openib_endpoint_invoke_error(void *endpoint);

static inline int post_recvs(mca_btl_base_endpoint_t *ep, const int qp,
        const int num_post)
{
    int i, rc;
    struct ibv_recv_wr *bad_wr, *wr_list = NULL, *wr = NULL;
    mca_btl_openib_module_t *openib_btl = ep->endpoint_btl;

    if(0 == num_post)
        return OPAL_SUCCESS;

    for(i = 0; i < num_post; i++) {
        ompi_free_list_item_t* item;
        OMPI_FREE_LIST_WAIT_MT(&openib_btl->device->qps[qp].recv_free, item);
        to_base_frag(item)->base.order = qp;
        to_com_frag(item)->endpoint = ep;
        if(NULL == wr)
            wr = wr_list = &to_recv_frag(item)->rd_desc;
        else
            wr = wr->next = &to_recv_frag(item)->rd_desc;
        OPAL_OUTPUT((-1, "Posting recv (QP num %d): WR ID %p, SG addr %p, len %d, lkey %d",
                     ep->qps[qp].qp->lcl_qp->qp_num,
                     (void*) ((uintptr_t*)wr->wr_id),
                     (void*)((uintptr_t*) wr->sg_list[0].addr),
                     wr->sg_list[0].length,
                     wr->sg_list[0].lkey));
    }

    wr->next = NULL;

    rc = ibv_post_recv(ep->qps[qp].qp->lcl_qp, wr_list, &bad_wr);
    if (0 == rc)
        return OPAL_SUCCESS;

    BTL_ERROR(("error %d posting receive on qp %d", rc, qp));
    return OPAL_ERROR;
}

static inline int mca_btl_openib_endpoint_post_rr_nolock(
        mca_btl_base_endpoint_t *ep, const int qp)
{
    int rd_rsv = mca_btl_openib_component.qp_infos[qp].u.pp_qp.rd_rsv;
    int rd_num = mca_btl_openib_component.qp_infos[qp].rd_num;
    int rd_low = mca_btl_openib_component.qp_infos[qp].rd_low;
    int cqp = mca_btl_openib_component.credits_qp, rc;
    int cm_received = 0, num_post = 0;

    assert(BTL_OPENIB_QP_TYPE_PP(qp));

    if(ep->qps[qp].u.pp_qp.rd_posted <= rd_low)
        num_post = rd_num - ep->qps[qp].u.pp_qp.rd_posted;

    assert(num_post >= 0);

    if(ep->qps[qp].u.pp_qp.cm_received >= (rd_rsv >> 2))
        cm_received = ep->qps[qp].u.pp_qp.cm_received;

    if((rc = post_recvs(ep, qp, num_post)) != OPAL_SUCCESS) {
        return rc;
    }
    OPAL_THREAD_ADD32(&ep->qps[qp].u.pp_qp.rd_posted, num_post);
    OPAL_THREAD_ADD32(&ep->qps[qp].u.pp_qp.rd_credits, num_post);

    /* post buffers for credit management on credit management qp */
    if((rc = post_recvs(ep, cqp, cm_received)) != OPAL_SUCCESS) {
        return rc;
    }
    OPAL_THREAD_ADD32(&ep->qps[qp].u.pp_qp.cm_return, cm_received);
    OPAL_THREAD_ADD32(&ep->qps[qp].u.pp_qp.cm_received, -cm_received);

    assert(ep->qps[qp].u.pp_qp.rd_credits <= rd_num &&
            ep->qps[qp].u.pp_qp.rd_credits >= 0);

    return OPAL_SUCCESS;
}

static inline int mca_btl_openib_endpoint_post_rr(
        mca_btl_base_endpoint_t *ep, const int qp)
{
    int ret;
    OPAL_THREAD_LOCK(&ep->endpoint_lock);
    ret = mca_btl_openib_endpoint_post_rr_nolock(ep, qp);
    OPAL_THREAD_UNLOCK(&ep->endpoint_lock);
    return ret;
}

#define BTL_OPENIB_CREDITS_SEND_TRYLOCK(E, Q) \
    OPAL_ATOMIC_CMPSET_32(&(E)->qps[(Q)].rd_credit_send_lock, 0, 1)
#define BTL_OPENIB_CREDITS_SEND_UNLOCK(E, Q) \
    OPAL_ATOMIC_CMPSET_32(&(E)->qps[(Q)].rd_credit_send_lock, 1, 0)
#define BTL_OPENIB_GET_CREDITS(FROM, TO) \
    do {                                 \
        TO = FROM;                       \
    } while(0 == OPAL_ATOMIC_CMPSET_32(&FROM, TO, 0))


static inline bool check_eager_rdma_credits(const mca_btl_openib_endpoint_t *ep)
{
    return (ep->eager_rdma_local.credits > ep->eager_rdma_local.rd_win) ? true :
        false;
}

static inline bool
check_send_credits(const mca_btl_openib_endpoint_t *ep, const int qp)
{
    if(!BTL_OPENIB_QP_TYPE_PP(qp))
        return false;

    return (ep->qps[qp].u.pp_qp.rd_credits >=
            mca_btl_openib_component.qp_infos[qp].u.pp_qp.rd_win) ? true : false;
}

static inline void send_credits(mca_btl_openib_endpoint_t *ep, int qp)
{
    if(BTL_OPENIB_QP_TYPE_PP(qp)) {
        if(check_send_credits(ep, qp))
            goto try_send;
    } else {
        qp = mca_btl_openib_component.credits_qp;
    }

    if(!check_eager_rdma_credits(ep))
        return;

try_send:
    if(BTL_OPENIB_CREDITS_SEND_TRYLOCK(ep, qp))
        mca_btl_openib_endpoint_send_credits(ep, qp);
}

static inline int check_endpoint_state(mca_btl_openib_endpoint_t *ep,
        mca_btl_base_descriptor_t *des, opal_list_t *pending_list)
{
    int rc = OPAL_ERR_RESOURCE_BUSY;

    switch(ep->endpoint_state) {
        case MCA_BTL_IB_CLOSED:
            rc = ep->endpoint_local_cpc->cbm_start_connect(ep->endpoint_local_cpc, ep);
            if (OPAL_SUCCESS == rc) {
                rc = OPAL_ERR_RESOURCE_BUSY;
            }
            /*
             * As long as we expect a message from the peer (in order
             * to set up the connection), let the event engine poll the
             * OOB events. Note: we increment it once per active peer
             * connection.
             */
            opal_progress_event_users_increment();
            /* fall through */
        default:
            opal_list_append(pending_list, (opal_list_item_t *)des);
            break;
        case MCA_BTL_IB_FAILED:
            rc = OPAL_ERR_UNREACH;
            break;
        case MCA_BTL_IB_CONNECTED:
rc = OPAL_SUCCESS;
|
2007-11-28 10:16:52 +03:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
return rc;
|
|
|
|
}
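
/* Compute the ibv_send_wr send_flags for a message of "size" bytes on the
 * given per-endpoint queue pair: request a completion (IBV_SEND_SIGNALED)
 * only when the caller asked for one, and add IBV_SEND_INLINE whenever the
 * payload fits within the QP's inline threshold. */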
static inline __opal_attribute_always_inline__ int
ib_send_flags(uint32_t size, mca_btl_openib_endpoint_qp_t *qp, int do_signal)
{
    if (do_signal) {
        return IBV_SEND_SIGNALED |
            ((size <= qp->ib_inline_max) ? IBV_SEND_INLINE : 0);
    } else {
        return ((size <= qp->ib_inline_max) ? IBV_SEND_INLINE : 0);
    }
}
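
/* Atomically take one send token for the peer's eager RDMA buffer.  If the
 * remote buffer is full (the token count would go negative) the token is
 * given back and OPAL_ERR_OUT_OF_RESOURCE is returned. */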
static inline int
acquire_eager_rdma_send_credit(mca_btl_openib_endpoint_t *endpoint)
{
    if(OPAL_THREAD_ADD32(&endpoint->eager_rdma_remote.tokens, -1) < 0) {
        OPAL_THREAD_ADD32(&endpoint->eager_rdma_remote.tokens, 1);
        return OPAL_ERR_OUT_OF_RESOURCE;
    }

    return OPAL_SUCCESS;
}
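
/* Fill in the fragment's ibv_send_wr and post it on the queue pair selected
 * by the fragment's order.  Eager RDMA sends become an RDMA write into the
 * next slot of the peer's eager RDMA region (with the footer appended and,
 * for heterogeneous peers, byte-swapped); everything else goes out as a
 * regular send, carrying the remote endpoint index in the immediate data on
 * non-per-peer queue pairs.  Returns the result of ibv_post_send(). */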
static inline int post_send(mca_btl_openib_endpoint_t *ep,
        mca_btl_openib_send_frag_t *frag, const bool rdma, int do_signal)
{
    mca_btl_openib_module_t *openib_btl = ep->endpoint_btl;
    mca_btl_openib_segment_t *seg = &to_base_frag(frag)->segment;
    struct ibv_sge *sg = &to_com_frag(frag)->sg_entry;
    struct ibv_send_wr *sr_desc = &to_out_frag(frag)->sr_desc;
    struct ibv_send_wr *bad_wr;
    int qp = to_base_frag(frag)->base.order;

    sg->length = seg->base.seg_len + sizeof(mca_btl_openib_header_t) +
        (rdma ? sizeof(mca_btl_openib_footer_t) : 0) + frag->coalesced_length;

    sr_desc->send_flags = ib_send_flags(sg->length, &(ep->qps[qp]), do_signal);

    if(ep->nbo)
        BTL_OPENIB_HEADER_HTON(*frag->hdr);

    if(rdma) {
        int32_t head;
        mca_btl_openib_footer_t* ftr =
            (mca_btl_openib_footer_t*)(((char*)frag->hdr) + sg->length +
            BTL_OPENIB_FTR_PADDING(sg->length) - sizeof(mca_btl_openib_footer_t));
        sr_desc->opcode = IBV_WR_RDMA_WRITE;
        MCA_BTL_OPENIB_RDMA_FRAG_SET_SIZE(ftr, sg->length);
        MCA_BTL_OPENIB_RDMA_MAKE_LOCAL(ftr);
#if OPAL_ENABLE_DEBUG
        do {
            ftr->seq = ep->eager_rdma_remote.seq;
        } while (!OPAL_ATOMIC_CMPSET_32((int32_t*) &ep->eager_rdma_remote.seq,
                                        (int32_t) ftr->seq,
                                        (int32_t) (ftr->seq+1)));
#endif
        if(ep->nbo)
            BTL_OPENIB_FOOTER_HTON(*ftr);

        sr_desc->wr.rdma.rkey = ep->eager_rdma_remote.rkey;
        MCA_BTL_OPENIB_RDMA_MOVE_INDEX(ep->eager_rdma_remote.head, head);
#if BTL_OPENIB_FAILOVER_ENABLED
        /* frag->ftr is unused on the sending fragment, so use it
         * to indicate it is an eager fragment. A non-zero value
         * indicates it is eager, and the value indicates the
         * location in the eager RDMA array where it lives. */
        frag->ftr = (mca_btl_openib_footer_t*)(long)(1 + head);
#endif
        sr_desc->wr.rdma.remote_addr =
            ep->eager_rdma_remote.base.lval +
            head * openib_btl->eager_rdma_frag_size +
            sizeof(mca_btl_openib_header_t) +
            mca_btl_openib_component.eager_limit +
            sizeof(mca_btl_openib_footer_t);
        sr_desc->wr.rdma.remote_addr -= sg->length + BTL_OPENIB_FTR_PADDING(sg->length);
    } else {
        if(BTL_OPENIB_QP_TYPE_PP(qp)) {
            sr_desc->opcode = IBV_WR_SEND;
        } else {
            sr_desc->opcode = IBV_WR_SEND_WITH_IMM;
#if !defined(WORDS_BIGENDIAN) && OPAL_ENABLE_HETEROGENEOUS_SUPPORT
            sr_desc->imm_data = htonl(ep->rem_info.rem_index);
#else
            sr_desc->imm_data = ep->rem_info.rem_index;
#endif
        }
    }

#if HAVE_XRC
#if OPAL_HAVE_CONNECTX_XRC_DOMAINS
    if(BTL_OPENIB_QP_TYPE_XRC(qp))
        sr_desc->qp_type.xrc.remote_srqn = ep->rem_info.rem_srqs[qp].rem_srq_num;
#else
    if(BTL_OPENIB_QP_TYPE_XRC(qp))
        sr_desc->xrc_remote_srq_num = ep->rem_info.rem_srqs[qp].rem_srq_num;
#endif
#endif
    assert(sg->addr == (uint64_t)(uintptr_t)frag->hdr);

    if (sr_desc->send_flags & IBV_SEND_SIGNALED) {
        qp_inflight_wqe_to_frag(ep, qp, to_com_frag(frag));
    } else {
        qp_inc_inflight_wqe(ep, qp, to_com_frag(frag));
    }

    return ibv_post_send(ep->qps[qp].qp->lcl_qp, sr_desc, &bad_wr);
}
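
/* Reserve the send resources needed to transmit "frag" on queue pair "qp":
 * either an eager RDMA token (high-priority fragments that fit under the
 * eager limit, *do_rdma set to true) or a per-peer/SRQ send credit.  Also
 * piggybacks the locally accumulated receive credits onto the fragment
 * header.  Returns OPAL_ERR_OUT_OF_RESOURCE (optionally queueing the
 * fragment) when no credit is available. */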
/* called with the endpoint lock held */
static inline int mca_btl_openib_endpoint_credit_acquire (struct mca_btl_base_endpoint_t *endpoint, int qp,
                                                          int prio, size_t size, bool *do_rdma,
                                                          mca_btl_openib_send_frag_t *frag, bool queue_frag)
{
    mca_btl_openib_module_t *openib_btl = endpoint->endpoint_btl;
    mca_btl_openib_header_t *hdr = frag->hdr;
    size_t eager_limit;
    int32_t cm_return;

    eager_limit = mca_btl_openib_component.eager_limit +
        sizeof(mca_btl_openib_header_coalesced_t) +
        sizeof(mca_btl_openib_control_header_t);

    if (!(prio && size < eager_limit && acquire_eager_rdma_send_credit(endpoint) == OPAL_SUCCESS)) {
        *do_rdma = false;
        prio = !prio;

        if (BTL_OPENIB_QP_TYPE_PP(qp)) {
            if (OPAL_THREAD_ADD32(&endpoint->qps[qp].u.pp_qp.sd_credits, -1) < 0) {
                OPAL_THREAD_ADD32(&endpoint->qps[qp].u.pp_qp.sd_credits, 1);
                if (queue_frag) {
                    opal_list_append(&endpoint->qps[qp].no_credits_pending_frags[prio],
                                     (opal_list_item_t *)frag);
                }

                return OPAL_ERR_OUT_OF_RESOURCE;
            }
        } else {
            if(OPAL_THREAD_ADD32(&openib_btl->qps[qp].u.srq_qp.sd_credits, -1) < 0) {
                OPAL_THREAD_ADD32(&openib_btl->qps[qp].u.srq_qp.sd_credits, 1);
                if (queue_frag) {
                    OPAL_THREAD_LOCK(&openib_btl->ib_lock);
                    opal_list_append(&openib_btl->qps[qp].u.srq_qp.pending_frags[prio],
                                     (opal_list_item_t *)frag);
                    OPAL_THREAD_UNLOCK(&openib_btl->ib_lock);
                }

                return OPAL_ERR_OUT_OF_RESOURCE;
            }
        }
    } else {
        /* High priority frag. Try to send over eager RDMA */
        *do_rdma = true;
    }

    /* Set all credits */
    BTL_OPENIB_GET_CREDITS(endpoint->eager_rdma_local.credits, hdr->credits);
    if (hdr->credits) {
        hdr->credits |= BTL_OPENIB_RDMA_CREDITS_FLAG;
    }

    if (!*do_rdma) {
        if (BTL_OPENIB_QP_TYPE_PP(qp) && 0 == hdr->credits) {
            BTL_OPENIB_GET_CREDITS(endpoint->qps[qp].u.pp_qp.rd_credits, hdr->credits);
        }
    } else {
        hdr->credits |= (qp << 11);
    }

    BTL_OPENIB_GET_CREDITS(endpoint->qps[qp].u.pp_qp.cm_return, cm_return);
    /* cm_seen is only 8 bits wide, but cm_return is 32 bits */
    if(cm_return > 255) {
        hdr->cm_seen = 255;
        cm_return -= 255;
        OPAL_THREAD_ADD32(&endpoint->qps[qp].u.pp_qp.cm_return, cm_return);
    } else {
        hdr->cm_seen = cm_return;
    }

    return OPAL_SUCCESS;
}
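
/* Give back the resources taken by mca_btl_openib_endpoint_credit_acquire()
 * when a send could not be posted: return any piggybacked receive credits
 * and either the eager RDMA token or the per-peer/SRQ send credit. */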
/* called with the endpoint lock held. */
static inline void mca_btl_openib_endpoint_credit_release (struct mca_btl_base_endpoint_t *endpoint, int qp,
                                                           bool do_rdma, mca_btl_openib_send_frag_t *frag)
{
    mca_btl_openib_header_t *hdr = frag->hdr;

    if (BTL_OPENIB_IS_RDMA_CREDITS(hdr->credits)) {
        OPAL_THREAD_ADD32(&endpoint->eager_rdma_local.credits, BTL_OPENIB_CREDITS(hdr->credits));
    }

    if (do_rdma) {
        OPAL_THREAD_ADD32(&endpoint->eager_rdma_remote.tokens, 1);
    } else {
        if(BTL_OPENIB_QP_TYPE_PP(qp)) {
            OPAL_THREAD_ADD32 (&endpoint->qps[qp].u.pp_qp.rd_credits, hdr->credits);
            OPAL_THREAD_ADD32(&endpoint->qps[qp].u.pp_qp.sd_credits, 1);
        } else if (BTL_OPENIB_QP_TYPE_SRQ(qp)) {
            mca_btl_openib_module_t *openib_btl = endpoint->endpoint_btl;
            OPAL_THREAD_ADD32(&openib_btl->qps[qp].u.srq_qp.sd_credits, 1);
        }
    }
}
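
/*
 * A minimal sketch of how the helpers above are meant to compose on the send
 * path (illustrative only -- the real logic, including the signalling policy
 * and coalescing, lives in the openib BTL module sources; on a failed post
 * the credits are handed back):
 *
 *     bool do_rdma = false;
 *     OPAL_THREAD_LOCK(&ep->endpoint_lock);
 *     rc = mca_btl_openib_endpoint_credit_acquire (ep, qp, prio, size,
 *                                                  &do_rdma, frag, true);
 *     if (OPAL_SUCCESS == rc) {
 *         rc = post_send(ep, frag, do_rdma, 1);
 *         if (0 != rc) {
 *             mca_btl_openib_endpoint_credit_release (ep, qp, do_rdma, frag);
 *         }
 *     }
 *     OPAL_THREAD_UNLOCK(&ep->endpoint_lock);
 */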

END_C_DECLS

#endif