/* -*- Mode: C; c-basic-offset:4 ; -*- */
/*
 * Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation.  All rights reserved.
 * Copyright (c) 2004-2007 The University of Tennessee and The University
 *                         of Tennessee Research Foundation.  All rights
 *                         reserved.
 * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart.  All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 *
 * This commit brings in two major things:
 *
 * 1. Galen's fine-grained control of queue pair resources in the openib
 *    BTL.
 * 2. Pasha's new implementation of asynchronous HCA event handling.
 *
 * Pasha's new implementation doesn't take much explanation, but the new
 * "multifrag" stuff does.
 *
 * Note that "svn merge" was not used to bring this new code over from
 * the /tmp/ib_multifrag branch -- something bad happened in the periodic
 * trunk pulls on that branch, making an actual merge back to the trunk
 * effectively impossible (i.e., lots and lots of arbitrary conflicts and
 * artificial changes). :-(
 *
 * == Fine-grained control of queue pair resources ==
 *
 * Galen's fine-grained control of queue pair resources in the OpenIB BTL
 * (thanks to Gleb for fixing broken code and providing additional
 * functionality, Pasha for finding broken code, and Jeff for doing all
 * the svn work and regression testing).
 *
 * Prior to this commit, the OpenIB BTL created two queue pairs: one for
 * eager-size fragments and one for max-send-size fragments.  When use
 * of the shared receive queue (SRQ) was specified (via "-mca
 * btl_openib_use_srq 1"), these QPs would use a shared receive queue
 * for receive buffers instead of the default per-peer (PP) receive
 * queues and buffers.  One consequence of this design is that receive
 * buffer utilization (the size of the data received as a percentage of
 * the receive buffer used for the data) was quite poor for a number of
 * applications.
 *
 * The new design allows multiple QPs to be specified at runtime.  Each
 * QP can be set up to use PP or SRQ receive buffers, and gives
 * fine-grained control over the receive buffer size, the number of
 * receive buffers to post, and when to replenish the receive queue (low
 * water mark); for SRQ QPs, the number of outstanding sends can also be
 * specified.  The following is an example of the syntax used to
 * describe QPs to the OpenIB BTL via the new MCA parameter
 * btl_openib_receive_queues:
 *
 *   -mca btl_openib_receive_queues \
 *     "P,128,16,4;S,1024,256,128,32;S,4096,256,128,32;S,65536,256,128,32"
 *
 * QP descriptions are delimited by ";" (semicolon), with the individual
 * fields of each QP description delimited by "," (comma).  The above
 * example therefore describes 4 QPs.
 *
 * The first QP is: P,128,16,4
 *
 * Per-peer receive buffer QPs are indicated by a starting field of "P";
 * the first QP (shown above) is therefore a per-peer QP.  The second
 * field is the size of the receive buffer in bytes (128).  The third
 * field is the number of receive buffers to allocate to the QP (16).
 * The fourth field is the low watermark of receive buffers at which the
 * BTL will repost receive buffers to the QP (4).
 *
 * The second QP is: S,1024,256,128,32
 *
 * Shared receive queue QPs are indicated by a starting field of "S";
 * the second QP (shown above) is therefore an SRQ-based QP.  The
 * second, third, and fourth fields are the same as for a per-peer QP.
 * The fifth field is the number of outstanding sends allowed at any
 * given time on the QP (32).  This provides a "good enough" flow
 * control mechanism for some regular communication patterns.
 *
 * QPs MUST be specified in ascending receive buffer size order.  This
 * requirement may be removed prior to the 1.3 release.
 *
 * This commit was SVN r15474.
 *
 * Copyright (c) 2006-2007 Cisco Systems, Inc.  All rights reserved.
 * Copyright (c) 2006-2007 Los Alamos National Security, LLC.  All rights
 *                         reserved.
 * Copyright (c) 2006-2007 Voltaire All rights reserved.
 * Copyright (c) 2006-2007 Mellanox Technologies, Inc.  All rights reserved.
 *
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

#include "ompi_config.h"

#include <sys/time.h>
#include <time.h>
#include <errno.h>
#include <string.h>

#include "orte/mca/oob/base/base.h"
#include "orte/mca/rml/rml.h"
#include "orte/mca/errmgr/errmgr.h"

#include "ompi/types.h"
#include "ompi/mca/pml/base/pml_base_sendreq.h"
#include "ompi/class/ompi_free_list.h"

#include "btl_openib_endpoint.h"
#include "btl_openib_proc.h"
#include "btl_openib_xrc.h"
#include "btl_openib_async.h"
static void mca_btl_openib_endpoint_construct(mca_btl_base_endpoint_t* endpoint);
static void mca_btl_openib_endpoint_destruct(mca_btl_base_endpoint_t* endpoint);

static int post_send(mca_btl_openib_endpoint_t *ep,
                     mca_btl_openib_send_frag_t *frag, const bool rdma)
{
    mca_btl_openib_module_t *openib_btl = ep->endpoint_btl;
    mca_btl_base_segment_t *seg = &to_base_frag(frag)->segment;
    struct ibv_sge *sg = &to_com_frag(frag)->sg_entry;
    struct ibv_send_wr *sr_desc = &to_out_frag(frag)->sr_desc;
    struct ibv_send_wr *bad_wr;
    int qp = to_base_frag(frag)->base.order;
    sg->length = seg->seg_len + sizeof(mca_btl_openib_header_t) +
        (rdma ? sizeof(mca_btl_openib_footer_t) : 0) + frag->coalesced_length;

    sr_desc->send_flags = ib_send_flags(sg->length, ep);

    if(ep->nbo)
        BTL_OPENIB_HEADER_HTON(*frag->hdr);

    if(rdma) {
        int32_t head;
        mca_btl_openib_footer_t* ftr =
            (mca_btl_openib_footer_t*)(((char*)frag->hdr) + sg->length -
                sizeof(mca_btl_openib_footer_t));
        sr_desc->opcode = IBV_WR_RDMA_WRITE;
        MCA_BTL_OPENIB_RDMA_FRAG_SET_SIZE(ftr, sg->length);
        MCA_BTL_OPENIB_RDMA_MAKE_LOCAL(ftr);
#if OMPI_ENABLE_DEBUG
        ftr->seq = ep->eager_rdma_remote.seq++;
#endif
        if(ep->nbo)
            BTL_OPENIB_FOOTER_HTON(*ftr);

        sr_desc->wr.rdma.rkey = ep->eager_rdma_remote.rkey;
        MCA_BTL_OPENIB_RDMA_MOVE_INDEX(ep->eager_rdma_remote.head, head);
        sr_desc->wr.rdma.remote_addr =
            ep->eager_rdma_remote.base.lval +
            head * openib_btl->eager_rdma_frag_size +
            sizeof(mca_btl_openib_header_t) +
            mca_btl_openib_component.eager_limit +
            sizeof(mca_btl_openib_footer_t);
        sr_desc->wr.rdma.remote_addr -= sg->length;
    } else {
        if(BTL_OPENIB_QP_TYPE_PP(qp)) {
            sr_desc->opcode = IBV_WR_SEND;
        } else {
            sr_desc->opcode = IBV_WR_SEND_WITH_IMM;
            sr_desc->imm_data = ep->rem_info.rem_index;
        }
    }

#if HAVE_XRC
    if(BTL_OPENIB_QP_TYPE_XRC(qp))
        sr_desc->xrc_remote_srq_num = ep->rem_info.rem_srqs[qp].rem_srq_num;
#endif
    assert(sg->addr == (uint64_t)(uintptr_t)frag->hdr);

    return ibv_post_send(ep->qps[qp].qp->lcl_qp, sr_desc, &bad_wr);
}

static inline int acruire_wqe(mca_btl_openib_endpoint_t *ep,
                              mca_btl_openib_send_frag_t *frag)
{
    int qp = to_base_frag(frag)->base.order;
    int prio = !(to_base_frag(frag)->base.des_flags & MCA_BTL_DES_FLAGS_PRIORITY);

    if(qp_get_wqe(ep, qp) < 0) {
        qp_put_wqe(ep, qp);
        OPAL_THREAD_LOCK(&ep->qps[qp].qp->lock);
        opal_list_append(&ep->qps[qp].qp->pending_frags[prio],
                         (opal_list_item_t *)frag);
        OPAL_THREAD_UNLOCK(&ep->qps[qp].qp->lock);
        return OMPI_ERR_OUT_OF_RESOURCE;
    }

    return OMPI_SUCCESS;
}

static inline int
acquire_eager_rdma_send_credit(mca_btl_openib_endpoint_t *endpoint)
{
    if(OPAL_THREAD_ADD32(&endpoint->eager_rdma_remote.tokens, -1) < 0) {
        OPAL_THREAD_ADD32(&endpoint->eager_rdma_remote.tokens, 1);
        return OMPI_ERR_OUT_OF_RESOURCE;
    }

    return OMPI_SUCCESS;
}
static int acquire_send_credit(mca_btl_openib_endpoint_t *endpoint,
                               mca_btl_openib_send_frag_t *frag)
{
    mca_btl_openib_module_t *openib_btl = endpoint->endpoint_btl;
    int qp = to_base_frag(frag)->base.order;
    int prio = !(to_base_frag(frag)->base.des_flags & MCA_BTL_DES_FLAGS_PRIORITY);

    if(BTL_OPENIB_QP_TYPE_PP(qp)) {
        if(OPAL_THREAD_ADD32(&endpoint->qps[qp].u.pp_qp.sd_credits, -1) < 0) {
            OPAL_THREAD_ADD32(&endpoint->qps[qp].u.pp_qp.sd_credits, 1);
            opal_list_append(&endpoint->qps[qp].pending_frags[prio],
                             (opal_list_item_t *)frag);
            return OMPI_ERR_OUT_OF_RESOURCE;
        }
    } else {
        if(OPAL_THREAD_ADD32(&openib_btl->qps[qp].u.srq_qp.sd_credits, -1) < 0)
        {
            OPAL_THREAD_ADD32(&openib_btl->qps[qp].u.srq_qp.sd_credits, 1);
|
2007-07-18 05:15:59 +04:00
|
|
|
OPAL_THREAD_LOCK(&openib_btl->ib_lock);
|
2007-11-28 10:12:44 +03:00
|
|
|
opal_list_append(&openib_btl->qps[qp].u.srq_qp.pending_frags[prio],
|
2007-07-18 05:15:59 +04:00
|
|
|
(opal_list_item_t *)frag);
|
|
|
|
OPAL_THREAD_UNLOCK(&openib_btl->ib_lock);
|
|
|
|
return OMPI_ERR_OUT_OF_RESOURCE;
|
2006-09-05 20:00:18 +04:00
|
|
|
}
|
2007-07-18 05:15:59 +04:00
|
|
|
}
|
2007-11-28 10:12:44 +03:00
|
|
|
|
2007-07-18 05:15:59 +04:00
|
|
|
return OMPI_SUCCESS;
|
2006-09-05 20:00:18 +04:00
|
|
|
}
|
|
|
|
|
2007-08-01 16:15:43 +04:00
|
|
|
#define GET_CREDITS(FROM, TO) \
|
|
|
|
do { \
|
|
|
|
TO = FROM; \
|
|
|
|
} while(0 == OPAL_ATOMIC_CMPSET_32(&FROM, TO, 0))
|
|
|
|
|
2007-11-28 10:12:44 +03:00
|
|
|
/* this function is called with endpoint->endpoint_lock held */
|
|
|
|
int mca_btl_openib_endpoint_post_send(mca_btl_openib_endpoint_t *endpoint,
|
2007-11-28 10:11:14 +03:00
|
|
|
mca_btl_openib_send_frag_t *frag)
|
2008-01-21 15:11:18 +03:00
|
|
|
{
|
2007-11-28 10:11:14 +03:00
|
|
|
mca_btl_openib_header_t *hdr = frag->hdr;
|
|
|
|
mca_btl_base_descriptor_t *des = &to_base_frag(frag)->base;
|
2007-11-28 10:12:44 +03:00
|
|
|
int qp, ib_rc;
|
2007-08-01 16:15:43 +04:00
|
|
|
int32_t cm_return;
|
2007-11-28 10:12:44 +03:00
|
|
|
bool do_rdma = false;
|
2007-12-09 17:05:13 +03:00
|
|
|
size_t eager_limit;
|
2006-09-05 20:00:18 +04:00
|
|
|
|
2007-11-28 10:12:44 +03:00
|
|
|
if(OPAL_LIKELY(des->order == MCA_BTL_NO_ORDER))
|
|
|
|
des->order = frag->qp_idx;
|
2007-07-18 05:15:59 +04:00
|
|
|
|
2007-11-28 10:12:44 +03:00
|
|
|
qp = des->order;
|
2006-09-12 13:17:59 +04:00
|
|
|
|
2007-11-28 10:12:44 +03:00
|
|
|
if(acquire_wqe(endpoint, frag) != OMPI_SUCCESS)
|
2007-12-09 17:14:11 +03:00
|
|
|
return OMPI_ERR_RESOURCE_BUSY;
|
2007-11-28 10:12:44 +03:00
|
|
|
|
2007-12-09 17:05:13 +03:00
|
|
|
eager_limit = mca_btl_openib_component.eager_limit +
|
|
|
|
sizeof(mca_btl_openib_header_coalesced_t) +
|
|
|
|
sizeof(mca_btl_openib_control_header_t);
|
|
|
|
if(des->des_src->seg_len + frag->coalesced_length <= eager_limit &&
|
2007-11-28 10:12:44 +03:00
|
|
|
(des->des_flags & MCA_BTL_DES_FLAGS_PRIORITY)) {
|
|
|
|
/* High priority frag. Try to send over eager RDMA */
|
|
|
|
if(acquire_eager_rdma_send_credit(endpoint) == OMPI_SUCCESS)
|
|
|
|
do_rdma = true;
|
2006-09-12 13:17:59 +04:00
|
|
|
}
|
2007-07-24 19:19:51 +04:00
|
|
|
|
2007-11-28 10:12:44 +03:00
|
|
|
if(!do_rdma && acquire_send_credit(endpoint, frag) != OMPI_SUCCESS) {
|
2007-11-28 10:15:20 +03:00
|
|
|
qp_put_wqe(endpoint, qp);
|
2007-12-09 17:14:11 +03:00
|
|
|
return OMPI_ERR_RESOURCE_BUSY;
|
2007-07-24 19:19:51 +04:00
|
|
|
}
|
2007-08-01 16:15:43 +04:00
|
|
|
|
2007-11-28 10:12:44 +03:00
|
|
|
GET_CREDITS(endpoint->eager_rdma_local.credits, hdr->credits);
|
|
|
|
if(hdr->credits)
|
|
|
|
hdr->credits |= BTL_OPENIB_RDMA_CREDITS_FLAG;
|
|
|
|
|
|
|
|
if(!do_rdma) {
|
|
|
|
if(BTL_OPENIB_QP_TYPE_PP(qp) && 0 == hdr->credits) {
|
|
|
|
GET_CREDITS(endpoint->qps[qp].u.pp_qp.rd_credits, hdr->credits);
|
2006-09-12 13:17:59 +04:00
|
|
|
}
|
2007-12-09 16:56:13 +03:00
|
|
|
} else {
|
|
|
|
hdr->credits |= (qp << 11);
|
|
|
|
}
|
2007-11-28 10:12:44 +03:00
|
|
|
|
2007-12-09 16:56:13 +03:00
|
|
|
GET_CREDITS(endpoint->qps[qp].u.pp_qp.cm_return, cm_return);
|
|
|
|
/* cm_seen is only 8 bits, but cm_return is 32 bits */
|
|
|
|
if(cm_return > 255) {
|
|
|
|
hdr->cm_seen = 255;
|
|
|
|
cm_return -= 255;
|
|
|
|
OPAL_THREAD_ADD32(&endpoint->qps[qp].u.pp_qp.cm_return, cm_return);
|
|
|
|
} else {
|
|
|
|
hdr->cm_seen = cm_return;
|
2005-07-12 17:38:54 +04:00
|
|
|
}
|
2007-08-07 03:40:35 +04:00
|
|
|
|
2008-01-21 15:11:18 +03:00
|
|
|
ib_rc = post_send(endpoint, frag, do_rdma);
|
2007-11-28 10:12:44 +03:00
|
|
|
|
|
|
|
if(!ib_rc)
|
|
|
|
return OMPI_SUCCESS;
|
|
|
|
|
|
|
|
if(endpoint->nbo)
|
|
|
|
BTL_OPENIB_HEADER_NTOH(*hdr);
|
|
|
|
|
|
|
|
if(BTL_OPENIB_IS_RDMA_CREDITS(hdr->credits)) {
|
|
|
|
OPAL_THREAD_ADD32(&endpoint->eager_rdma_local.credits,
|
|
|
|
BTL_OPENIB_CREDITS(hdr->credits));
|
|
|
|
}
|
|
|
|
|
2007-11-28 10:15:20 +03:00
|
|
|
qp_put_wqe(endpoint, qp);
|
2007-11-28 10:12:44 +03:00
|
|
|
|
|
|
|
if(do_rdma) {
|
|
|
|
OPAL_THREAD_ADD32(&endpoint->eager_rdma_remote.tokens, 1);
|
|
|
|
} else {
|
|
|
|
if(BTL_OPENIB_QP_TYPE_PP(qp)) {
|
|
|
|
OPAL_THREAD_ADD32(&endpoint->qps[qp].u.pp_qp.rd_credits,
|
|
|
|
hdr->credits);
|
|
|
|
OPAL_THREAD_ADD32(&endpoint->qps[qp].u.pp_qp.sd_credits, 1);
|
2007-11-28 10:18:59 +03:00
|
|
|
} else if (BTL_OPENIB_QP_TYPE_SRQ(qp)) {
|
2007-11-28 10:12:44 +03:00
|
|
|
mca_btl_openib_module_t *openib_btl = endpoint->endpoint_btl;
|
|
|
|
OPAL_THREAD_ADD32(&openib_btl->qps[qp].u.srq_qp.sd_credits, 1);
|
|
|
|
}
|
|
|
|
}
|
2008-01-21 15:11:18 +03:00
|
|
|
BTL_ERROR(("error posting send request error %d: %s\n",
|
|
|
|
ib_rc, strerror(ib_rc)));
|
|
|
|
return OMPI_ERROR;
|
2005-07-01 01:28:35 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
|
2008-01-21 15:11:18 +03:00
|
|
|
OBJ_CLASS_INSTANCE(mca_btl_openib_endpoint_t,
|
|
|
|
opal_list_item_t, mca_btl_openib_endpoint_construct,
|
2005-07-01 01:28:35 +04:00
|
|
|
mca_btl_openib_endpoint_destruct);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Initialize state of the endpoint instance.
|
|
|
|
*
|
|
|
|
*/
|
2007-11-28 10:15:20 +03:00
|
|
|
static mca_btl_openib_qp_t *endpoint_alloc_qp(void)
|
2007-03-14 17:33:24 +03:00
|
|
|
{
|
2007-11-28 10:15:20 +03:00
|
|
|
mca_btl_openib_qp_t *qp = calloc(1, sizeof(mca_btl_openib_qp_t));
|
|
|
|
if(!qp) {
|
|
|
|
BTL_ERROR(("Failed to allocate memory for qp"));
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
OBJ_CONSTRUCT(&qp->pending_frags[0], opal_list_t);
|
|
|
|
OBJ_CONSTRUCT(&qp->pending_frags[1], opal_list_t);
|
|
|
|
OBJ_CONSTRUCT(&qp->lock, opal_mutex_t);
|
|
|
|
|
|
|
|
return qp;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
endpoint_init_qp_pp(mca_btl_openib_endpoint_qp_t *ep_qp, const int qp)
|
|
|
|
{
|
|
|
|
mca_btl_openib_qp_info_t *qp_info = &mca_btl_openib_component.qp_infos[qp];
|
|
|
|
ep_qp->qp = endpoint_alloc_qp();
|
|
|
|
ep_qp->qp->users++;
|
|
|
|
|
|
|
|
/* local credits are set here such that on initial posting
|
|
|
|
* of the receive buffers we end up with zero credits to return
|
2008-01-21 15:11:18 +03:00
|
|
|
* to our peer. The peer initializes its sd_credits to reflect this
|
|
|
|
* below. Note that this may be a problem for iWARP as the sender
|
|
|
|
* now has credits even if the receive buffers are not yet posted
|
2007-11-28 10:15:20 +03:00
|
|
|
*/
|
|
|
|
ep_qp->u.pp_qp.rd_credits = -qp_info->rd_num;
|
|
|
|
|
|
|
|
ep_qp->u.pp_qp.rd_posted = 0;
|
|
|
|
ep_qp->u.pp_qp.cm_sent = 0;
|
|
|
|
ep_qp->u.pp_qp.cm_return = -qp_info->u.pp_qp.rd_rsv;
|
|
|
|
ep_qp->u.pp_qp.cm_received = qp_info->u.pp_qp.rd_rsv;
|
|
|
|
|
|
|
|
/* initialize the local view of credits */
|
|
|
|
ep_qp->u.pp_qp.sd_credits = qp_info->rd_num;
|
|
|
|
|
|
|
|
/* number of available send WQEs */
|
|
|
|
ep_qp->qp->sd_wqe = qp_info->rd_num;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
endpoint_init_qp_srq(mca_btl_openib_endpoint_qp_t *ep_qp, const int qp)
|
|
|
|
{
|
|
|
|
ep_qp->qp = endpoint_alloc_qp();
|
|
|
|
ep_qp->qp->users++;
|
|
|
|
|
|
|
|
/* number of available send WQEs */
|
|
|
|
ep_qp->qp->sd_wqe = mca_btl_openib_component.qp_infos[qp].u.srq_qp.sd_max;
|
|
|
|
}
|
|
|
|
|
2007-11-28 10:18:59 +03:00
|
|
|
static void
|
2008-01-29 16:15:21 +03:00
|
|
|
endpoint_init_qp_xrc(mca_btl_base_endpoint_t *ep, const int qp)
|
2007-11-28 10:18:59 +03:00
|
|
|
{
|
2008-01-29 16:15:21 +03:00
|
|
|
int max = ep->endpoint_btl->hca->ib_dev_attr.max_qp_wr -
|
|
|
|
(mca_btl_openib_component.use_eager_rdma ?
|
|
|
|
mca_btl_openib_component.max_eager_rdma : 0);
|
|
|
|
mca_btl_openib_endpoint_qp_t *ep_qp = &ep->qps[qp];
|
|
|
|
ep_qp->qp = ep->ib_addr->qp;
|
2007-11-28 10:21:07 +03:00
|
|
|
ep_qp->qp->sd_wqe += mca_btl_openib_component.qp_infos[qp].u.srq_qp.sd_max;
|
2008-01-29 16:15:21 +03:00
|
|
|
/* make sure that we don't overrun the maximum supported by the HCA */
|
|
|
|
if (ep_qp->qp->sd_wqe > max)
|
|
|
|
ep_qp->qp->sd_wqe = max;
|
2007-11-28 10:18:59 +03:00
|
|
|
ep_qp->qp->users++;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void endpoint_init_qp(mca_btl_base_endpoint_t *ep, const int qp)
|
2007-11-28 10:15:20 +03:00
|
|
|
{
|
2007-11-28 10:18:59 +03:00
|
|
|
mca_btl_openib_endpoint_qp_t *ep_qp = &ep->qps[qp];
|
|
|
|
|
2007-11-28 10:15:20 +03:00
|
|
|
ep_qp->rd_credit_send_lock = 0;
|
|
|
|
ep_qp->credit_frag = NULL;
|
|
|
|
|
|
|
|
OBJ_CONSTRUCT(&ep_qp->pending_frags[0], opal_list_t);
|
|
|
|
OBJ_CONSTRUCT(&ep_qp->pending_frags[1], opal_list_t);
|
|
|
|
switch(BTL_OPENIB_QP_TYPE(qp)) {
|
|
|
|
case MCA_BTL_OPENIB_PP_QP:
|
|
|
|
endpoint_init_qp_pp(ep_qp, qp);
|
|
|
|
break;
|
|
|
|
case MCA_BTL_OPENIB_SRQ_QP:
|
|
|
|
endpoint_init_qp_srq(ep_qp, qp);
|
|
|
|
break;
|
2007-11-28 10:18:59 +03:00
|
|
|
case MCA_BTL_OPENIB_XRC_QP:
|
2007-11-28 10:21:07 +03:00
|
|
|
if(NULL == ep->ib_addr->qp)
|
|
|
|
ep->ib_addr->qp = endpoint_alloc_qp();
|
2008-01-29 16:15:21 +03:00
|
|
|
endpoint_init_qp_xrc(ep, qp);
|
2007-11-28 10:18:59 +03:00
|
|
|
break;
|
2007-11-28 10:15:20 +03:00
|
|
|
default:
|
|
|
|
BTL_ERROR(("Wrong QP type"));
|
|
|
|
break;
|
|
|
|
}
|
2007-03-14 17:33:24 +03:00
|
|
|
}
|
2005-07-01 01:28:35 +04:00
|
|
|
|
2007-11-28 10:21:07 +03:00
|
|
|
void mca_btl_openib_endpoint_init(mca_btl_openib_module_t *btl,
|
|
|
|
mca_btl_base_endpoint_t *ep)
|
2005-07-01 01:28:35 +04:00
|
|
|
{
|
2007-07-18 05:15:59 +04:00
|
|
|
int qp;
|
2007-11-28 10:21:07 +03:00
|
|
|
|
|
|
|
ep->endpoint_btl = btl;
|
|
|
|
ep->use_eager_rdma = btl->hca->use_eager_rdma &
|
|
|
|
mca_btl_openib_component.use_eager_rdma;
|
|
|
|
ep->subnet_id = btl->port_info.subnet_id;
|
|
|
|
|
|
|
|
for(qp = 0; qp < mca_btl_openib_component.num_qps; qp++)
|
|
|
|
endpoint_init_qp(ep, qp);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void mca_btl_openib_endpoint_construct(mca_btl_base_endpoint_t* endpoint)
|
|
|
|
{
|
2007-07-18 05:15:59 +04:00
|
|
|
/* setup qp structures */
|
2007-11-28 10:18:59 +03:00
|
|
|
endpoint->qps = (mca_btl_openib_endpoint_qp_t*)
|
|
|
|
calloc(mca_btl_openib_component.num_qps,
|
|
|
|
sizeof(mca_btl_openib_endpoint_qp_t));
|
|
|
|
if (MCA_BTL_XRC_ENABLED) {
|
|
|
|
endpoint->rem_info.rem_qps = (mca_btl_openib_rem_qp_info_t*)
|
|
|
|
calloc(1, sizeof(mca_btl_openib_rem_qp_info_t));
|
|
|
|
endpoint->rem_info.rem_srqs = (mca_btl_openib_rem_srq_info_t*)
|
|
|
|
calloc(mca_btl_openib_component.num_xrc_qps,
|
|
|
|
sizeof(mca_btl_openib_rem_srq_info_t));
|
|
|
|
} else {
|
2007-11-28 10:15:20 +03:00
|
|
|
endpoint->rem_info.rem_qps = (mca_btl_openib_rem_qp_info_t*)
|
|
|
|
calloc(mca_btl_openib_component.num_qps,
|
|
|
|
sizeof(mca_btl_openib_rem_qp_info_t));
|
2007-11-28 10:18:59 +03:00
|
|
|
endpoint->rem_info.rem_srqs = NULL;
|
2007-07-18 05:15:59 +04:00
|
|
|
}
|
|
|
|
|
2007-11-28 10:18:59 +03:00
|
|
|
endpoint->ib_addr = NULL;
|
2008-02-04 17:03:38 +03:00
|
|
|
endpoint->xrc_recv_qp_num = 0;
|
2005-07-01 01:28:35 +04:00
|
|
|
endpoint->endpoint_btl = 0;
|
|
|
|
endpoint->endpoint_proc = 0;
|
|
|
|
endpoint->endpoint_tstamp = 0.0;
|
|
|
|
endpoint->endpoint_state = MCA_BTL_IB_CLOSED;
|
|
|
|
endpoint->endpoint_retries = 0;
|
2005-10-21 06:21:45 +04:00
|
|
|
OBJ_CONSTRUCT(&endpoint->endpoint_lock, opal_mutex_t);
|
2007-07-18 05:15:59 +04:00
|
|
|
OBJ_CONSTRUCT(&endpoint->pending_lazy_frags, opal_list_t);
|
2006-09-28 15:41:45 +04:00
|
|
|
OBJ_CONSTRUCT(&endpoint->pending_get_frags, opal_list_t);
|
|
|
|
OBJ_CONSTRUCT(&endpoint->pending_put_frags, opal_list_t);
|
2008-01-21 15:11:18 +03:00
|
|
|
|
2005-11-10 23:15:02 +03:00
|
|
|
endpoint->get_tokens = mca_btl_openib_component.ib_qp_ous_rd_atom;
|
|
|
|
|
2006-03-26 12:30:50 +04:00
|
|
|
/* initialize RDMA eager related parts */
|
|
|
|
endpoint->eager_recv_count = 0;
|
|
|
|
memset(&endpoint->eager_rdma_remote, 0,
|
2007-07-18 05:15:59 +04:00
|
|
|
sizeof(mca_btl_openib_eager_rdma_remote_t));
|
2007-03-14 17:33:24 +03:00
|
|
|
memset(&endpoint->eager_rdma_local, 0,
|
2007-07-18 05:15:59 +04:00
|
|
|
sizeof(mca_btl_openib_eager_rdma_local_t));
|
2006-03-26 19:02:43 +04:00
|
|
|
OBJ_CONSTRUCT(&endpoint->eager_rdma_local.lock, opal_mutex_t);
|
2006-03-26 12:30:50 +04:00
|
|
|
|
2008-01-21 15:11:18 +03:00
|
|
|
endpoint->rem_info.rem_lid = 0;
|
|
|
|
endpoint->rem_info.rem_subnet_id = 0;
|
Bring over all the work from the /tmp/ib-hw-detect branch. In
addition to my design and testing, it was conceptually approved by
Gil, Gleb, Pasha, Brad, and Galen. Functionally [probably somewhat
lightly] tested by Galen. We may still have to shake out some bugs
during the next few months, but it seems to be working for all the
cases that I can throw at it.
Here's a summary of the changes from that branch:
* Move MCA parameter registration to a new file (btl_openib_mca.c):
* Properly check the return status of registering MCA params
* Check for valid values of MCA parameters
* Make help strings better
* Otherwise, the only default value of an MCA param that was
changed was max_btls; it went from 4 to -1 (meaning: use all
available)
* Properly prototyped internal functions in _component.c
* Made a bunch of functions static that didn't need to be public
* Renamed to remove "mca_" prefix from static functions
* Call new MCA param registration function
* Call new INI file read/lookup/finalize functions
* Updated a bunch of macros to be "BTL_" instead of "ORTE_"
* Be a little more consistent with return values
* Handle -1 for the max_btls MCA param
* Fixed a free() that should have been an OBJ_RELEASE()
* Some re-indenting
* Added INI-file parsing
* New flex file: btl_openib_ini.l
* New default HCA params .ini file (probably to be expanded over
time by other HCA vendors)
* Added more show_help messages for parsing problems
* Read in INI files and cache the values for later lookup
* When component opens an HCA, lookup to see if any corresponding
values were found in the INI files (ID'ed by the HCA vendor_id
and vendor_part_id)
* Added btl_openib_verbose MCA param that shows what the INI-file
stuff does (e.g., shows which MTU your HCA ends up using)
* Added btl_openib_hca_param_files as a colon-delimited list of INI
files to check for values during startup (in order,
left-to-right, just like the MCA base directory param).
* MTU is currently the only value supported in this framework.
* It is not a fatal error if we don't find params for the HCA in
the INI file(s). Instead, just print a warning. New MCA param
btl_openib_warn_no_hca_params_found can be used to disable
printing the warning.
* Add MTU to peer negotiation when making a connection
* Exchange maximum MTU; select the lesser of the two
This commit was SVN r11182.
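As an illustration of the INI-file lookup described above, an entry keyed by the HCA's vendor and part IDs might look like the following. This stanza is a sketch only: the section name and key spellings are assumptions for illustration, not copied from the shipped defaults file.

```ini
# Hypothetical stanza in an hca-params .ini file; entries are matched
# against the vendor_id / vendor_part_id pair of the opened HCA.
[Mellanox Tavor]
vendor_id = 0x2c9
vendor_part_id = 23108
mtu = 1024
```

At startup the component reads every file listed in btl_openib_hca_param_files (left to right), caches the values, and consults the cache when an HCA is opened; MTU is the only value supported by this framework at the time of the commit.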
2006-08-14 23:30:37 +04:00
|
|
|
endpoint->rem_info.rem_mtu = 0;
|
2007-01-13 02:14:45 +03:00
|
|
|
endpoint->nbo = false;
|
2007-07-18 05:15:59 +04:00
|
|
|
endpoint->use_eager_rdma = false;
|
|
|
|
endpoint->eager_rdma_remote.tokens = 0;
|
|
|
|
endpoint->eager_rdma_local.credits = 0;
|
2005-07-01 01:28:35 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
 * Destroy an endpoint
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
|
|
|
|
static void mca_btl_openib_endpoint_destruct(mca_btl_base_endpoint_t* endpoint)
|
|
|
|
{
|
2007-05-09 01:47:21 +04:00
|
|
|
bool pval_clean = false;
|
2007-07-18 05:15:59 +04:00
|
|
|
int qp;
|
2007-11-28 10:15:54 +03:00
|
|
|
|
2007-05-09 01:47:21 +04:00
|
|
|
/* Release memory resources */
|
|
|
|
do {
|
|
|
|
/* Make sure that mca_btl_openib_endpoint_connect_eager_rdma()
|
|
|
|
* was not in "connect" or "bad" flow (failed to allocate memory)
|
2008-01-21 15:11:18 +03:00
|
|
|
* and changed the pointer back to NULL
|
2007-05-09 01:47:21 +04:00
|
|
|
*/
|
|
|
|
if(!opal_atomic_cmpset_ptr(&endpoint->eager_rdma_local.base.pval, NULL,
|
|
|
|
(void*)1)) {
|
2008-01-21 15:11:18 +03:00
|
|
|
if ((void*)1 != endpoint->eager_rdma_local.base.pval &&
|
2007-05-09 01:47:21 +04:00
|
|
|
NULL != endpoint->eager_rdma_local.base.pval) {
|
|
|
|
endpoint->endpoint_btl->super.btl_mpool->mpool_free(endpoint->endpoint_btl->super.btl_mpool,
|
2008-01-21 15:11:18 +03:00
|
|
|
endpoint->eager_rdma_local.base.pval,
|
2007-05-09 01:47:21 +04:00
|
|
|
(mca_mpool_base_registration_t*)endpoint->eager_rdma_local.reg);
|
|
|
|
pval_clean = true;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
pval_clean = true;
|
|
|
|
}
|
|
|
|
} while (!pval_clean);
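The compare-and-swap dance above can be sketched with C11 atomics standing in for opal_atomic_cmpset_ptr. The names below are illustrative, and this single-pass version omits the retry loop the real destructor uses; the idea is the same: CAS a NULL slot to a (void*)1 sentinel so a racing connect cannot install a buffer we would leak, and otherwise free whatever real buffer was observed.

```c
/* Sketch of the claim-then-free pattern, with C11 atomics standing in
 * for opal_atomic_cmpset_ptr. Names are illustrative, not Open MPI's. */
#include <stdatomic.h>
#include <stdlib.h>

#define CLAIMED ((void *) 1)  /* sentinel: slot is spoken for */

/* Returns 1 if a real buffer was freed, 0 if the slot was empty or
 * already claimed; either way the sentinel blocks later reuse. */
int reclaim_slot(_Atomic(void *) *slot)
{
    void *expected = NULL;
    /* Try to fence off an empty slot by installing the sentinel. */
    if (atomic_compare_exchange_strong(slot, &expected, CLAIMED))
        return 0;
    /* CAS failed: "expected" now holds the value we observed. */
    if (CLAIMED != expected) {
        free(expected);               /* a real buffer was installed */
        atomic_store(slot, CLAIMED);
        return 1;
    }
    return 0;                         /* someone else already claimed it */
}
```

The sentinel value (void*)1 is safe to use as a marker because it can never be a valid heap pointer.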
|
|
|
|
|
|
|
|
/* Close opened QPs, if we have them */
|
2008-01-21 15:11:18 +03:00
|
|
|
for(qp = 0; qp < mca_btl_openib_component.num_qps; qp++) {
|
2007-11-28 10:15:54 +03:00
|
|
|
MCA_BTL_OPENIB_CLEAN_PENDING_FRAGS(&endpoint->qps[qp].pending_frags[0]);
|
|
|
|
MCA_BTL_OPENIB_CLEAN_PENDING_FRAGS(&endpoint->qps[qp].pending_frags[1]);
|
|
|
|
OBJ_DESTRUCT(&endpoint->qps[qp].pending_frags[0]);
|
|
|
|
OBJ_DESTRUCT(&endpoint->qps[qp].pending_frags[1]);
|
|
|
|
|
|
|
|
if(--endpoint->qps[qp].qp->users != 0)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
MCA_BTL_OPENIB_CLEAN_PENDING_FRAGS(
|
|
|
|
&endpoint->qps[qp].qp->pending_frags[0]);
|
|
|
|
MCA_BTL_OPENIB_CLEAN_PENDING_FRAGS(
|
|
|
|
&endpoint->qps[qp].qp->pending_frags[1]);
|
|
|
|
OBJ_DESTRUCT(&endpoint->qps[qp].qp->pending_frags[0]);
|
|
|
|
OBJ_DESTRUCT(&endpoint->qps[qp].qp->pending_frags[1]);
|
|
|
|
|
|
|
|
if(endpoint->qps[qp].qp->lcl_qp != NULL)
|
|
|
|
if(ibv_destroy_qp(endpoint->qps[qp].qp->lcl_qp))
|
2007-07-18 05:15:59 +04:00
|
|
|
BTL_ERROR(("Failed to destroy QP:%d\n", qp));
|
2007-11-28 10:15:54 +03:00
|
|
|
|
|
|
|
free(endpoint->qps[qp].qp);
|
2007-05-09 01:47:21 +04:00
|
|
|
}
|
2007-11-28 10:15:54 +03:00
|
|
|
|
|
|
|
/* free the qps */
|
|
|
|
free(endpoint->qps);
|
|
|
|
|
2008-02-04 17:03:38 +03:00
|
|
|
/* unregister xrc recv qp */
|
|
|
|
#if HAVE_XRC
|
|
|
|
if (0 != endpoint->xrc_recv_qp_num) {
|
|
|
|
if(ibv_unreg_xrc_rcv_qp(endpoint->endpoint_btl->hca->xrc_domain,
|
|
|
|
endpoint->xrc_recv_qp_num)) {
|
|
|
|
BTL_ERROR(("Failed to unregister XRC recv QP:%d\n", endpoint->xrc_recv_qp_num));
|
2007-11-28 10:18:59 +03:00
|
|
|
}
|
|
|
|
}
|
2008-02-04 17:03:38 +03:00
|
|
|
#endif
|
2007-11-28 10:18:59 +03:00
|
|
|
|
2007-05-09 01:47:21 +04:00
|
|
|
OBJ_DESTRUCT(&endpoint->endpoint_lock);
|
|
|
|
/* Clean pending lists */
|
2007-11-28 10:11:14 +03:00
|
|
|
MCA_BTL_OPENIB_CLEAN_PENDING_FRAGS(&endpoint->pending_lazy_frags);
|
2007-07-18 05:15:59 +04:00
|
|
|
OBJ_DESTRUCT(&endpoint->pending_lazy_frags);
|
2008-01-21 15:11:18 +03:00
|
|
|
|

    MCA_BTL_OPENIB_CLEAN_PENDING_FRAGS(&endpoint->pending_get_frags);
    OBJ_DESTRUCT(&endpoint->pending_get_frags);

    MCA_BTL_OPENIB_CLEAN_PENDING_FRAGS(&endpoint->pending_put_frags);
    OBJ_DESTRUCT(&endpoint->pending_put_frags);
}

/*
 * Called when the connect module has created all the QPs on an
 * endpoint and needs to have some receive buffers posted.
 */
int mca_btl_openib_endpoint_post_recvs(mca_btl_openib_endpoint_t *endpoint)
{
    int qp;

    for (qp = 0; qp < mca_btl_openib_component.num_qps; ++qp) {
        if (BTL_OPENIB_QP_TYPE_PP(qp)) {
            mca_btl_openib_endpoint_post_rr_nolock(endpoint, qp);
        } else {
            mca_btl_openib_post_srr(endpoint->endpoint_btl, qp);
        }
    }

    return OMPI_SUCCESS;
}

/*
 * Called when the connect module has completed setup of an endpoint.
 */
void mca_btl_openib_endpoint_connected(mca_btl_openib_endpoint_t *endpoint)
{
    opal_list_item_t *frag_item, *ep_item;
    mca_btl_openib_send_frag_t *frag;
    mca_btl_openib_endpoint_t *ep;
    bool master = false;

    if (MCA_BTL_XRC_ENABLED) {
        OPAL_THREAD_LOCK(&endpoint->ib_addr->addr_lock);
        if (MCA_BTL_IB_ADDR_CONNECTED == endpoint->ib_addr->status) {
            /* We are not the XRC master; our qp pointer is already set
             * to the master's qp. */
            master = false;
        } else {
            /* We are the XRC master */
            endpoint->ib_addr->status = MCA_BTL_IB_ADDR_CONNECTED;
            master = true;
        }
    }

    /* Run over all QPs and load the alternative path */
#if OMPI_HAVE_THREADS
    if (APM_ENABLED) {
        int i;

        if (MCA_BTL_XRC_ENABLED) {
            if (master) {
                mca_btl_openib_load_apm(endpoint->ib_addr->qp->lcl_qp, endpoint);
            }
        } else {
            for (i = 0; i < mca_btl_openib_component.num_qps; i++) {
                mca_btl_openib_load_apm(endpoint->qps[i].qp->lcl_qp, endpoint);
            }
        }
    }
#endif

    endpoint->endpoint_state = MCA_BTL_IB_CONNECTED;
    endpoint->endpoint_btl->hca->non_eager_rdma_endpoints++;

    /* The connection is correctly set up.  Now we can decrease the
     * event trigger. */
    opal_progress_event_users_decrement();

    /* While there are frags in the list, process them */
    while (!opal_list_is_empty(&(endpoint->pending_lazy_frags))) {
        frag_item = opal_list_remove_first(&(endpoint->pending_lazy_frags));
        frag = to_send_frag(frag_item);
        /* We need to post this one */
        if (OMPI_ERROR == mca_btl_openib_endpoint_post_send(endpoint, frag)) {
            BTL_ERROR(("Error posting send"));
        }
    }

    /* If the upper layer called put or get before the connection moved to
     * the connected state, then we restart them here. */
    mca_btl_openib_frag_progress_pending_put_get(endpoint,
            mca_btl_openib_component.rdma_qp);

    if (MCA_BTL_XRC_ENABLED) {
        while (master && !opal_list_is_empty(&endpoint->ib_addr->pending_ep)) {
            ep_item = opal_list_remove_first(&endpoint->ib_addr->pending_ep);
            ep = (mca_btl_openib_endpoint_t *)ep_item;
            if (OMPI_SUCCESS != ompi_btl_openib_connect.bcf_start_connect(ep)) {
                BTL_ERROR(("Failed to connect pending endpoint\n"));
            }
        }
        OPAL_THREAD_UNLOCK(&endpoint->ib_addr->addr_lock);
    }
}

/*
 * Attempt to send a fragment using a given endpoint.  If the endpoint is not
 * connected, queue the fragment and start the connection as required.
 */
int mca_btl_openib_endpoint_send(mca_btl_base_endpoint_t* ep,
                                 mca_btl_openib_send_frag_t* frag)
{
    int rc;

    OPAL_THREAD_LOCK(&ep->endpoint_lock);
    rc = check_endpoint_state(ep, &to_base_frag(frag)->base,
                              &ep->pending_lazy_frags);

    if (OPAL_LIKELY(OMPI_SUCCESS == rc)) {
        rc = mca_btl_openib_endpoint_post_send(ep, frag);
    }
    OPAL_THREAD_UNLOCK(&ep->endpoint_lock);

    return rc;
}

/**
 * Return control fragment: completion callback for a credit message.
 */
static void mca_btl_openib_endpoint_credits(mca_btl_base_module_t* btl,
                                            struct mca_btl_base_endpoint_t* ep,
                                            struct mca_btl_base_descriptor_t* des,
                                            int status)
{
    int qp;
    mca_btl_openib_send_control_frag_t *frag = to_send_control_frag(des);

    qp = frag->qp_idx;

    /* We don't acquire a WQE for a credit message, so decrement here.
     * Note: this is done on the QP used for credit management. */
    qp_get_wqe(ep, des->order);

    if (check_send_credits(ep, qp) || check_eager_rdma_credits(ep)) {
        mca_btl_openib_endpoint_send_credits(ep, qp);
    } else {
        BTL_OPENIB_CREDITS_SEND_UNLOCK(ep, qp);
        /* Check one more time whether credits became available after the
         * unlock. */
        send_credits(ep, qp);
    }
}

/**
 * Return credits to the peer.
 */
void mca_btl_openib_endpoint_send_credits(mca_btl_openib_endpoint_t* endpoint,
                                          const int qp)
{
    mca_btl_openib_module_t* openib_btl = endpoint->endpoint_btl;
    mca_btl_openib_send_control_frag_t* frag;
    mca_btl_openib_rdma_credits_header_t *credits_hdr;
    int rc;
    bool do_rdma = false;
    int32_t cm_return;

    frag = endpoint->qps[qp].credit_frag;

2007-11-28 10:12:44 +03:00
|
|
|
if(OPAL_UNLIKELY(NULL == frag)) {
|
2008-01-09 13:26:21 +03:00
|
|
|
frag = alloc_credit_frag(openib_btl);
|
This commit brings in two major things:
1. Galen's fine-grain control of queue pair resources in the openib
BTL.
1. Pasha's new implementation of asychronous HCA event handling.
Pasha's new implementation doesn't take much explanation, but the new
"multifrag" stuff does.
Note that "svn merge" was not used to bring this new code from the
/tmp/ib_multifrag branch -- something Bad happened in the periodic
trunk pulls on that branch making an actual merge back to the trunk
effectively impossible (i.e., lots and lots of arbitrary conflicts and
artifical changes). :-(
== Fine-grain control of queue pair resources ==
Galen's fine-grain control of queue pair resources to the OpenIB BTL
(thanks to Gleb for fixing broken code and providing additional
functionality, Pasha for finding broken code, and Jeff for doing all
the svn work and regression testing).
Prior to this commit, the OpenIB BTL created two queue pairs: one for
eager size fragments and one for max send size fragments. When the
use of the shared receive queue (SRQ) was specified (via "-mca
btl_openib_use_srq 1"), these QPs would use a shared receive queue for
receive buffers instead of the default per-peer (PP) receive queues
and buffers. One consequence of this design is that receive buffer
utilization (the size of the data received as a percentage of the
receive buffer used for the data) was quite poor for a number of
applications.
The new design allows multiple QPs to be specified at runtime. Each
QP can be setup to use PP or SRQ receive buffers as well as giving
fine-grained control over receive buffer size, number of receive
buffers to post, when to replenish the receive queue (low water mark)
and for SRQ QPs, the number of outstanding sends can also be
specified. The following is an example of the syntax to describe QPs
to the OpenIB BTL using the new MCA parameter btl_openib_receive_queues:
{{{
-mca btl_openib_receive_queues \
"P,128,16,4;S,1024,256,128,32;S,4096,256,128,32;S,65536,256,128,32"
}}}
Each QP description is delimited by ";" (semicolon) with individual
fields of the QP description delimited by "," (comma). The above
example therefore describes 4 QPs.
The first QP is:
P,128,16,4
Meaning: per-peer receive buffer QPs are indicated by a starting field
of "P"; the first QP (shown above) is therefore a per-peer based QP.
The second field indicates the size of the receive buffer in bytes
(128 bytes). The third field indicates the number of receive buffers
to allocate to the QP (16). The fourth field is the low-water mark:
when the number of posted receive buffers drops to this value (4),
the BTL reposts receive buffers to the QP.
The second QP is:
S,1024,256,128,32
Shared receive queue based QPs are indicated by a starting field of
"S"; the second QP (shown above) is therefore a shared receive queue
based QP. The second, third and fourth fields are the same as in the
per-peer based QP. The fifth field is the number of outstanding sends
that are allowed at a given time on the QP (32). This provides a
"good enough" mechanism of flow control for some regular communication
patterns.
QPs MUST be specified in ascending receive buffer size order. This
requirement may be removed prior to the 1.3 release.
This commit was SVN r15474.
2007-07-18 05:15:59 +04:00
|
|
|
frag->qp_idx = qp;
|
|
|
|
endpoint->qps[qp].credit_frag = frag;
|
2007-11-28 10:11:14 +03:00
|
|
|
/* set those once and forever */
|
2007-11-28 10:13:34 +03:00
|
|
|
to_base_frag(frag)->base.order = mca_btl_openib_component.credits_qp;
|
2007-11-28 10:11:14 +03:00
|
|
|
to_base_frag(frag)->base.des_cbfunc = mca_btl_openib_endpoint_credits;
|
|
|
|
to_base_frag(frag)->base.des_cbdata = NULL;
|
|
|
|
to_com_frag(frag)->endpoint = endpoint;
|
|
|
|
frag->hdr->tag = MCA_BTL_TAG_BTL;
|
|
|
|
to_base_frag(frag)->segment.seg_len =
|
|
|
|
sizeof(mca_btl_openib_rdma_credits_header_t);
|
2007-07-18 05:15:59 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
assert(frag->qp_idx == qp);
|
2007-11-28 10:13:34 +03:00
|
|
|
credits_hdr = (mca_btl_openib_rdma_credits_header_t*)
|
2007-11-28 10:11:14 +03:00
|
|
|
to_base_frag(frag)->segment.seg_addr.pval;
|
2007-11-28 10:12:44 +03:00
|
|
|
if(acquire_eager_rdma_send_credit(endpoint) == OMPI_SUCCESS) {
|
|
|
|
do_rdma = true;
|
|
|
|
} else {
|
2007-07-24 19:19:51 +04:00
|
|
|
if(OPAL_THREAD_ADD32(&endpoint->qps[qp].u.pp_qp.cm_sent, 1) >
|
|
|
|
(mca_btl_openib_component.qp_infos[qp].u.pp_qp.rd_rsv - 1)) {
|
|
|
|
OPAL_THREAD_ADD32(&endpoint->qps[qp].u.pp_qp.cm_sent, -1);
|
2007-08-27 15:34:59 +04:00
|
|
|
BTL_OPENIB_CREDITS_SEND_UNLOCK(endpoint, qp);
|
2007-07-24 19:19:51 +04:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2007-08-01 16:15:43 +04:00
|
|
|
GET_CREDITS(endpoint->qps[qp].u.pp_qp.rd_credits, frag->hdr->credits);
|
|
|
|
|
2007-11-28 10:12:44 +03:00
|
|
|
frag->hdr->cm_seen = 0;
|
2007-11-28 10:13:34 +03:00
|
|
|
GET_CREDITS(endpoint->qps[qp].u.pp_qp.cm_return, cm_return);
|
|
|
|
if(cm_return > 255) {
|
|
|
|
frag->hdr->cm_seen = 255;
|
|
|
|
cm_return -= 255;
|
|
|
|
OPAL_THREAD_ADD32(&endpoint->qps[qp].u.pp_qp.cm_return, cm_return);
|
|
|
|
} else {
|
|
|
|
frag->hdr->cm_seen = cm_return;
|
2007-07-24 19:19:51 +04:00
|
|
|
}
|
|
|
|
|
2007-11-28 10:12:44 +03:00
|
|
|
GET_CREDITS(endpoint->eager_rdma_local.credits, credits_hdr->rdma_credits);
|
|
|
|
credits_hdr->qpn = qp;
|
2006-09-05 20:02:09 +04:00
|
|
|
credits_hdr->control.type = MCA_BTL_OPENIB_CONTROL_CREDITS;
|
2006-01-13 02:42:44 +03:00
|
|
|
|
2007-03-14 17:38:56 +03:00
|
|
|
if(endpoint->nbo)
|
2007-11-28 10:12:44 +03:00
|
|
|
BTL_OPENIB_RDMA_CREDITS_HEADER_HTON(*credits_hdr);
|
|
|
|
|
2008-01-09 13:26:21 +03:00
|
|
|
if((rc = post_send(endpoint, frag, do_rdma)) == 0)
|
2007-11-28 10:11:14 +03:00
|
|
|
return;
|
|
|
|
|
|
|
|
if(endpoint->nbo) {
|
|
|
|
BTL_OPENIB_HEADER_NTOH(*frag->hdr);
|
|
|
|
BTL_OPENIB_RDMA_CREDITS_HEADER_NTOH(*credits_hdr);
|
2005-11-10 23:15:02 +03:00
|
|
|
}
|
2007-11-28 10:11:14 +03:00
|
|
|
BTL_OPENIB_CREDITS_SEND_UNLOCK(endpoint, qp);
|
|
|
|
OPAL_THREAD_ADD32(&endpoint->qps[qp].u.pp_qp.rd_credits,
|
|
|
|
frag->hdr->credits);
|
|
|
|
OPAL_THREAD_ADD32(&endpoint->eager_rdma_local.credits,
|
|
|
|
credits_hdr->rdma_credits);
|
|
|
|
if(do_rdma)
|
|
|
|
OPAL_THREAD_ADD32(&endpoint->eager_rdma_remote.tokens, 1);
|
|
|
|
else
|
|
|
|
OPAL_THREAD_ADD32(&endpoint->qps[qp].u.pp_qp.cm_sent, -1);
|
|
|
|
|
2008-01-09 13:26:21 +03:00
|
|
|
BTL_ERROR(("error posting send request errno %d says %s", rc,
|
2007-11-28 10:11:14 +03:00
|
|
|
strerror(errno)));
|
2005-11-10 23:15:02 +03:00
|
|
|
}
|
|
|
|
|
2007-07-18 05:15:59 +04:00
|
|
|
/* local callback function for completion of eager rdma connect */
|
|
|
|
static void mca_btl_openib_endpoint_eager_rdma_connect_cb(
|
2006-03-26 12:30:50 +04:00
|
|
|
mca_btl_base_module_t* btl,
|
|
|
|
struct mca_btl_base_endpoint_t* endpoint,
|
|
|
|
struct mca_btl_base_descriptor_t* descriptor,
|
|
|
|
int status)
|
|
|
|
{
|
2008-01-29 17:35:33 +03:00
|
|
|
mca_btl_openib_hca_t *hca = endpoint->endpoint_btl->hca;
|
|
|
|
mca_btl_openib_module_t* openib_btl =
|
|
|
|
(mca_btl_openib_module_t*)btl;
|
|
|
|
OPAL_THREAD_ADD32(&hca->non_eager_rdma_endpoints, -1);
|
|
|
|
assert(hca->non_eager_rdma_endpoints >= 0);
|
|
|
|
OPAL_THREAD_ADD32(&openib_btl->eager_rdma_channels, 1);
|
2007-11-28 10:11:14 +03:00
|
|
|
MCA_BTL_IB_FRAG_RETURN(descriptor);
|
2006-03-26 12:30:50 +04:00
|
|
|
}
|
|
|
|
|
2007-11-28 10:11:14 +03:00
|
|
|
/* send the eager rdma connect message to the remote endpoint */
|
2006-03-26 12:30:50 +04:00
|
|
|
static int mca_btl_openib_endpoint_send_eager_rdma(
|
|
|
|
mca_btl_base_endpoint_t* endpoint)
|
|
|
|
{
|
|
|
|
mca_btl_openib_module_t* openib_btl = endpoint->endpoint_btl;
|
|
|
|
mca_btl_openib_eager_rdma_header_t *rdma_hdr;
|
2007-11-28 10:11:14 +03:00
|
|
|
mca_btl_openib_send_control_frag_t* frag;
|
2006-03-26 12:30:50 +04:00
|
|
|
int rc;
|
|
|
|
|
2008-01-09 13:26:21 +03:00
|
|
|
frag = alloc_credit_frag(openib_btl);
|
2006-03-26 12:30:50 +04:00
|
|
|
if(NULL == frag) {
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
2007-11-28 10:11:14 +03:00
|
|
|
to_base_frag(frag)->base.des_cbfunc =
|
|
|
|
mca_btl_openib_endpoint_eager_rdma_connect_cb;
|
|
|
|
to_base_frag(frag)->base.des_cbdata = NULL;
|
|
|
|
to_base_frag(frag)->base.des_flags |= MCA_BTL_DES_FLAGS_PRIORITY;
|
2007-11-28 10:13:34 +03:00
|
|
|
to_base_frag(frag)->base.order = mca_btl_openib_component.credits_qp;
|
2007-11-28 10:11:14 +03:00
|
|
|
to_base_frag(frag)->segment.seg_len =
|
|
|
|
sizeof(mca_btl_openib_eager_rdma_header_t);
|
|
|
|
to_com_frag(frag)->endpoint = endpoint;
|
2006-03-26 12:30:50 +04:00
|
|
|
|
|
|
|
frag->hdr->tag = MCA_BTL_TAG_BTL;
|
2007-11-28 10:11:14 +03:00
|
|
|
rdma_hdr = (mca_btl_openib_eager_rdma_header_t*)to_base_frag(frag)->segment.seg_addr.pval;
|
2006-03-26 12:30:50 +04:00
|
|
|
rdma_hdr->control.type = MCA_BTL_OPENIB_CONTROL_RDMA;
|
|
|
|
rdma_hdr->rkey = endpoint->eager_rdma_local.reg->mr->rkey;
|
2007-01-07 04:48:57 +03:00
|
|
|
rdma_hdr->rdma_start.lval = ompi_ptr_ptol(endpoint->eager_rdma_local.base.pval);
|
2008-01-21 15:11:18 +03:00
|
|
|
BTL_VERBOSE(("sending rkey %lu, rdma_start.lval %llu, pval %p, ival %u type %d and sizeof(rdma_hdr) %d\n",
|
|
|
|
rdma_hdr->rkey,
|
|
|
|
rdma_hdr->rdma_start.lval,
|
2007-01-13 02:14:45 +03:00
|
|
|
rdma_hdr->rdma_start.pval,
|
|
|
|
rdma_hdr->rdma_start.ival,
|
2008-01-21 15:11:18 +03:00
|
|
|
rdma_hdr->control.type,
|
2007-01-13 02:14:45 +03:00
|
|
|
sizeof(mca_btl_openib_eager_rdma_header_t)
|
|
|
|
));
|
2008-01-21 15:11:18 +03:00
|
|
|
|
2007-01-13 02:14:45 +03:00
|
|
|
if(endpoint->nbo) {
|
|
|
|
BTL_OPENIB_EAGER_RDMA_CONTROL_HEADER_HTON((*rdma_hdr));
|
2008-01-21 15:11:18 +03:00
|
|
|
|
|
|
|
BTL_VERBOSE(("after HTON: sending rkey %lu, rdma_start.lval %llu, pval %p, ival %u\n",
|
|
|
|
rdma_hdr->rkey,
|
2007-01-13 02:14:45 +03:00
|
|
|
rdma_hdr->rdma_start.lval,
|
|
|
|
rdma_hdr->rdma_start.pval,
|
|
|
|
rdma_hdr->rdma_start.ival
|
|
|
|
));
|
|
|
|
}
|
2008-01-21 15:11:18 +03:00
|
|
|
rc = mca_btl_openib_endpoint_send(endpoint, frag);
|
2007-12-09 17:14:11 +03:00
|
|
|
if (OMPI_SUCCESS == rc || OMPI_ERR_RESOURCE_BUSY == rc)
|
|
|
|
return OMPI_SUCCESS;
|
2008-01-21 15:11:18 +03:00
|
|
|
|
2007-12-09 17:14:11 +03:00
|
|
|
MCA_BTL_IB_FRAG_RETURN(frag);
|
|
|
|
BTL_ERROR(("Error sending RDMA buffer: %s", strerror(errno)));
|
|
|
|
return rc;
|
2006-03-26 12:30:50 +04:00
|
|
|
}
|
2006-12-19 11:34:48 +03:00
|
|
|
|
2007-07-18 05:15:59 +04:00
|
|
|
/* Set up eager RDMA buffers and notify the remote endpoint */
|
2006-03-26 12:30:50 +04:00
|
|
|
void mca_btl_openib_endpoint_connect_eager_rdma(
|
|
|
|
mca_btl_openib_endpoint_t* endpoint)
|
|
|
|
{
|
|
|
|
mca_btl_openib_module_t* openib_btl = endpoint->endpoint_btl;
|
|
|
|
char *buf;
|
2007-07-18 05:15:59 +04:00
|
|
|
mca_btl_openib_recv_frag_t *headers_buf;
|
2007-08-27 15:37:01 +04:00
|
|
|
int i;
|
2006-03-26 12:30:50 +04:00
|
|
|
|
2006-10-31 12:54:52 +03:00
|
|
|
/* Set local rdma pointer to 1 temporarily so other threads will not try
|
|
|
|
* to enter the function */
|
2007-01-05 01:07:37 +03:00
|
|
|
if(!opal_atomic_cmpset_ptr(&endpoint->eager_rdma_local.base.pval, NULL,
|
2006-10-31 12:54:52 +03:00
|
|
|
(void*)1))
|
|
|
|
return;
|
2006-03-26 12:30:50 +04:00
|
|
|
|
2007-07-18 05:15:59 +04:00
|
|
|
headers_buf = (mca_btl_openib_recv_frag_t*)
|
|
|
|
malloc(sizeof(mca_btl_openib_recv_frag_t) *
|
2007-03-05 17:17:50 +03:00
|
|
|
mca_btl_openib_component.eager_rdma_num);
|
|
|
|
|
|
|
|
if(NULL == headers_buf)
|
|
|
|
goto unlock_rdma_local;
|
|
|
|
|
2006-03-26 12:30:50 +04:00
|
|
|
buf = openib_btl->super.btl_mpool->mpool_alloc(openib_btl->super.btl_mpool,
|
2006-12-19 11:34:48 +03:00
|
|
|
openib_btl->eager_rdma_frag_size *
|
|
|
|
mca_btl_openib_component.eager_rdma_num,
|
|
|
|
mca_btl_openib_component.buffer_alignment,
|
2006-12-17 15:26:41 +03:00
|
|
|
MCA_MPOOL_FLAGS_CACHE_BYPASS,
|
2006-03-26 12:30:50 +04:00
|
|
|
(mca_mpool_base_registration_t**)&endpoint->eager_rdma_local.reg);
|
|
|
|
|
|
|
|
if(!buf)
|
2007-03-05 17:17:50 +03:00
|
|
|
goto free_headers_buf;
|
2006-03-26 12:30:50 +04:00
|
|
|
|
2006-10-31 10:33:35 +03:00
|
|
|
buf = buf + openib_btl->eager_rdma_frag_size -
|
|
|
|
sizeof(mca_btl_openib_footer_t) - openib_btl->super.btl_eager_limit -
|
2007-03-05 17:17:50 +03:00
|
|
|
sizeof(mca_btl_openib_header_t);
|
2006-08-28 15:03:56 +04:00
|
|
|
|
2006-03-26 12:30:50 +04:00
|
|
|
for(i = 0; i < mca_btl_openib_component.eager_rdma_num; i++) {
|
2007-03-05 17:17:50 +03:00
|
|
|
ompi_free_list_item_t *item;
|
2007-07-18 05:15:59 +04:00
|
|
|
mca_btl_openib_recv_frag_t * frag;
|
|
|
|
mca_btl_openib_frag_init_data_t init_data;
|
|
|
|
|
2007-03-05 17:17:50 +03:00
|
|
|
item = (ompi_free_list_item_t*)&headers_buf[i];
|
|
|
|
item->registration = (void*)endpoint->eager_rdma_local.reg;
|
|
|
|
item->ptr = buf + i * openib_btl->eager_rdma_frag_size;
|
This commit brings in two major things:
1. Galen's fine-grain control of queue pair resources in the openib
BTL.
1. Pasha's new implementation of asychronous HCA event handling.
Pasha's new implementation doesn't take much explanation, but the new
"multifrag" stuff does.
Note that "svn merge" was not used to bring this new code from the
/tmp/ib_multifrag branch -- something Bad happened in the periodic
trunk pulls on that branch making an actual merge back to the trunk
effectively impossible (i.e., lots and lots of arbitrary conflicts and
artifical changes). :-(
== Fine-grain control of queue pair resources ==
Galen's fine-grain control of queue pair resources to the OpenIB BTL
(thanks to Gleb for fixing broken code and providing additional
functionality, Pasha for finding broken code, and Jeff for doing all
the svn work and regression testing).
Prior to this commit, the OpenIB BTL created two queue pairs: one for
eager size fragments and one for max send size fragments. When the
use of the shared receive queue (SRQ) was specified (via "-mca
btl_openib_use_srq 1"), these QPs would use a shared receive queue for
receive buffers instead of the default per-peer (PP) receive queues
and buffers. One consequence of this design is that receive buffer
utilization (the size of the data received as a percentage of the
receive buffer used for the data) was quite poor for a number of
applications.
The new design allows multiple QPs to be specified at runtime. Each
QP can be setup to use PP or SRQ receive buffers as well as giving
fine-grained control over receive buffer size, number of receive
buffers to post, when to replenish the receive queue (low water mark)
and for SRQ QPs, the number of outstanding sends can also be
specified. The following is an example of the syntax to describe QPs
to the OpenIB BTL using the new MCA parameter btl_openib_receive_queues:
{{{
-mca btl_openib_receive_queues \
"P,128,16,4;S,1024,256,128,32;S,4096,256,128,32;S,65536,256,128,32"
}}}
Each QP description is delimited by ";" (semicolon) with individual
fields of the QP description delimited by "," (comma). The above
example therefore describes 4 QPs.
The first QP is:
P,128,16,4
Meaning: per-peer receive buffer QPs are indicated by a starting field
of "P"; the first QP (shown above) is therefore a per-peer based QP.
The second field indicates the size of the receive buffer in bytes
(128 bytes). The third field indicates the number of receive buffers
to allocate to the QP (16). The fourth field indicates the low
watermark at which the BTL will repost receive buffers to the QP (4).
The second QP is:
S,1024,256,128,32
Shared receive queue based QPs are indicated by a starting field of
"S"; the second QP (shown above) is therefore a shared receive queue
based QP. The second, third and fourth fields are the same as in the
per-peer based QP. The fifth field is the number of outstanding sends
that are allowed at a given time on the QP (32). This provides a
"good enough" mechanism of flow control for some regular communication
patterns.
QPs MUST be specified in ascending receive buffer size order. This
requirement may be removed prior to the 1.3 release.
This commit was SVN r15474.
        OBJ_CONSTRUCT(item, mca_btl_openib_recv_frag_t);
        init_data.order = MCA_BTL_NO_ORDER;
        init_data.list = NULL;

        mca_btl_openib_frag_init(item, &init_data);
        frag = to_recv_frag(item);
        to_base_frag(frag)->type = MCA_BTL_OPENIB_FRAG_EAGER_RDMA;
        to_com_frag(frag)->endpoint = endpoint;
        /* the footer lives just past the eager payload in each buffer */
        frag->ftr = (mca_btl_openib_footer_t*)
            ((char*)to_base_frag(frag)->segment.seg_addr.pval +
             mca_btl_openib_component.eager_limit);
        MCA_BTL_OPENIB_RDMA_MAKE_REMOTE(frag->ftr);
    }

    endpoint->eager_rdma_local.frags = headers_buf;
    /* rd_win is a quarter of the number of eager RDMA buffers,
     * but never less than 1 */
    endpoint->eager_rdma_local.rd_win =
        mca_btl_openib_component.eager_rdma_num >> 2;
    endpoint->eager_rdma_local.rd_win =
        endpoint->eager_rdma_local.rd_win ? endpoint->eager_rdma_local.rd_win : 1;

    /* set local rdma pointer to real value */
    opal_atomic_cmpset_ptr(&endpoint->eager_rdma_local.base.pval, (void*)1,
            buf);

    if(mca_btl_openib_endpoint_send_eager_rdma(endpoint) == OMPI_SUCCESS) {
        mca_btl_openib_hca_t *hca = endpoint->endpoint_btl->hca;
        mca_btl_openib_endpoint_t **p;

        OBJ_RETAIN(endpoint);
        assert(((opal_object_t*)endpoint)->obj_reference_count == 2);
        /* publish the endpoint into the first free slot of the HCA's
         * eager RDMA buffer array with an atomic compare-and-swap */
        do {
            p = &hca->eager_rdma_buffers[hca->eager_rdma_buffers_count];
        } while(!opal_atomic_cmpset_ptr(p, NULL, endpoint));

        /* from this point progress function starts to poll new buffer */
        OPAL_THREAD_ADD32(&hca->eager_rdma_buffers_count, 1);
        return;
    }

    openib_btl->super.btl_mpool->mpool_free(openib_btl->super.btl_mpool,
            buf, (mca_mpool_base_registration_t*)endpoint->eager_rdma_local.reg);
free_headers_buf:
    free(headers_buf);
unlock_rdma_local:
    /* set local rdma pointer back to zero. Will retry later */
    opal_atomic_cmpset_ptr(&endpoint->eager_rdma_local.base.pval,
            endpoint->eager_rdma_local.base.pval, NULL);
    endpoint->eager_rdma_local.frags = NULL;
}