and ompi_free_list_init_new() and ompi_free_list_init_ex_new() were added.
The next step will be to start converting from ompi_free_list_init() to
ompi_free_list_init_new(), then remove ompi_free_list_init() and
rename ompi_free_list_init_new() back to ompi_free_list_init(). The merge
of the branch with the trunk was so substantial that it is far easier to
re-implement the changes on the trunk than to try to fix the bugs
the merge brought in ...
This commit was SVN r16630.
to fl_frag_size, fl_alignment is renamed to fl_frag_alignment, and
fl_payload_buffer_size and fl_payload_buffer_alignment are added.
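A sketch of how those sizing fields might sit in the list structure; only
the four field names above come from this change, and the surrounding
layout is assumed:
{{{
/* hedged sketch -- only the four sizing field names are from this
   change; the rest of the layout is assumed */
struct ompi_free_list_t {
    opal_list_t super;                   /* assumed parent class */
    size_t fl_frag_size;                 /* renamed: total fragment size */
    size_t fl_frag_alignment;            /* renamed from fl_alignment */
    size_t fl_payload_buffer_size;       /* new: payload buffer size */
    size_t fl_payload_buffer_alignment;  /* new: payload buffer alignment */
    /* ... remaining bookkeeping fields ... */
};
}}}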
This commit was SVN r16629.
fact a free_list_item, so instead of having a separate struct, use a
typedef to make them equivalent. Modify the parallel debugger support
to allow it access to the internal types even when we have an
optimized build.
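A minimal sketch of the idea, assuming the two item types involved are the
opal and ompi free list items:
{{{
/* the two item types are laid out identically, so replace the wrapper
   struct with a typedef; debuggers then see a single real definition */
typedef opal_free_list_item_t ompi_free_list_item_t;
}}}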
This commit was SVN r16567.
1. Galen's fine-grain control of queue pair resources in the openib
BTL.
1. Pasha's new implementation of asynchronous HCA event handling.
Pasha's new implementation doesn't take much explanation, but the new
"multifrag" stuff does.
Note that "svn merge" was not used to bring this new code from the
/tmp/ib_multifrag branch -- something Bad happened in the periodic
trunk pulls on that branch making an actual merge back to the trunk
effectively impossible (i.e., lots and lots of arbitrary conflicts and
artificial changes). :-(
== Fine-grain control of queue pair resources ==
Galen's fine-grain control of queue pair resources is added to the
OpenIB BTL (thanks to Gleb for fixing broken code and providing
additional functionality, to Pasha for finding broken code, and to Jeff
for doing all the svn work and regression testing).
Prior to this commit, the OpenIB BTL created two queue pairs: one for
eager size fragments and one for max send size fragments. When the
use of the shared receive queue (SRQ) was specified (via "-mca
btl_openib_use_srq 1"), these QPs would use a shared receive queue for
receive buffers instead of the default per-peer (PP) receive queues
and buffers. One consequence of this design is that receive buffer
utilization (the size of the data received as a percentage of the
receive buffer used for the data) was quite poor for a number of
applications.
The new design allows multiple QPs to be specified at runtime. Each
QP can be set up to use PP or SRQ receive buffers, with fine-grained
control over the receive buffer size, the number of receive buffers to
post, and when to replenish the receive queue (low water mark); for SRQ
QPs, the number of outstanding sends can also be specified. The
following is an example of the syntax to describe QPs
to the OpenIB BTL using the new MCA parameter btl_openib_receive_queues:
{{{
-mca btl_openib_receive_queues \
"P,128,16,4;S,1024,256,128,32;S,4096,256,128,32;S,65536,256,128,32"
}}}
Each QP description is delimited by ";" (semicolon) with individual
fields of the QP description delimited by "," (comma). The above
example therefore describes 4 QPs.
The first QP is:
P,128,16,4
Meaning: per-peer receive buffer QPs are indicated by a starting field
of "P"; the first QP (shown above) is therefore a per-peer based QP.
The second field indicates the size of the receive buffer in bytes
(128 bytes). The third field indicates the number of receive buffers
to allocate to the QP (16). The fourth field indicates the low
watermark for receive buffers; when it is reached, the BTL reposts
receive buffers to the QP (4).
The second QP is:
S,1024,256,128,32
Shared receive queue based QPs are indicated by a starting field of
"S"; the second QP (shown above) is therefore a shared receive queue
based QP. The second, third and fourth fields are the same as in the
per-peer based QP. The fifth field is the number of outstanding sends
that are allowed at a given time on the QP (32). This provides a
"good enough" mechanism of flow control for some regular communication
patterns.
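To make the field layout concrete, here is a small standalone sketch (not
the actual BTL parser) that splits a receive_queues value on ";" and ","
exactly as described above:
{{{
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* one per-peer QP and one SRQ QP, in the syntax described above */
    char spec[] = "P,128,16,4;S,1024,256,128,32";
    char *qp, *save_qp;

    for (qp = strtok_r(spec, ";", &save_qp); NULL != qp;
         qp = strtok_r(NULL, ";", &save_qp)) {
        char *field, *save_field;
        int i = 0;
        for (field = strtok_r(qp, ",", &save_field); NULL != field;
             field = strtok_r(NULL, ",", &save_field), ++i) {
            if (0 == i) {
                printf("QP type %s:\n", field);   /* "P" or "S" */
            } else {
                printf("  field %d = %ld\n", i, strtol(field, NULL, 10));
            }
        }
    }
    return 0;
}
}}}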
QPs MUST be specified in ascending receive buffer size order. This
requirement may be removed prior to the 1.3 release.
This commit was SVN r15474.
allocated from mpool memory (which is registered memory for RDMA
transports). This is not a problem for small jobs, but with a large
number of ranks the amount of wasted memory is significant.
This commit was SVN r13921.
Just follow inc_num and you will understand. Now _resize will grow the list to match
the required number of elements as described in the comment in the .h file.
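A hedged sketch of those semantics; ompi_free_list_grow() is the existing
grow routine, but the field names here are assumptions:
{{{
/* keep growing in fl_num_per_alloc chunks until the list holds at
   least num_elements items (field names assumed) */
static int free_list_resize_sketch(ompi_free_list_t *fl,
                                   size_t num_elements)
{
    int rc = OMPI_SUCCESS;
    while (fl->fl_num_allocated < num_elements && OMPI_SUCCESS == rc) {
        rc = ompi_free_list_grow(fl, fl->fl_num_per_alloc);
    }
    return rc;
}
}}}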
This commit was SVN r12074.
constrained:
* Make sure we always have a number of eager fragments available
that scales with the number of processes communicating with
a given proc over shared memory
* Use FREE_LIST_GET instead of FREE_LIST_WAIT to return an
error to the PML when resource exhaustion occurs
* Don't dereference the frag during alloc unless we're sure
it's not NULL (see the sketch after this list)
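A minimal sketch of the GET-with-NULL-check pattern from the last two
items; the fragment type and list names are placeholders:
{{{
/* illustrative allocation path; mca_btl_sm_frag_t and frag_list are
   placeholders for the real sm BTL names */
static mca_btl_sm_frag_t *alloc_eager_frag(ompi_free_list_t *frag_list)
{
    ompi_free_list_item_t *item;
    int rc;

    /* GET returns immediately instead of blocking like WAIT; on
       exhaustion item comes back NULL */
    OMPI_FREE_LIST_GET(frag_list, item, rc);
    if (NULL == item) {
        return NULL;  /* caller reports OMPI_ERR_OUT_OF_RESOURCE to the PML */
    }
    return (mca_btl_sm_frag_t*) item;  /* safe to dereference now */
}
}}}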
Reviewed by: Galen
Refs trac:413
This commit was SVN r12053.
The following Trac tickets were found above:
Ticket 413 --> https://svn.open-mpi.org/trac/ompi/ticket/413
then use broadcast in order to wake them up. If there is only one, use
signal (which is supposed to be faster), and of course if there are no
threads waiting, just continue.
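A hedged sketch of that wake-up policy; opal_condition_signal() and
opal_condition_broadcast() are the OPAL condition calls, while the
waiter-count field name is an assumption:
{{{
/* wake-up policy after returning items to the list (fl_num_waiting is
   an assumed field name for the number of waiting threads) */
static void wake_waiters(ompi_free_list_t *fl)
{
    if (fl->fl_num_waiting > 1) {
        opal_condition_broadcast(&fl->fl_condition);
    } else if (1 == fl->fl_num_waiting) {
        opal_condition_signal(&fl->fl_condition);
    }
    /* no threads waiting: just continue */
}
}}}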
This commit was SVN r12049.
free list. It uses the size attached to the free list and the internal
memory segments to find all the items allocated by this free list.
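A sketch of that traversal under assumed names; the segment accessors and
the item-size field are hypothetical stand-ins:
{{{
/* walk every memory segment the list has allocated and visit each item
   in it, stepping by the item size attached to the list */
static void visit_all_items(ompi_free_list_t *fl,
                            void (*visit)(ompi_free_list_item_t *))
{
    size_t item_size = fl->fl_elem_size;        /* assumed field name */
    opal_list_item_t *seg;

    for (seg = opal_list_get_first(&fl->fl_allocations);
         seg != opal_list_get_end(&fl->fl_allocations);
         seg = opal_list_get_next(seg)) {
        unsigned char *ptr = segment_base(seg);      /* hypothetical */
        size_t n = segment_length(seg) / item_size;  /* hypothetical */
        size_t i;
        for (i = 0; i < n; ++i, ptr += item_size) {
            visit((ompi_free_list_item_t *) ptr);
        }
    }
}
}}}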
This commit was SVN r9669.
The free list now uses atomic operations. I didn't want to completely
change the behavior, so we still use a mutex for the extreme cases (like
no more available items and we cannot allocate more). I tested it for a
while in a non-multi-threaded environment, but not enough on a
multi-threaded build.
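A hedged sketch of the fast/slow path split; opal_atomic_lifo_pop() and
the OPAL_THREAD_LOCK macros are OPAL primitives, and the assumption here
is that the free list now sits on top of an atomic LIFO:
{{{
/* fast path pops lock-free; the extreme case falls back to the mutex */
static ompi_free_list_item_t *lifo_get_sketch(ompi_free_list_t *fl)
{
    ompi_free_list_item_t *item =
        (ompi_free_list_item_t *) opal_atomic_lifo_pop(&fl->super);
    if (NULL == item) {
        /* no items left: take the mutex, then allocate more items or
           wait on fl_condition, as before this change */
        OPAL_THREAD_LOCK(&fl->fl_lock);
        /* ... grow the list or wait ... */
        OPAL_THREAD_UNLOCK(&fl->fl_lock);
    }
    return item;
}
}}}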
This commit was SVN r9623.
- move files out of toplevel include/ and etc/, moving them into the
sub-projects
- rather than including config headers with <project>/include,
have them as <project>
- require all headers to be included with a project prefix, with
the exception of the config headers ({opal,orte,ompi}_config.h,
mpi.h, and mpif.h); see the example below
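For example (the old-style paths are illustrative):
{{{
/* old style: headers picked up from the toplevel include/ */
#include "include/constants.h"

/* new style: every header carries its project prefix */
#include "ompi/constants.h"
#include "opal/class/opal_list.h"

/* the exceptions keep their short names */
#include "ompi_config.h"
#include "mpi.h"
}}}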
This commit was SVN r8985.
- move mpool and allocator frameworks back to ompi (from opal)
- specialize the ompi_free_list class to use an mpool instance
- un-specialize opal_free_list to *not* use mpool; just use malloc/free
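A hedged sketch of the resulting difference in the grow path, folded into
one function purely for illustration; the mpool alloc signature follows
the historical mca_mpool module API but should be treated as an
assumption:
{{{
/* where the payload memory comes from after this change */
static void *grow_payload_sketch(ompi_free_list_t *fl,
                                 size_t size, size_t align)
{
    if (NULL == fl->fl_mpool) {
        /* opal_free_list behavior: plain heap memory */
        return malloc(size);
    }
    /* ompi_free_list behavior: mpool memory, which is registered
       memory for RDMA transports */
    mca_mpool_base_registration_t *reg = NULL;
    return fl->fl_mpool->mpool_alloc(fl->fl_mpool, size, align, 0, &reg);
}
}}}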
This commit was SVN r6292.