openmpi/opal/mca/common/ofacm/help-mpi-common-ofacm-base.txt
Ralph Castain 552c9ca5a0 George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT:    Open up our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down into OPAL

All the components required for inter-process communication are currently deeply integrated into the OMPI layer. Several groups/institutions have expressed interest in a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, accessible to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purposes.

UTK, with support from Sandia, developed a version of Open MPI where the entire communication infrastructure has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with a few exceptions (mainly BTLs that I have no way of compiling/testing). Thus, the completion of this RFC is tied to being able to complete this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
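As a point of reference (a sketch; the exact output depends on how a given build was configured), the BTL components present in an installation can be listed with ompi_info:

    # Each built BTL component appears on a line of the form
    # "MCA btl: <name> (MCA vX, API vY, Component vZ)".
    ompi_info | grep "MCA btl"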

This commit was SVN r32317.
2014-07-26 00:47:28 +00:00


# -*- text -*-
#
# Copyright (c) 2008 Cisco Systems, Inc. All rights reserved.
# Copyright (c) 2009-2012 Oak Ridge National Laboratory. All rights reserved.
# Copyright (c) 2009-2012 Mellanox Technologies. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow
#
# $HEADER$
#
# This is the US/English help file for Open MPI's OpenFabrics IB CPC
# support.
#
[no cpcs for port]
No OpenFabrics connection schemes (CPCs) reported that they could be
used on a specific port.  As such, the openib BTL (OpenFabrics
support) will be disabled for this port.

  Local host:     %s
  Local device:   %s
  CPCs attempted: %s
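#
# For example, a specific connection scheme can be requested on the
# mpirun command line (a sketch: the CPC name "rdmacm" and the
# application name "my_app" are illustrative, and the CPCs actually
# available vary by device, driver, and build):
#
#   mpirun --mca btl openib,self \
#          --mca btl_openib_cpc_include rdmacm \
#          ./my_app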
#
[cpc name not found]
An invalid CPC name was specified via the btl_openib_cpc_%s MCA
parameter.

  Local host:               %s
  btl_openib_cpc_%s value:  %s
  Invalid name:             %s
  All possible valid names: %s
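#
# The names that a given installation considers valid can be inspected
# with ompi_info (a sketch; the --level flag assumes an Open MPI 1.7
# or later ompi_info):
#
#   ompi_info --param btl openib --level 9 | grep cpc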
#
[inline truncated]
WARNING: The btl_openib_max_inline_data MCA parameter was used to
specify how much inline data should be used, but a device reduced this
value. This is not an error; it simply means that your run will use
a smaller inline data value than was requested.

  Local host:           %s
  Requested value:      %d
  Value used by device: %d
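#
# For example, the requested inline size can be set explicitly on the
# mpirun command line (a sketch: the value 64 and the application name
# "my_app" are illustrative; the device may still clamp the value as
# described above):
#
#   mpirun --mca btl_openib_max_inline_data 64 ./my_app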