openmpi/opal/mca/mpool/base/help-mpool-base.txt
Ralph Castain 552c9ca5a0 George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT:    Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down into OPAL

All the components required for inter-process communication are currently deeply integrated into the OMPI layer. Several groups/institutions have expressed interest in a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, accessible to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purposes.

UTK, with support from Sandia, developed a version of Open MPI where the entire communication infrastructure has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with a few exceptions (mainly BTLs that I have no way of compiling/testing). Thus, the completion of this RFC is tied to being able to complete this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.

This commit was SVN r32317.
2014-07-26 00:47:28 +00:00


# -*- text -*-
#
# Copyright (c) 2007-2009 Cisco Systems, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow
#
# $HEADER$
#
[all mem leaks]
The following memory locations were allocated via MPI_ALLOC_MEM but
not freed via MPI_FREE_MEM before invoking MPI_FINALIZE:
Process ID: %s
Hostname: %s
PID: %d
%s
#
[some mem leaks]
The following memory locations were allocated via MPI_ALLOC_MEM but
not freed via MPI_FREE_MEM before invoking MPI_FINALIZE:
Process ID: %s
Hostname: %s
PID: %d
%s
%d additional leak%s recorded but %s not displayed here. Set the MCA
parameter mpi_show_mpi_alloc_mem_leaks to a larger number to see that
many leaks, or set it to a negative number to see all leaks.
#
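# Illustrative note (not part of the original help text): the two
# messages above fire when MPI_Alloc_mem allocations are not matched
# by MPI_Free_mem before MPI_Finalize. A minimal C sketch of the
# correct pairing (the 4096-byte size is an arbitrary example):
#
#   void *buf;
#   MPI_Alloc_mem(4096, MPI_INFO_NULL, &buf); /* may return registered memory */
#   /* ... use buf as a communication buffer ... */
#   MPI_Free_mem(buf);                        /* must happen before MPI_Finalize */
#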
[leave pinned failed]
A process attempted to use the "leave pinned" MPI feature, but no
memory registration hooks were found on the system at run time. This
may be the result of running on a system that does not support memory
hooks or having some other software subvert Open MPI's use of the
memory hooks. You can disable Open MPI's use of memory hooks by
setting both the mpi_leave_pinned and mpi_leave_pinned_pipeline MCA
parameters to 0.
Open MPI will disable any transports that are attempting to use the
leave pinned functionality; your job may still run, but may fall back
to a slower network transport (such as TCP).
Mpool name: %s
Process: %s
Local host: %s
#
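# Usage note (illustrative, not part of the original help text): the two
# MCA parameters named above can be set on the mpirun command line, e.g.:
#
#   mpirun --mca mpi_leave_pinned 0 --mca mpi_leave_pinned_pipeline 0 ./app
#
# where ./app stands for any MPI application binary.
#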
[cannot deregister in-use memory]
Open MPI intercepted a call to free memory that is still being used by
an ongoing MPI communication. This usually reflects an error in the
MPI application; it may signify memory corruption. Open MPI will now
abort your job.
Mpool name: %s
Local host: %s
Buffer address: %p
Buffer size: %lu
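#
# Illustrative sketch (not part of the original help text): the error
# above typically catches a pattern like freeing a buffer while a
# nonblocking operation on it is still in flight. In C (buf, count,
# dest, and tag are placeholder names):
#
#   MPI_Request req;
#   MPI_Isend(buf, count, MPI_BYTE, dest, tag, MPI_COMM_WORLD, &req);
#   free(buf);                          /* WRONG: the send may still use buf */
#   MPI_Wait(&req, MPI_STATUS_IGNORE);  /* free(buf) belongs after this */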