552c9ca5a0
WHAT: Open our low-level communication infrastructure by moving all
necessary components (btl/rcache/allocator/mpool) down in OPAL.

All the components required for inter-process communication are
currently deeply integrated in the OMPI layer. Several
groups/institutions have expressed interest in having a more generic
communication infrastructure, without all the OMPI layer dependencies.
This communication layer should be made available at a different
software level, available to all layers in the Open MPI software
stack. As an example, our ORTE layer could replace the current OOB and
instead use the BTL directly, gaining access to more reactive network
interfaces than TCP. Similarly, external software libraries could take
advantage of our highly optimized AM (active message) communication
layer for their own purposes.

UTK, with support from Sandia, developed a version of Open MPI where
the entire communication infrastructure has been moved down to OPAL
(btl/rcache/allocator/mpool). Most of the moved components have been
updated to match the new schema, with a few exceptions (mainly BTLs
where I have no way of compiling/testing them). Thus, the completion
of this RFC is tied to being able to complete this move for all BTLs.
For this we need help from the rest of the Open MPI community,
especially those supporting some of the BTLs. A non-exhaustive list of
BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.

This commit was SVN r32317.
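For code that consumes these interfaces, the visible effect of the move
is the layer change: headers and components that used to live under
ompi/ now live under opal/. A minimal before/after sketch in C,
assuming the conventional MCA source layout (exact paths in any given
tree may differ):

    /* Before the move: the BTL interface lived in the OMPI layer, so
     * any consumer was pulled into the OMPI dependency chain. */
    #include "ompi/mca/btl/btl.h"

    /* After the move: the same interface is provided by OPAL, the
     * lowest layer of the stack, so OPAL, ORTE, and OMPI code -- and,
     * in principle, external libraries -- can drive the BTLs directly. */
    #include "opal/mca/btl/btl.h"

The rcache, allocator, and mpool frameworks follow the same pattern.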
# -*- text -*-
#
# Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
#                         University Research and Technology
#                         Corporation. All rights reserved.
# Copyright (c) 2004-2005 The University of Tennessee and The University
#                         of Tennessee Research Foundation. All rights
#                         reserved.
# Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
#                         University of Stuttgart. All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
#                         All rights reserved.
# Copyright (c) 2007-2012 Cisco Systems, Inc. All rights reserved.
# Copyright (c) 2013      NVIDIA Corporation. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow
#
# $HEADER$
#
# This is the US/English general help file for Open MPI.
#
[mpi_init:startup:internal-failure]
It looks like %s failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during %s; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  %s
  --> Returned "%s" (%d) instead of "Success" (0)
#
[mpi_init:startup:pml-add-procs-fail]
MPI_INIT has failed because at least one MPI process is unreachable
from another. This *usually* means that an underlying communication
plugin -- such as a BTL or an MTL -- has either not loaded or not
allowed itself to be used. Your MPI job will now abort.

You may wish to try to narrow down the problem:

 * Check the output of ompi_info to see which BTL/MTL plugins are
   available.
 * Run your application with MPI_THREAD_SINGLE.
 * Set the MCA parameter btl_base_verbose to 100 (or mtl_base_verbose,
   if using MTL-based communications) to see exactly which
   communication plugins were considered and/or discarded.
#
[mpi-param-check-enabled-but-compiled-out]
WARNING: The MCA parameter mpi_param_check has been set to true, but
parameter checking has been compiled out of Open MPI. The
mpi_param_check value has therefore been ignored.
#
[mpi-params:leave-pinned-and-pipeline-selected]
WARNING: Cannot set both the MCA parameters mpi_leave_pinned and
mpi_leave_pinned_pipeline to "true". Defaulting to mpi_leave_pinned
ONLY.
#
[mpi_finalize:invoked_multiple_times]
The function MPI_FINALIZE was invoked multiple times in a single
process on host %s, PID %d.

This indicates an erroneous MPI program; MPI_FINALIZE is only allowed
to be invoked exactly once in a process.
#
[proc:heterogeneous-support-unavailable]
The build of Open MPI running on host %s was not
compiled with heterogeneous support. A process running on host
%s appears to have a different architecture,
which will not work. Please recompile Open MPI with the
configure option --enable-heterogeneous or use a homogeneous
environment.
#
[sparse groups enabled but compiled out]
WARNING: The MCA parameter mpi_use_sparse_group_storage has been set
to true, but sparse group support was not compiled into Open MPI. The
mpi_use_sparse_group_storage value has therefore been ignored.
#
[heterogeneous-support-unavailable]
This installation of Open MPI was configured without support for
heterogeneous architectures, but at least one node in the allocation
was detected to have a different architecture. The detected node was:

  Node: %s

In order to operate in a heterogeneous environment, please reconfigure
Open MPI with --enable-heterogeneous.
#
[ompi mpi abort:cannot guarantee all killed]
An MPI process is aborting at a time when it cannot guarantee that all
of its peer processes in the job will be killed properly. You should
double check that everything has shut down cleanly.

  Reason:     %s
  Local host: %s
  PID:        %d
#
[no cuda support]
The user requested CUDA support with the --mca mpi_cuda_support 1 flag
but the library was not compiled with any support.
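For reference, the messages above are rendered through OPAL's show-help
facility: a caller names the help file, a [topic] tag, and the
printf-style arguments that fill the topic's placeholders. A minimal
sketch in C, assuming this file is installed as help-mpi-runtime.txt
(the file name and the helper below are illustrative; the
opal_show_help() call itself is the real OPAL API):

    #include "opal/util/show_help.h"

    /* Hypothetical helper: emit the [mpi_init:startup:internal-failure]
     * topic above. The varargs fill its four %s and one %d placeholders
     * in order; 'true' requests the standard error header. */
    static void report_internal_failure(const char *func,
                                        const char *detail,
                                        const char *err_name, int rc)
    {
        opal_show_help("help-mpi-runtime.txt",
                       "mpi_init:startup:internal-failure", true,
                       func,     /* "It looks like %s failed ..."     */
                       func,     /* "... fail during %s ..."          */
                       detail,   /* developer-oriented information %s */
                       err_name, /* Returned "%s" ...                 */
                       rc);      /* ... (%d) instead of "Success" (0) */
    }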