# -*- text -*-
#
# Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
#                         University Research and Technology
#                         Corporation. All rights reserved.
# Copyright (c) 2004-2005 The University of Tennessee and The University
#                         of Tennessee Research Foundation. All rights
#                         reserved.
# Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
#                         University of Stuttgart. All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
#                         All rights reserved.
# Copyright (c) 2007-2012 Cisco Systems, Inc. All rights reserved.
# Copyright (c) 2013      NVIDIA Corporation. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow
#
# $HEADER$
#
# This is the US/English general help file for Open MPI.
#
[mpi_init:startup:internal-failure]
It looks like %s failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during %s, some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  %s
  --> Returned "%s" (%d) instead of "Success" (0)
#
[mpi_init:startup:pml-add-procs-fail]
MPI_INIT has failed because at least one MPI process is unreachable
from another. This *usually* means that an underlying communication
plugin -- such as a BTL or an MTL -- has either not loaded or not
allowed itself to be used. Your MPI job will now abort.

You may wish to try to narrow down the problem:

  * Check the output of ompi_info to see which BTL/MTL plugins are
    available.
  * Run your application with MPI_THREAD_SINGLE.
  * Set the MCA parameter btl_base_verbose to 100 (or mtl_base_verbose,
    if using MTL-based communications) to see exactly which
    communication plugins were considered and/or discarded.
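
For example (the process count and application name below are only
placeholders for your own job):

  shell$ ompi_info | grep btl
  shell$ ompi_info | grep mtl
  shell$ mpirun --mca btl_base_verbose 100 -np 4 ./my_mpi_app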
#
[mpi-param-check-enabled-but-compiled-out]
WARNING: The MCA parameter mpi_param_check has been set to true, but
parameter checking has been compiled out of Open MPI. The
mpi_param_check value has therefore been ignored.
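
If you want parameter checking available at run time, one option is to
rebuild Open MPI with run-time parameter checking enabled; for example,
assuming your version of Open MPI provides the --with-mpi-param-check
configure option:

  shell$ ./configure --with-mpi-param-check=runtime ...
  shell$ make all install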
#
[mpi_finalize:invoked_multiple_times]
The function MPI_FINALIZE was invoked multiple times in a single
process on host %s, PID %d.
This indicates an erroneous MPI program; MPI_FINALIZE is only allowed
to be invoked exactly once in a process.
#
[proc:heterogeneous-support-unavailable]
The build of Open MPI running on host %s was not
compiled with heterogeneous support. A process running on host
%s appears to have a different architecture,
which will not work. Please recompile Open MPI with the
configure option --enable-heterogeneous or use a homogeneous
environment.
#
[sparse groups enabled but compiled out]
WARNING: The MCA parameter mpi_use_sparse_group_storage has been set
to true, but sparse group support was not compiled into Open MPI. The
mpi_use_sparse_group_storage value has therefore been ignored.
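
If you want sparse group storage available, one option is to rebuild
Open MPI with sparse group support enabled; for example, assuming your
version of Open MPI provides the --enable-sparse-groups configure
option:

  shell$ ./configure --enable-sparse-groups ...
  shell$ make all install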
#
[heterogeneous-support-unavailable]
This installation of Open MPI was configured without support for
heterogeneous architectures, but at least one node in the allocation
was detected to have a different architecture. The detected node was:

  Node: %s

In order to operate in a heterogeneous environment, please reconfigure
Open MPI with --enable-heterogeneous.
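
For example (any other configure arguments you normally use are elided
here):

  shell$ ./configure --enable-heterogeneous ...
  shell$ make all install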
#
[ompi mpi abort:cannot guarantee all killed]
An MPI process is aborting at a time when it cannot guarantee that all
of its peer processes in the job will be killed properly. You should
double check that everything has shut down cleanly.

  Reason:     %s
  Local host: %s
  PID:        %d
#
[no cuda support]
The user requested CUDA support with the --mca mpi_cuda_support 1 flag,
but this build of Open MPI was not compiled with CUDA support.
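
If CUDA support is needed, one option is to rebuild Open MPI with the
--with-cuda configure option (the CUDA installation path below is only
an example; substitute your own):

  shell$ ./configure --with-cuda=/usr/local/cuda ...
  shell$ make all install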