# -*- text -*-
#
# Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
#                         University Research and Technology
#                         Corporation. All rights reserved.
# Copyright (c) 2004-2005 The University of Tennessee and The University
#                         of Tennessee Research Foundation. All rights
#                         reserved.
# Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
#                         University of Stuttgart. All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
#                         All rights reserved.
# Copyright (c) 2007-2010 Cisco Systems, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow
#
# $HEADER$
#
# This is the US/English general help file for Open MPI.
#
[mpi_init:startup:internal-failure]
It looks like %s failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during %s, some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  %s
  --> Returned "%s" (%d) instead of "Success" (0)
#
[mpi_init:startup:pml-add-procs-fail]
MPI_INIT has failed because at least one MPI process is unreachable
from another. This *usually* means that an underlying communication
plugin -- such as a BTL or an MTL -- has either not loaded or not
allowed itself to be used. Your MPI job will now abort.

You may wish to try to narrow down the problem:

* Check the output of ompi_info to see which BTL/MTL plugins are
  available.
* Run your application with MPI_THREAD_SINGLE.
* Set the MCA parameter btl_base_verbose to 100 (or mtl_base_verbose,
  if using MTL-based communications) to see exactly which
  communication plugins were considered and/or discarded.
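#
# An illustrative sketch of the checks suggested above (the application
# name and process count are placeholders, not part of any message):
#
#   ompi_info | grep -E "btl|mtl"
#   mpirun --mca btl_base_verbose 100 -np 4 ./my_app
#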
#
[mpi-param-check-enabled-but-compiled-out]
WARNING: The MCA parameter mpi_param_check has been set to true, but
parameter checking has been compiled out of Open MPI. The
mpi_param_check value has therefore been ignored.
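#
# Illustrative sketch only: parameter checking is selected at configure
# time, so the option below should be verified against "configure
# --help" for the installed version:
#
#   ./configure --with-mpi-param-check=always
#   mpirun --mca mpi_param_check 1 -np 2 ./my_app
#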
[mpi-params:leave-pinned-and-pipeline-selected]
WARNING: Cannot set both the MCA parameters mpi_leave_pinned and
mpi_leave_pinned_pipeline to "true". Defaulting to mpi_leave_pinned
ONLY.
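#
# For example, setting only one of the two parameters avoids this
# warning (illustrative command line; the application name is a
# placeholder):
#
#   mpirun --mca mpi_leave_pinned 1 -np 2 ./my_app
#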
[mpi_init:startup:paffinity-unavailable]
The MCA parameter "opal_paffinity_alone" was set to a nonzero value,
but Open MPI was unable to bind MPI_COMM_WORLD rank %s to a processor.

Typical causes for this problem include:

- A node was oversubscribed (more processes than processors), in
  which case Open MPI will not bind any processes on that node
- A startup mechanism was used which did not tell Open MPI which
  processors to bind processes to
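#
# Illustrative sketch of how this parameter is typically set (the
# process count and application name are placeholders):
#
#   mpirun --mca opal_paffinity_alone 1 -np 4 ./my_app
#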
[mpi_finalize:invoked_multiple_times]
The function MPI_FINALIZE was invoked multiple times in a single
process on host %s, PID %d.

This indicates an erroneous MPI program; MPI_FINALIZE may be invoked
only once in a process.
[proc:heterogeneous-support-unavailable]
The build of Open MPI running on host %s was not
compiled with heterogeneous support. A process running on host
%s appears to have a different architecture,
which will not work. Please recompile Open MPI with the
configure option --enable-heterogeneous or use a homogeneous
environment.
#
[sparse groups enabled but compiled out]
WARNING: The MCA parameter mpi_use_sparse_group_storage has been set
to true, but sparse group support was not compiled into Open MPI. The
mpi_use_sparse_group_storage value has therefore been ignored.
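#
# Illustrative sketch only: sparse group support is selected at
# configure time; verify the option name with "configure --help":
#
#   ./configure --enable-sparse-groups
#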
#
[heterogeneous-support-unavailable]
This installation of Open MPI was configured without support for
heterogeneous architectures, but at least one node in the allocation
was detected to have a different architecture. The detected node was:

  Node: %s

In order to operate in a heterogeneous environment, please reconfigure
Open MPI with --enable-heterogeneous.
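#
# For example, a heterogeneous-capable build can be produced by
# reconfiguring and rebuilding (illustrative sketch; the installation
# prefix is a placeholder):
#
#   ./configure --enable-heterogeneous --prefix=/opt/openmpi
#   make all install
#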
#
[mpi_init:warn-fork]
An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process. Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your MPI job may hang, crash, or produce silent
data corruption. The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.

The process that invoked fork was:

  Local host:          %s (PID %d)
  MPI_COMM_WORLD rank: %d

If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
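#
# For example, the warning can be suppressed as follows (illustrative;
# the application name and process count are placeholders):
#
#   mpirun --mca mpi_warn_on_fork 0 -np 2 ./my_app
#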