# -*- text -*-
#
# Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
#                         University Research and Technology
#                         Corporation.  All rights reserved.
# Copyright (c) 2004-2005 The University of Tennessee and The University
#                         of Tennessee Research Foundation.  All rights
#                         reserved.
# Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
#                         University of Stuttgart.  All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
#                         All rights reserved.
# Copyright (c) 2007-2012 Cisco Systems, Inc.  All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow
#
# $HEADER$
#
# This is the US/English general help file for Open MPI.
#
[mpi_init:startup:internal-failure]
It looks like %s failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during %s; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  %s
  --> Returned "%s" (%d) instead of "Success" (0)
#
[mpi_init:startup:pml-add-procs-fail]
MPI_INIT has failed because at least one MPI process is unreachable
from another.  This *usually* means that an underlying communication
plugin -- such as a BTL or an MTL -- has either failed to load or has
not allowed itself to be used.  Your MPI job will now abort.

You may wish to try to narrow down the problem:

* Check the output of ompi_info to see which BTL/MTL plugins are
  available.
* Run your application with MPI_THREAD_SINGLE.
* Set the MCA parameter btl_base_verbose to 100 (or mtl_base_verbose,
  if using MTL-based communications) to see exactly which
  communication plugins were considered and/or discarded.
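The last suggestion can be sketched as follows.  This is an
illustrative fragment, not part of the help message itself: any Open
MPI MCA parameter can be set through an environment variable named
OMPI_MCA_<param>, and the launch line here is a commented placeholder
(my_app stands in for your program).

```shell
# Equivalent to passing "--mca btl_base_verbose 100" to mpirun: set
# the MCA parameter via its OMPI_MCA_-prefixed environment variable.
export OMPI_MCA_btl_base_verbose=100

# Placeholder launch line -- my_app stands in for your program:
# mpirun -np 4 ./my_app

# Confirm the setting is visible to child processes:
echo "btl_base_verbose=$OMPI_MCA_btl_base_verbose"
```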
#
[mpi-param-check-enabled-but-compiled-out]
WARNING: The MCA parameter mpi_param_check has been set to true, but
parameter checking has been compiled out of Open MPI.  The
mpi_param_check value has therefore been ignored.
[mpi-params:leave-pinned-and-pipeline-selected]
WARNING: Cannot set both the MCA parameters mpi_leave_pinned and
mpi_leave_pinned_pipeline to "true".  Defaulting to mpi_leave_pinned
ONLY.
#
[mpi_finalize:invoked_multiple_times]
The function MPI_FINALIZE was invoked multiple times in a single
process on host %s, PID %d.

This indicates an erroneous MPI program; MPI_FINALIZE is only allowed
to be invoked exactly once in a process.
#
[proc:heterogeneous-support-unavailable]
The build of Open MPI running on host %s was not
compiled with heterogeneous support.  A process running on host
%s appears to have a different architecture,
which will not work.  Please recompile Open MPI with the
configure option --enable-heterogeneous or use a homogeneous
environment.
#
[sparse groups enabled but compiled out]
WARNING: The MCA parameter mpi_use_sparse_group_storage has been set
to true, but sparse group support was not compiled into Open MPI.  The
mpi_use_sparse_group_storage value has therefore been ignored.
#
[heterogeneous-support-unavailable]
This installation of Open MPI was configured without support for
heterogeneous architectures, but at least one node in the allocation
was detected to have a different architecture.  The detected node was:

  Node: %s

In order to operate in a heterogeneous environment, please reconfigure
Open MPI with --enable-heterogeneous.
#
[mpi_init:warn-fork]
An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process.  Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your MPI job may hang, crash, or produce silent
data corruption.  The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.

The process that invoked fork was:

  Local host:          %s (PID %d)
  MPI_COMM_WORLD rank: %d

If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
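The mpi_warn_on_fork parameter mentioned above can be set without
touching the mpirun command line.  This is an illustrative fragment,
not part of the help message: it relies on Open MPI's documented
OMPI_MCA_<param> environment-variable convention, and the launch line
is a commented placeholder (my_app stands in for your program).

```shell
# Equivalent to passing "--mca mpi_warn_on_fork 0" to mpirun.  Only
# do this if you are certain your application survives fork().
export OMPI_MCA_mpi_warn_on_fork=0

# Placeholder launch line -- my_app stands in for your program:
# mpirun -np 2 ./my_app

# Confirm the setting is visible to child processes:
echo "mpi_warn_on_fork=$OMPI_MCA_mpi_warn_on_fork"
```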
#
[ompi mpi abort:cannot guarantee all killed]
An MPI process is aborting at a time when it cannot guarantee that all
of its peer processes in the job will be killed properly.  You should
double check that everything has shut down cleanly.

  Reason:     %s
  Local host: %s
  PID:        %d