# -*- text -*-
#
# Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
#                         University Research and Technology
#                         Corporation. All rights reserved.
# Copyright (c) 2004-2005 The University of Tennessee and The University
#                         of Tennessee Research Foundation. All rights
#                         reserved.
# Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
#                         University of Stuttgart. All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
#                         All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow
#
# $HEADER$
#
# This is the US/English general help file for Open RTE's orterun.
#
[bproc-vexecmove-launch]
The bproc PLS component was not able to launch %s on node %d and therefore
cannot continue. Errno was set to %d.
[bproc-vexecmove-fork]
The bproc PLS component was not able to fork and therefore cannot continue.
Errno was set to %d.
[no-orted]
The bproc PLS component was not able to find the executable "%s" in the
current directory, your PATH, or in the directory where Open MPI was
initially installed, and therefore cannot continue.

For reference, we looked for:

%s

Your current PATH is:

%s

We also looked for orted in the following directory:

%s

You may need to set your PATH properly, or set the MCA parameter
pls_bproc_orted to the path of the "orted" executable.
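
For example (a minimal sketch; the installation prefix /opt/openmpi and
the application name my_app below are only illustrative, not defaults):

  mpirun --mca pls_bproc_orted /opt/openmpi/bin/orted -np 4 ./my_app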
[daemon-launch-number]
The bproc PLS component was not able to launch all the daemons on the remote
nodes and therefore cannot continue.

We attempted to launch %d daemons but only %d were actually launched.

For reference, we tried to launch %s
[daemon-launch-bad-pid]
The bproc PLS component was not able to launch all the daemons on the remote
nodes and therefore cannot continue.

On node %d the daemon pid was %d and errno was set to %d.

For reference, we tried to launch %s
[daemon-died-no-signal]
A daemon (pid %d) launched by the bproc PLS component on node %d died
unexpectedly, so we are aborting.

This may be because the daemon was unable to find all the needed shared
libraries on the remote node. You may set your LD_LIBRARY_PATH to include
the location of the shared libraries on the remote nodes; the setting will
automatically be forwarded to those nodes.
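
For example, in a Bourne-style shell (the library path below is only
illustrative; substitute the actual location on your remote nodes):

  export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH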
[daemon-died-signal]
A daemon (pid %d) launched by the bproc PLS component on node %d died
unexpectedly on signal %d, so we are aborting.

This may be because the daemon was unable to find all the needed shared
libraries on the remote node. You may set your LD_LIBRARY_PATH to include
the location of the shared libraries on the remote nodes; the setting will
automatically be forwarded to those nodes.
[proc-launch-number]
The bproc PLS component was not able to launch all the processes on the remote
nodes and therefore cannot continue.

We attempted to launch %d processes but only %d were actually launched.

For reference, we tried to launch %s
[proc-launch-bad-pid]
The bproc PLS component was not able to launch all the processes on the remote
nodes and therefore cannot continue.

On node %d the process pid was %d and errno was set to %d.

For reference, we tried to launch %s
[mismatched-slots]
The current bproc support requires that the number of available
slots on each node be the same. Note that this does -not- mean
that the number of processes you want to launch must be the same.
It only requires that you have access to the same number of process
slots on each node.

This is not something inherent to Open MPI, but rather a reported
characteristic of Bproc. We are in the process of confirming that
this requirement remains in effect. If we find that it has been
removed, then we will revise the system to support varying
numbers of slots on the allocated nodes.

In the meantime, please revise your hostfile or other allocation so
that it reports the same number of process slots on each node. If you
want to force a particular mapping of numbers of processes to each
node, please use any of the other Open MPI mechanisms for doing so.
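
For example, a hostfile of this form gives every node the same number of
process slots (the node names and the slot count of 4 are only
illustrative):

  node1 slots=4
  node2 slots=4
  node3 slots=4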