# -*- text -*-
#
# Copyright (c) 2004-2006 The Trustees of Indiana University and Indiana
#                         University Research and Technology
#                         Corporation.  All rights reserved.
# Copyright (c) 2004-2005 The University of Tennessee and The University
#                         of Tennessee Research Foundation.  All rights
#                         reserved.
# Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
#                         University of Stuttgart.  All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
#                         All rights reserved.
# Copyright (c) 2007      Cisco Systems, Inc.  All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow
#
# $HEADER$
#
# This is the US/English general help file for Open RTE's orterun.
#
[orterun:init-failure]
Open RTE was unable to initialize properly. The error occurred while
attempting to %s. Returned value %d instead of ORTE_SUCCESS.
[orterun:usage]
%s (%s) %s

Usage: %s [OPTION]... [PROGRAM]...
Start the given program using Open RTE

%s

Report bugs to %s
[orterun:version]
%s (%s) %s

Report bugs to %s
[orterun:allocate-resources]
%s was unable to allocate enough resources to start your application.
This might be a transient error (too many nodes in the cluster were
unavailable at the time of the request) or a permanent error (you
requested more nodes than exist in your cluster).

While probably only useful to Open RTE developers, the error returned
was %d.
[orterun:error-spawning]
%s was unable to start the specified application. An attempt has been
made to clean up all processes that did start. The error returned was
%d.
[orterun:appfile-not-found]
Unable to open the appfile:

%s

Double check that this file exists and is readable.
[orterun:executable-not-specified]
No executable was specified on the %s command line.

Aborting.
[orterun:multi-apps-and-zero-np]
%s found multiple applications specified on the command line, with
at least one that failed to specify the number of processes to execute.
When specifying multiple applications, you must specify how many processes
of each to launch via the -np argument.
[orterun:nothing-to-do]
%s could not find anything to do.

It is possible that you forgot to specify how many processes to run
via the "-np" argument.
[orterun:call-failed]
%s encountered a %s call failure. This should not happen, and
usually indicates an error within the operating system itself.
Specifically, the following error occurred:

%s

The only other available information that may be helpful is the errno
that was returned: %d.
[orterun:environ]
%s was unable to set
 %s = %s
in the environment. Returned value %d instead of ORTE_SUCCESS.
[orterun:precondition]
%s was unable to precondition transports.
Returned value %d instead of ORTE_SUCCESS.
[orterun:attr-failed]
%s was unable to define an attribute.
Returned value %d instead of ORTE_SUCCESS.
#
[orterun:proc-ordered-abort]
%s has exited due to process rank %lu with PID %lu on
node %s calling "abort". This may have caused other processes
in the application to be terminated by signals sent by %s
(as reported here).
#
[orterun:proc-exit-no-sync]
%s has exited due to process rank %lu with PID %lu on
node %s exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by %s (as reported here).
#
[orterun:proc-exit-no-sync-unknown]
%s has exited due to a process exiting without calling "finalize",
but has no info as to the process that caused that situation. This
may have caused other processes in the application to be
terminated by signals sent by %s (as reported here).
#
[orterun:proc-aborted]
%s noticed that process rank %lu with PID %lu on node %s exited on signal %d.
#
[orterun:proc-aborted-unknown]
%s noticed that the job aborted, but has no info as to the process
that caused that situation.
#
[orterun:proc-aborted-signal-unknown]
%s noticed that the job aborted by signal, but has no info as
to the process that caused that situation.
#
[orterun:proc-aborted-strsignal]
%s noticed that process rank %lu with PID %lu on node %s exited on signal %d (%s).
#
[orterun:abnormal-exit]
WARNING: %s has exited before it received notification that all
started processes had terminated. You should double-check and ensure
that there are no runaway processes still executing.
#
[orterun:sigint-while-processing]
WARNING: %s is in the process of killing a job, but has detected an
interruption (probably control-C).

It is dangerous to interrupt %s while it is killing a job (proper
termination may not be guaranteed). Hit control-C again within 1
second if you really want to kill %s immediately.
#
[orterun:empty-prefix]
A prefix was supplied to %s that only contained slashes.

This is a fatal error; %s will now abort. No processes were launched.
#
[debugger-mca-param-not-found]
Internal error -- the orte_base_user_debugger MCA parameter could not
be found. Please contact the Open RTE developers; this should not
happen.
#
[debugger-orte_base_user_debugger-empty]
The MCA parameter "orte_base_user_debugger" was empty, indicating that
no user-level debuggers have been defined. Please set this MCA
parameter to a value and try again.
#
[debugger-not-found]
A suitable debugger could not be found in your PATH. Check the values
specified in the orte_base_user_debugger MCA parameter for the list of
debuggers that were searched.
#
[debugger-exec-failed]
%s was unable to launch the specified debugger. This is what was
launched:

%s

Things to check:

- Ensure that the debugger is installed properly
- Ensure that the "%s" executable is in your path
- Ensure that any required licenses are available to run the debugger
#
[orterun:sys-limit-pipe]
%s was unable to launch the specified application as it encountered an error:

Error: system limit exceeded on number of pipes that can be open
Node: %s

when attempting to start process rank %lu.

This can be resolved by either asking the system administrator for that node to
increase the system limit, or by rearranging your processes to place fewer of them
on that node.
#
[orterun:pipe-setup-failure]
%s was unable to launch the specified application as it encountered an error:

Error: pipe function call failed when setting up I/O forwarding subsystem
Node: %s

while attempting to start process rank %lu.
#
[orterun:sys-limit-children]
%s was unable to launch the specified application as it encountered an error:

Error: system limit exceeded on number of processes that can be started
Node: %s

when attempting to start process rank %lu.

This can be resolved by either asking the system administrator for that node to
increase the system limit, or by rearranging your processes to place fewer of them
on that node.
#
[orterun:failed-term-attrs]
%s was unable to launch the specified application as it encountered an error:

Error: reading tty attributes function call failed while setting up I/O forwarding system
Node: %s

while attempting to start process rank %lu.
#
[orterun:wdir-not-found]
%s was unable to launch the specified application as it could not change to the
specified working directory:

Working directory: %s
Node: %s

while attempting to start process rank %lu.
#
[orterun:exe-not-found]
%s was unable to launch the specified application as it could not find an executable:

Executable: %s
Node: %s

while attempting to start process rank %lu.
#
[orterun:exe-not-accessible]
%s was unable to launch the specified application as it could not access
or execute an executable:

Executable: %s
Node: %s

while attempting to start process rank %lu.
#
[orterun:pipe-read-failure]
%s was unable to launch the specified application as it encountered an error:

Error: reading from a pipe function call failed while spawning a local process
Node: %s

while attempting to start process rank %lu.
#
[orterun:proc-failed-to-start]
%s was unable to start the specified application as it encountered an error:

Error name: %s
Node: %s

when attempting to start process rank %lu.
#
[orterun:proc-failed-to-start-no-status]
%s was unable to start the specified application as it encountered an error
on node %s. More information may be available above.
#
[orterun:proc-failed-to-start-no-status-no-node]
%s was unable to start the specified application as it encountered an error.
More information may be available above.
#
[debugger requires -np]
The number of MPI processes to launch was not specified on the command
line.

The %s debugger requires that you specify a number of MPI processes to
launch on the command line via the "-np" command line parameter. For
example:

%s -np 4 %s

Skipping the %s debugger for now.
#
[debugger requires executable]
The %s debugger requires that you specify an executable on the %s
command line; you cannot specify application context files when
launching this job in the %s debugger. For example:

%s -np 4 my_mpi_executable

Skipping the %s debugger for now.
#
[debugger only accepts single app]
The %s debugger only accepts SPMD-style launching; specifying an
MPMD-style launch (with multiple applications separated via ':') is
not permitted.

Skipping the %s debugger for now.
#
[orterun:daemon-died-during-execution]
%s has detected that a required daemon terminated during execution
of the application with a non-zero status. This is a fatal error.
A best-effort attempt has been made to clean up. However, it is
-strongly- recommended that you execute the orte-clean utility
to ensure full cleanup is accomplished.
#
[orterun:no-orted-object-exit]
%s was unable to determine the status of the daemons used to
launch this application. Additional manual cleanup may be required.
Please refer to the "orte-clean" tool for assistance.
#
[orterun:unclean-exit]
%s was unable to cleanly terminate the daemons on the nodes shown
below. Additional manual cleanup may be required - please refer to
the "orte-clean" tool for assistance.
#
[orterun:event-def-failed]
%s was unable to define an event required for proper operation of
the system. The reason for this error was:

Error: %s

Please report this to the Open MPI mailing list users@open-mpi.org.
#
[orterun:ompi-server-filename-bad]
%s was unable to parse the filename where contact info for the
ompi-server was to be found. The option we were given was:

--ompi-server %s

This appears to be missing the required ':' following the
keyword "file". Please remember that the correct format for this
command line option is:

--ompi-server file:path-to-file

where path-to-file can be either relative to the cwd or absolute.
#
[orterun:ompi-server-filename-missing]
%s was unable to parse the filename where contact info for the
ompi-server was to be found. The option we were given was:

--ompi-server %s

This appears to be missing a filename following the ':'. Please
remember that the correct format for this command line option is:

--ompi-server file:path-to-file

where path-to-file can be either relative to the cwd or absolute.
#
[orterun:ompi-server-filename-access]
%s was unable to access the filename where contact info for the
ompi-server was to be found. The option we were given was:

--ompi-server %s

Please remember that the correct format for this command line option is:

--ompi-server file:path-to-file

where path-to-file can be either relative to the cwd or absolute, and that
you must have read access permissions to that file.
#
[orterun:ompi-server-file-bad]
%s was unable to read the ompi-server's contact info from the
given filename. The filename we were given was:

FILE: %s

Please remember that the correct format for this command line option is:

--ompi-server file:path-to-file

where path-to-file can be either relative to the cwd or absolute, and that
the file must have a single line in it that contains the Open MPI
uri for the ompi-server. Note that this is *not* a standard uri, but
a special format used internally by Open MPI for communications. It can
best be generated by simply directing the ompi-server to put its
uri in a file, and then giving %s that filename.
[orterun:multiple-hostfiles]
Error: More than one hostfile was passed for a single application context, which
is not supported at this time.
#
[orterun:conflicting-params]
%s has detected multiple instances of an MCA param being specified on
the command line, with conflicting values:

MCA param: %s
Value 1: %s
Value 2: %s

This MCA param does not support multiple values, and the system is unable
to identify which value was intended. If this was done in error, please
re-issue the command with only one value. You may wish to review the
output from ompi_info for guidance on accepted values for this param.
#
[orterun:server-not-found]
%s was instructed to wait for the requested ompi-server, but was unable to
establish contact with the server during the specified wait time:

Server uri: %s
Timeout time: %ld

Error received: %s

Please check to ensure that the requested server matches the actual server
information, and that the server is in operation.
#
[orterun:ompi-server-pid-bad]
%s was unable to parse the PID of the %s to be used as the ompi-server.
The option we were given was:

--ompi-server %s

Please remember that the correct format for this command line option is:

--ompi-server PID:pid-of-%s

where PID can be either "PID" or "pid".
#
[orterun:ompi-server-could-not-get-hnp-list]
%s was unable to search the list of local %s contact files to find the specified pid.
You might check to see if your local session directory is available and
that you have read permissions on the top of that directory tree.
#
[orterun:ompi-server-pid-not-found]
%s was unable to find an %s with the specified pid of %d that was to be used as the ompi-server.
The option we were given was:

--ompi-server %s

Please remember that the correct format for this command line option is:

--ompi-server PID:pid-of-%s

where PID can be either "PID" or "pid".
#
[orterun:write_file]
%s was unable to open a file to print out %s as requested. The file
name given was:

File: %s