# -*- text -*-
#
# Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
#                         University Research and Technology
#                         Corporation.  All rights reserved.
# Copyright (c) 2004-2005 The University of Tennessee and The University
#                         of Tennessee Research Foundation.  All rights
#                         reserved.
# Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
#                         University of Stuttgart.  All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
#                         All rights reserved.
# Copyright (c) 2007      Cisco Systems, Inc.  All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow
#
# $HEADER$
#
# This is the US/English general help file for Open MPI.
#
[mpi_init:startup:internal-failure]
It looks like %s failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during %s, some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  %s
  --> Returned "%s" (%d) instead of "Success" (0)
[mpi-param-check-enabled-but-compiled-out]
WARNING: The MCA parameter mpi_param_check has been set to true, but
parameter checking has been compiled out of Open MPI. The
mpi_param_check value has therefore been ignored.
[mpi-params:leave-pinned-and-pipeline-selected]
WARNING: Cannot set both the MCA parameters mpi_leave_pinned and
mpi_leave_pinned_pipeline to "true". Defaulting to mpi_leave_pinned
ONLY.
[mpi_init:startup:paffinity-unavailable]
The MCA parameter "mpi_paffinity_alone" was set to a nonzero value,
but Open MPI was unable to bind MPI_COMM_WORLD rank %s to a processor.

Typical causes for this problem include:

  - A node was oversubscribed (more processes than processors), in
    which case Open MPI will not bind any processes on that node
  - A startup mechanism was used which did not tell Open MPI which
    processors to bind processes to
[mpi_finalize:invoked_multiple_times]
The function MPI_FINALIZE was invoked multiple times in a single
process on host %s, PID %d. This indicates an erroneous MPI program;
MPI_FINALIZE is only allowed to be invoked exactly once in a process.
[proc:heterogeneous-support-unavailable]
The build of Open MPI running on host %s was not compiled with
heterogeneous support. A process running on host %s appears to have a
different architecture, which will not work. Please recompile Open MPI
with the configure option --enable-heterogeneous or use a homogeneous
environment.
#
[sparse groups enabled but compiled out]
WARNING: The MCA parameter mpi_use_sparse_group_storage has been set
to true, but sparse group support was not compiled into Open MPI. The
mpi_use_sparse_group_storage value has therefore been ignored.
#
[heterogeneous-support-unavailable]
This installation of Open MPI was configured without support for
heterogeneous architectures, but at least one node in the allocation
was detected to have a different architecture. The detected node was:

  Node: %s

In order to operate in a heterogeneous environment, please reconfigure
Open MPI with --enable-heterogeneous.
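#
# Illustrative sketch (comment lines only, not part of any help message
# above): the MCA parameters referenced in these messages are typically
# set on the mpirun command line or via environment variables of the
# form OMPI_MCA_<param_name>. The values and the application name
# "./my_app" below are examples, not recommendations:
#
#   mpirun --mca mpi_param_check 1 --mca mpi_leave_pinned 1 -np 4 ./my_app
#
#   export OMPI_MCA_mpi_param_check=1
#   mpirun -np 4 ./my_app
#
# Heterogeneous support is selected when Open MPI itself is configured:
#
#   ./configure --enable-heterogeneous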