/*
 * Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation.  All rights reserved.
 * Copyright (c) 2004-2005 The University of Tennessee and The University
 *                         of Tennessee Research Foundation.  All rights
 *                         reserved.
 * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart.  All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * Copyright (c) 2007      Los Alamos National Security, LLC.  All rights
 *                         reserved.
 * Copyright (c) 2006-2009 Cisco Systems, Inc.  All rights reserved.
 * Copyright (c) 2013      NVIDIA Corporation.  All rights reserved.
 * Copyright (c) 2013      Intel, Inc.  All rights reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

#ifndef OMPI_RUNTIME_PARAMS_H
#define OMPI_RUNTIME_PARAMS_H

#include "ompi_config.h"

BEGIN_C_DECLS

/*
 * Global variables
 */

/**
 * Whether or not to check the parameters of top-level MPI API
 * functions.
 *
 * This variable should never be checked directly; the macro
 * MPI_PARAM_CHECK should be used instead.  This allows multiple
 * levels of MPI function parameter checking:
 *
 * #- Disable all parameter checking at configure/compile time
 * #- Enable all parameter checking at configure/compile time
 * #- Disable all parameter checking at run time
 * #- Enable all parameter checking at run time
 *
 * Hence, the MPI_PARAM_CHECK macro will either be "0", "1", or
 * "ompi_mpi_param_check".
 */
OMPI_DECLSPEC extern bool ompi_mpi_param_check;

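A minimal sketch of the guard pattern this macro enables inside a top-level MPI API function.  Everything below is a stand-in for illustration: `demo_param_check`, `demo_send`, and the error codes are hypothetical names, and the `#define` mimics only the "run time" variant of MPI_PARAM_CHECK.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for the real macro: in an actual build MPI_PARAM_CHECK
 * expands to 0, 1, or ompi_mpi_param_check depending on
 * configure-time choices. */
static int demo_param_check = 1;           /* run-time toggle */
#define MPI_PARAM_CHECK demo_param_check

enum { DEMO_SUCCESS = 0, DEMO_ERR_ARG = 1 };

/* Hypothetical top-level API function guarding its checks. */
static int demo_send(const void *buf, int count)
{
    if (MPI_PARAM_CHECK) {                 /* compiled out when 0 */
        if (NULL == buf || count < 0) {
            return DEMO_ERR_ARG;
        }
    }
    return DEMO_SUCCESS;
}
```

When the macro is a compile-time constant 0, the whole check block is dead code the compiler removes, which is the point of routing everything through one macro.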
/**
 * Whether or not to check for MPI handle leaks during MPI_FINALIZE.
 * If enabled, each MPI handle type will display a summary of the
 * handles that are still allocated during MPI_FINALIZE.
 *
 * This is good debugging for user applications to find out if they
 * are inadvertently orphaning MPI handles.
 */
OMPI_DECLSPEC extern bool ompi_debug_show_handle_leaks;

/**
 * If > 0, show that many MPI_ALLOC_MEM leaks during MPI_FINALIZE.  If
 * enabled, memory that was returned via MPI_ALLOC_MEM but was never
 * freed via MPI_FREE_MEM will be displayed during MPI_FINALIZE.
 *
 * This is good debugging for user applications to find out if they
 * are inadvertently orphaning MPI "special" memory.
 */
OMPI_DECLSPEC extern int ompi_debug_show_mpi_alloc_mem_leaks;

/**
 * Whether or not to actually free MPI handles when their
 * corresponding destructor is invoked.  If enabled, Open MPI will not
 * free handles, but will rather simply mark them as "freed".  Any
 * attempt to use them will result in an MPI exception.
 *
 * This is good debugging for user applications to find out if they
 * are inadvertently using MPI handles after they have been freed.
 */
OMPI_DECLSPEC extern bool ompi_debug_no_free_handles;

/**
 * Whether or not to print MCA parameters on MPI_INIT
 *
 * This is good debugging for user applications to see exactly which
 * MCA parameters are being used in the current program execution.
 */
OMPI_DECLSPEC extern bool ompi_mpi_show_mca_params;

/**
 * Whether to print the MCA parameters to a file or to stdout.
 *
 * If this variable is set, parameters are dumped to the named file
 * instead of stdout whenever mpi_show_mca_params is set.
 */
OMPI_DECLSPEC extern char * ompi_mpi_show_mca_params_file;

/**
 * Whether an MPI_ABORT should print out a stack trace or not.
 */
OMPI_DECLSPEC extern bool ompi_mpi_abort_print_stack;

/**
 * Whether MPI_ABORT should print out an identifying message
 * (e.g., hostname and PID) and loop waiting for a debugger to
 * attach.  The value of the integer is how many seconds to wait:
 *
 * 0 = do not print the message and do not loop
 * negative value = print the message and loop forever
 * positive value = print the message and delay for that many seconds
 */
OMPI_DECLSPEC extern int ompi_mpi_abort_delay;

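The three cases described above can be dispatched as in the following sketch.  `demo_abort_behavior` is a hypothetical helper for illustration, not the real MPI_ABORT code path.

```c
#include <assert.h>
#include <string.h>

/* Map an abort-delay value to the behavior documented above. */
static const char *demo_abort_behavior(int delay)
{
    if (0 == delay) {
        return "abort immediately";                 /* no message, no loop */
    } else if (delay < 0) {
        return "print message and loop forever";    /* wait for debugger */
    }
    return "print message, wait that many seconds"; /* then abort */
}
```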
/**
 * Whether to use the "leave pinned" protocol or not (0 = no, 1 = yes,
 * -1 = determine at runtime).
 */
OMPI_DECLSPEC extern int ompi_mpi_leave_pinned;

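One way to read the tri-state above: -1 defers to whatever default is detected at runtime, while 0 and 1 are explicit user choices.  `demo_resolve_leave_pinned` is a hypothetical helper sketching that resolution, not the actual logic in the code base.

```c
#include <assert.h>

/* Resolve a tri-state parameter: negative means "use the
 * runtime-detected default", otherwise honor the explicit value. */
static int demo_resolve_leave_pinned(int param, int runtime_default)
{
    return (param < 0) ? runtime_default : param;
}
```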
/**
 * Whether to use the "leave pinned pipeline" protocol or not.
 */
OMPI_DECLSPEC extern bool ompi_mpi_leave_pinned_pipeline;

/**
 * Whether sparse MPI group storage formats are supported or not.
 */
OMPI_DECLSPEC extern bool ompi_have_sparse_group_storage;

/**
 * Whether sparse MPI group storage formats should be used or not.
 */
OMPI_DECLSPEC extern bool ompi_use_sparse_group_storage;

/**
 * Whether we want to enable CUDA GPU buffer send and receive support.
 */
OMPI_DECLSPEC extern bool ompi_mpi_cuda_support;

/*
 * Cutoff point for retrieving hostnames: hostnames are retained only
 * for allocations smaller than this number of nodes.
 */
OMPI_DECLSPEC extern uint32_t ompi_hostname_cutoff;

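The cutoff semantics amount to a single comparison, sketched below.  Assumptions: the default cutoff is effectively unlimited (so hostnames are always retained unless the user lowers it), and `demo_retain_hostnames` is a hypothetical helper, not the real ORTE logic.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Retain hostnames only when the allocation is strictly smaller
 * than the cutoff.  A cutoff of UINT32_MAX means "always retain". */
static bool demo_retain_hostnames(uint32_t num_nodes, uint32_t cutoff)
{
    return num_nodes < cutoff;
}
```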
/**
 * Register MCA parameters used by the MPI layer.
 *
 * @returns OMPI_SUCCESS
 *
 * Registers several MCA parameters and initializes corresponding
 * global variables to the values obtained from the MCA system.
 */
OMPI_DECLSPEC int ompi_mpi_register_params(void);

/**
 * Display all MCA parameters used
 *
 * @returns OMPI_SUCCESS
 *
 * Displays the parameters in "key = value" format.
 */
int ompi_show_all_mca_params(int32_t, int, char *);

END_C_DECLS

#endif /* OMPI_RUNTIME_PARAMS_H */