/* -*- Mode: C; c-basic-offset:4 ; indent-tabs-mode:nil -*- */
/*
 * Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation.  All rights reserved.
 * Copyright (c) 2004-2014 The University of Tennessee and The University
 *                         of Tennessee Research Foundation.  All rights
 *                         reserved.
 * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart.  All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * Copyright (c) 2006      Los Alamos National Security, LLC.  All rights
 *                         reserved.
 * Copyright (c) 2008-2015 Cisco Systems, Inc.  All rights reserved.
 * Copyright (c) 2009      Oak Ridge National Labs.  All rights reserved.
 * Copyright (c) 2010-2014 Los Alamos National Security, LLC.
 *                         All rights reserved.
 * Copyright (c) 2014      Hochschule Esslingen.  All rights reserved.
 * Copyright (c) 2015      Research Organization for Information Science
 *                         and Technology (RIST).  All rights reserved.
 * Copyright (c) 2015      Mellanox Technologies, Inc.
 *                         All rights reserved.
 * Copyright (c) 2017      IBM Corporation.  All rights reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

#include "opal_config.h"

#include <time.h>
#include <signal.h>

#include "opal/constants.h"
#include "opal/runtime/opal.h"
# include "opal/datatype/opal_datatype.h"
2013-03-28 01:09:41 +04:00
# include "opal/mca/base/mca_base_var.h"
2007-06-12 20:25:26 +04:00
# include "opal/threads/mutex.h"
2010-08-05 20:25:32 +04:00
# include "opal/threads/threads.h"
2013-03-28 01:09:41 +04:00
# include "opal/mca/shmem/base/base.h"
# include "opal/mca/base/mca_base_var.h"
# include "opal/runtime/opal_params.h"
# include "opal/dss/dss.h"
2016-05-07 14:12:01 +03:00
# include "opal/util/opal_environ.h"
# include "opal/util/show_help.h"
#include "opal/util/timings.h"

char *opal_signal_string = NULL;
char *opal_stacktrace_output_filename = NULL;
char *opal_net_private_ipv4 = NULL;
char *opal_set_max_sys_limits = NULL;

#if OPAL_ENABLE_TIMING
char *opal_timing_sync_file = NULL;
char *opal_timing_output = NULL;
bool opal_timing_overhead = true;
#endif

bool opal_built_with_cuda_support = OPAL_INT_TO_BOOL(OPAL_CUDA_SUPPORT);
bool opal_cuda_support = false;
#if OPAL_ENABLE_FT_CR == 1
bool opal_base_distill_checkpoint_ready = false;
#endif

/**
 * Globals imported from the OMPI layer.
 */
int opal_leave_pinned = -1;
bool opal_leave_pinned_pipeline = false;

bool opal_abort_print_stack = false;
int opal_abort_delay = 0;

static bool opal_register_done = false;

int opal_register_params(void)
{
    int ret;
    char *string = NULL;

    if (opal_register_done) {
        return OPAL_SUCCESS;
    }

    opal_register_done = true;

    /*
     * This string is going to be used in opal/util/stacktrace.c
     */
    {
        int j;
        int signals[] = {
#ifdef SIGABRT
            SIGABRT,
#endif
#ifdef SIGBUS
            SIGBUS,
#endif
#ifdef SIGFPE
            SIGFPE,
#endif
#ifdef SIGSEGV
            SIGSEGV,
#endif
            -1
        };
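
        /*
         * The loop below turns the signal numbers above into the default
         * value of the opal_signal MCA parameter as a comma-separated
         * string (for example "6,7,8,11" on a typical Linux system; the
         * exact numbers are platform dependent).
         */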
        for (j = 0; signals[j] != -1; ++j) {
            if (j == 0) {
                asprintf(&string, "%d", signals[j]);
            } else {
                char *tmp;
                asprintf(&tmp, "%s,%d", string, signals[j]);
                free(string);
                string = tmp;
            }
        }

        opal_signal_string = string;
        ret = mca_base_var_register("opal", "opal", NULL, "signal",
                                    "Comma-delimited list of integer signal numbers for Open MPI to attempt to intercept.  Upon receipt of the intercepted signal, Open MPI will display a stack trace and abort.  Open MPI will *not* replace signals if handlers are already installed by the time MPI_INIT is invoked.  Optionally append \":complain\" to any signal number in the comma-delimited list to make Open MPI complain if it detects another signal handler (and therefore does not insert its own).",
                                    MCA_BASE_VAR_TYPE_STRING, NULL, 0, MCA_BASE_VAR_FLAG_SETTABLE,
                                    OPAL_INFO_LVL_3, MCA_BASE_VAR_SCOPE_LOCAL,
                                    &opal_signal_string);
        free(string);
        if (0 > ret) {
            return ret;
        }
    }
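
    /*
     * Illustrative example (command line is hypothetical): the default
     * list above can be overridden through the usual MCA mechanisms, e.g.
     *
     *     mpirun --mca opal_signal "6,11" ./a.out
     *
     * Environment variables and MCA parameter files work the same way.
     */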

    /*
     * Where should the stack trace output be directed?
     * This string is going to be used in opal/util/stacktrace.c
     */
    string = strdup("stderr");
    opal_stacktrace_output_filename = string;
    ret = mca_base_var_register("opal", "opal", NULL, "stacktrace_output",
                                "Specifies where the stack trace output stream goes.  "
                                "Accepts one of the following: none (disabled), stderr (default), stdout, file[:filename].  "
                                "If 'filename' is not specified, a default filename of 'stacktrace' is used.  "
                                "The 'filename' is appended with either '.PID' or '.RANK.PID', if RANK is available.  "
                                "The 'filename' can be an absolute path or a relative path to the current working directory.",
                                MCA_BASE_VAR_TYPE_STRING, NULL, 0, MCA_BASE_VAR_FLAG_SETTABLE,
                                OPAL_INFO_LVL_3,
                                MCA_BASE_VAR_SCOPE_LOCAL,
                                &opal_stacktrace_output_filename);
    free(string);
    if (0 > ret) {
        return ret;
    }
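
    /*
     * Illustrative example (file name is hypothetical): with
     *     --mca opal_stacktrace_output file:trace
     * each aborting process would write its stack trace to a file named
     * trace.PID (or trace.RANK.PID when a rank is available), per the
     * description registered above.
     */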

#if defined(HAVE_SCHED_YIELD)
    opal_progress_yield_when_idle = false;
    ret = mca_base_var_register("opal", "opal", "progress", "yield_when_idle",
                                "Yield the processor when waiting on progress",
                                MCA_BASE_VAR_TYPE_BOOL, NULL, 0, MCA_BASE_VAR_FLAG_SETTABLE,
                                OPAL_INFO_LVL_8, MCA_BASE_VAR_SCOPE_LOCAL,
                                &opal_progress_yield_when_idle);
#endif

#if OPAL_ENABLE_DEBUG
    opal_progress_debug = false;
    ret = mca_base_var_register("opal", "opal", "progress", "debug",
                                "Set to non-zero to debug progress engine features",
                                MCA_BASE_VAR_TYPE_BOOL, NULL, 0, MCA_BASE_VAR_FLAG_SETTABLE,
                                OPAL_INFO_LVL_8, MCA_BASE_VAR_SCOPE_LOCAL,
                                &opal_progress_debug);
    if (0 > ret) {
        return ret;
    }

    opal_debug_threads = false;
    ret = mca_base_var_register("opal", "opal", "debug", "threads",
                                "Debug thread usage within OPAL.  Reports out "
                                "when threads are acquired and released.",
                                MCA_BASE_VAR_TYPE_BOOL, NULL, 0, MCA_BASE_VAR_FLAG_SETTABLE,
                                OPAL_INFO_LVL_8, MCA_BASE_VAR_SCOPE_LOCAL,
                                &opal_debug_threads);
    if (0 > ret) {
        return ret;
    }
#endif

#if OPAL_ENABLE_FT_CR == 1
    opal_base_distill_checkpoint_ready = false;
    ret = mca_base_var_register("opal", "opal", "base", "distill_checkpoint_ready",
                                "Distill only those components that are Checkpoint Ready",
                                MCA_BASE_VAR_TYPE_BOOL, NULL, 0, MCA_BASE_VAR_FLAG_SETTABLE,
                                OPAL_INFO_LVL_8, MCA_BASE_VAR_SCOPE_LOCAL,
                                &opal_base_distill_checkpoint_ready);
    if (0 > ret) {
        return ret;
    }
#endif

    /* RFC1918 defines
       - 10.0.0.0/8
       - 172.16.0.0/12
       - 192.168.0.0/16

       RFC3330 also mentions
       - 169.254.0.0/16 for DHCP onlink iff there's no DHCP server
    */
    opal_net_private_ipv4 = "10.0.0.0/8;172.16.0.0/12;192.168.0.0/16;169.254.0.0/16";
    ret = mca_base_var_register("opal", "opal", "net", "private_ipv4",
                                "Semicolon-delimited list of CIDR notation entries specifying what networks are considered \"private\" (default value based on RFC1918 and RFC3330)",
                                MCA_BASE_VAR_TYPE_STRING, NULL, 0, MCA_BASE_VAR_FLAG_SETTABLE,
                                OPAL_INFO_LVL_3, MCA_BASE_VAR_SCOPE_ALL_EQ,
                                &opal_net_private_ipv4);
    if (0 > ret) {
        return ret;
    }
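
    /*
     * Illustrative example: a site that only treats RFC1918 space as
     * private could override the default with something like
     *     --mca opal_net_private_ipv4 "10.0.0.0/8;172.16.0.0/12;192.168.0.0/16"
     * (the specific value is hypothetical; any semicolon-delimited CIDR
     * list is accepted, as described above).
     */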

    opal_set_max_sys_limits = NULL;
    ret = mca_base_var_register("opal", "opal", NULL, "set_max_sys_limits",
                                "Set the specified system-imposed limits to the specified value, including \"unlimited\". "
                                "Supported params: core, filesize, maxmem, openfiles, stacksize, maxchildren",
                                MCA_BASE_VAR_TYPE_STRING, NULL, 0, MCA_BASE_VAR_FLAG_SETTABLE,
                                OPAL_INFO_LVL_3, MCA_BASE_VAR_SCOPE_ALL_EQ,
                                &opal_set_max_sys_limits);
    if (0 > ret) {
        return ret;
    }
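
    /*
     * Note: the string registered above is only stored here; the actual
     * parsing and limit-setting happen later in OPAL's system-limits
     * utility code.  An example value (format assumed; see that code for
     * the authoritative syntax) might pair a supported limit name with a
     * value, e.g. "stacksize:unlimited".
     */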

    ret = mca_base_var_register("opal", "opal", NULL, "built_with_cuda_support",
                                "Whether CUDA GPU buffer support is built into library or not",
                                MCA_BASE_VAR_TYPE_BOOL, NULL, 0, MCA_BASE_VAR_FLAG_DEFAULT_ONLY,
                                OPAL_INFO_LVL_4, MCA_BASE_VAR_SCOPE_CONSTANT,
                                &opal_built_with_cuda_support);
    if (0 > ret) {
        return ret;
    }

    /* Current default is to enable CUDA support if it is built into library */
    opal_cuda_support = opal_built_with_cuda_support;
    ret = mca_base_var_register("opal", "opal", NULL, "cuda_support",
                                "Whether CUDA GPU buffer support is enabled or not",
                                MCA_BASE_VAR_TYPE_BOOL, NULL, 0, MCA_BASE_VAR_FLAG_SETTABLE,
                                OPAL_INFO_LVL_3, MCA_BASE_VAR_SCOPE_ALL_EQ,
                                &opal_cuda_support);
    if (0 > ret) {
        return ret;
    }
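
    /*
     * opal_cuda_support defaults to whatever was compiled in
     * (opal_built_with_cuda_support above).  It is settable at run time,
     * but enabling it presumably has no effect in a build without CUDA
     * support.
     */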

    /* Leave pinned parameter */
    opal_leave_pinned = -1;
    ret = mca_base_var_register("ompi", "mpi", NULL, "leave_pinned",
                                "Whether to use the \"leave pinned\" protocol or not.  Enabling this setting can help bandwidth performance when repeatedly sending and receiving large messages with the same buffers over RDMA-based networks (0 = do not use \"leave pinned\" protocol, 1 = use \"leave pinned\" protocol, -1 = allow network to choose at runtime).",
                                MCA_BASE_VAR_TYPE_INT, NULL, 0, 0,
                                OPAL_INFO_LVL_9,
                                MCA_BASE_VAR_SCOPE_READONLY,
                                &opal_leave_pinned);
    mca_base_var_register_synonym(ret, "opal", "opal", NULL, "leave_pinned",
                                  MCA_BASE_VAR_SYN_FLAG_DEPRECATED);

    opal_leave_pinned_pipeline = false;
    ret = mca_base_var_register("ompi", "mpi", NULL, "leave_pinned_pipeline",
                                "Whether to use the \"leave pinned pipeline\" protocol or not.",
                                MCA_BASE_VAR_TYPE_BOOL, NULL, 0, 0,
                                OPAL_INFO_LVL_9,
                                MCA_BASE_VAR_SCOPE_READONLY,
                                &opal_leave_pinned_pipeline);
    mca_base_var_register_synonym(ret, "opal", "opal", NULL, "leave_pinned_pipeline",
                                  MCA_BASE_VAR_SYN_FLAG_DEPRECATED);

    if (opal_leave_pinned > 0 && opal_leave_pinned_pipeline) {
        opal_leave_pinned_pipeline = 0;
        opal_show_help("help-opal-runtime.txt",
" mpi-params:leave-pinned-and-pipeline-selected " ,
true ) ;
}
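
    /*
     * Illustrative example: the protocol is typically enabled with
     *     --mca mpi_leave_pinned 1
     * The opal_leave_pinned synonym registered above is still accepted
     * but marked as deprecated.
     */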

#if OPAL_ENABLE_TIMING
    opal_timing_sync_file = NULL;
    (void) mca_base_var_register("opal", "opal", NULL, "timing_sync_file",
                                 "Clock synchronisation information generated by the mpisync tool.  You do not need to set this if you use the mpirun_prof tool.",
                                 MCA_BASE_VAR_TYPE_STRING, NULL, 0, 0,
                                 OPAL_INFO_LVL_9, MCA_BASE_VAR_SCOPE_ALL,
                                 &opal_timing_sync_file);
    if (opal_timing_clocksync_read(opal_timing_sync_file)) {
        opal_output(0, "Cannot read file %s containing clock synchronisation information\n", opal_timing_sync_file);
    }

    opal_timing_output = NULL;
    (void) mca_base_var_register("opal", "opal", NULL, "timing_output",
                                 "The name of the output file for timing information.  If this parameter is not set, output is directed to the OPAL debug channel.",
                                 MCA_BASE_VAR_TYPE_STRING, NULL, 0, 0,
                                 OPAL_INFO_LVL_9, MCA_BASE_VAR_SCOPE_ALL,
                                 &opal_timing_output);

    opal_timing_overhead = true;
    (void) mca_base_var_register("opal", "opal", NULL, "timing_overhead",
                                 "The timing framework introduces additional overhead (mostly mallocs). "
                                 "The time spent in such costly routines is measured and may be accounted for "
                                 "(subtracted from timestamps).  'true' means consider the overhead, 'false' means ignore it (default: true).",
                                 MCA_BASE_VAR_TYPE_BOOL, NULL, 0, 0,
                                 OPAL_INFO_LVL_9, MCA_BASE_VAR_SCOPE_ALL,
                                 &opal_timing_overhead);
#endif

    opal_warn_on_fork = true;
    (void) mca_base_var_register("ompi", "mpi", NULL, "warn_on_fork",
                                 "If nonzero, issue a warning if program forks under conditions that could cause system errors",
                                 MCA_BASE_VAR_TYPE_BOOL, NULL, 0, 0,
                                 OPAL_INFO_LVL_9,
                                 MCA_BASE_VAR_SCOPE_READONLY,
                                 &opal_warn_on_fork);

    opal_abort_delay = 0;
    ret = mca_base_var_register("opal", "opal", NULL, "abort_delay",
                                "If nonzero, print out an identifying message when abort operation is invoked (hostname, PID of the process that called abort) and delay for that many seconds before exiting (a negative delay value means to never abort).  This allows attaching of a debugger before quitting the job.",
                                MCA_BASE_VAR_TYPE_INT, NULL, 0, 0,
                                OPAL_INFO_LVL_5,
                                MCA_BASE_VAR_SCOPE_READONLY,
                                &opal_abort_delay);
    if (0 > ret) {
        return ret;
    }
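
    /*
     * Illustrative example:
     *     --mca opal_abort_delay 60
     * gives roughly a minute to attach a debugger to an aborting process;
     * a negative value makes the process wait indefinitely, as described
     * in the help string above.
     */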

    opal_abort_print_stack = false;
    ret = mca_base_var_register("opal", "opal", NULL, "abort_print_stack",
                                "If nonzero, print out a stack trace when abort is invoked",
                                MCA_BASE_VAR_TYPE_BOOL, NULL, 0,
                                /* If we do not have stack trace
                                   capability, make this a constant
                                   MCA variable */
#if OPAL_WANT_PRETTY_PRINT_STACKTRACE
                                0,
                                OPAL_INFO_LVL_5,
                                MCA_BASE_VAR_SCOPE_READONLY,
#else
                                MCA_BASE_VAR_FLAG_DEFAULT_ONLY,
                                OPAL_INFO_LVL_5,
                                MCA_BASE_VAR_SCOPE_CONSTANT,
#endif
                                &opal_abort_print_stack);
    if (0 > ret) {
        return ret;
    }

    /* register the envar-forwarding params */
    (void) mca_base_var_register("opal", "mca", "base", "env_list",
                                 "Set SHELL env variables",
                                 MCA_BASE_VAR_TYPE_STRING, NULL, 0, 0, OPAL_INFO_LVL_3,
                                 MCA_BASE_VAR_SCOPE_READONLY, &mca_base_env_list);

    mca_base_env_list_sep = MCA_BASE_ENV_LIST_SEP_DEFAULT;
    (void) mca_base_var_register("opal", "mca", "base", "env_list_delimiter",
                                 "Set SHELL env variables delimiter. Default: semicolon ';'",
                                 MCA_BASE_VAR_TYPE_STRING, NULL, 0, 0, OPAL_INFO_LVL_3,
                                 MCA_BASE_VAR_SCOPE_READONLY, &mca_base_env_list_sep);

    /* Set the OMPI_MCA_mca_base_env_list variable; it might not be set yet
     * if the MCA variable was taken from an AMCA conf file.  It needs to be
     * set here because mca_base_var_process_env_list() is called from
     * schizo_ompi.c only when this environment variable is set.
     */
    if (NULL != mca_base_env_list) {
        char *name = NULL;
        (void) mca_base_var_env_name("mca_base_env_list", &name);
        if (NULL != name) {
            opal_setenv(name, mca_base_env_list, false, &environ);
            free(name);
        }
    }

    /* Register the internal MCA variable mca_base_env_list_internal.  It can
     * be set only while parsing an AMCA conf file and contains SHELL env
     * variables specified via -x there.  Its format is the same as that of
     * mca_base_env_list.
     */
    (void) mca_base_var_register("opal", "mca", "base", "env_list_internal",
                                 "Store SHELL env variables from amca conf file",
                                 MCA_BASE_VAR_TYPE_STRING, NULL, 0, MCA_BASE_VAR_FLAG_INTERNAL, OPAL_INFO_LVL_3,
                                 MCA_BASE_VAR_SCOPE_READONLY, &mca_base_env_list_internal);

    /* The ddt engine has a few parameters */
    ret = opal_datatype_register_params();
    if (OPAL_SUCCESS != ret) {
        return ret;
    }

    /* dss has parameters */
    ret = opal_dss_register_vars();
    if (OPAL_SUCCESS != ret) {
        return ret;
    }

    return OPAL_SUCCESS;
}

int opal_deregister_params(void)
{
    opal_register_done = false;

    return OPAL_SUCCESS;
}