
Also added infrastructure to have developers write man pages in Markdown (vs. nroff). Pandoc >=v1.12 is used to convert those Markdown files into actual nroff man pages. Dist tarballs will contain generated nroff man pages; we don't want to require users to have Pandoc installed. Anyone who builds Open MPI from a git clone will need to have Pandoc installed (similar to how we treat Flex). You can opt out of Open MPI's Pandoc-generated man pages by configuring Open MPI with `--disable-man-pages`. This will also disable "make dist" (i.e., "make dist" will error if you configured with `--disable-man-pages`). Also removed the stuff to re-generate man pages.

This commit also:

1. Includes a new man page, written in Markdown (`ompi/mpi/man/man5/MPI_T.5.md`), that contains Open MPI-specific information about MPI_T.
2. Includes a converted `ompi/mpi/man/man3/MPI_T_init_thread.3.md` (from `MPI_T_init_thread.3in` -- i.e., nroff) just to show that Markdown can be used throughout the Open MPI code base for man pages.
3. Made the Makefiles in `ompi/mpi/man/man?/` be full-fledged Makefile.am's (vs. Makefile.extras that are designed to be included in `ompi/Makefile.am`). It is more convenient to test generation / installation of man pages when you can "make" and "make install" in their respective directories (vs. doing a build / install for the entire ompi project).
4. Removed logic from `ompi/Makefile.am` that re-generated man pages if `opal_config.h` changes.

Other man pages -- hopefully all of them! -- will be converted to Markdown over time.

Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
# NAME

Open MPI's MPI_T interface - General information
# DESCRIPTION
There are a few Open MPI-specific notes worth mentioning about its MPI_T interface implementation.
## MPI_T Control Variables
Open MPI's implementation of the MPI_T Control Variable ("cvar") APIs is an interface to Open MPI's underlying Modular Component Architecture (MCA) parameters/variables. Simply put: using the MPI_T cvar interface is another mechanism to get/set Open MPI MCA parameters.
In order of precedence (highest to lowest), Open MPI provides the following mechanisms to set MCA parameters:
1. The MPI_T interface has the highest precedence. Specifically: values set via the MPI_T interface will override all other settings.
2. The `mpirun(1)` / `mpiexec(1)` command line (e.g., via the `--mca` parameter).
3. Environment variables.
4. Parameter files have the lowest precedence. Specifically: values set via parameter files can be overridden by any of the other MCA-variable setting mechanisms.
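To make the precedence rules concrete, here is a hedged shell sketch (the program name `./my_mpi_app` is hypothetical) that sets the same `btl` MCA parameter via two of the mechanisms above. It assumes Open MPI's `OMPI_MCA_<param>` environment-variable naming convention:

```
# Lower precedence: environment variable, using Open MPI's
# OMPI_MCA_<param> naming convention
export OMPI_MCA_btl=self,tcp

# Higher precedence: the mpirun command line overrides the
# environment variable, so the job runs with btl=self,vader
mpirun --mca btl self,vader -n 2 ./my_mpi_app
```

A value written through the MPI_T cvar interface from inside the application would, in turn, override both of these settings.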
## MPI initialization
An application may use the MPI_T interface before MPI is initialized to set MCA parameters. Setting MPI-level MCA parameters before MPI is initialized may affect how MPI is initialized (e.g., by influencing which frameworks and components are selected).
The following example sets the `pml` and `btl` MCA params before invoking `MPI_Init(3)` in order to force a specific selection of PML and BTL components:
```c
int provided, index, count;
MPI_T_cvar_handle pml_handle, btl_handle;
char pml_value[64], btl_value[64];

MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

/* Set the "pml" MCA param to "ob1" */
MPI_T_cvar_get_index("pml", &index);
MPI_T_cvar_handle_alloc(index, NULL, &pml_handle, &count);
MPI_T_cvar_write(pml_handle, "ob1");

/* Set the "btl" MCA param to "tcp,vader,self" */
MPI_T_cvar_get_index("btl", &index);
MPI_T_cvar_handle_alloc(index, NULL, &btl_handle, &count);
MPI_T_cvar_write(btl_handle, "tcp,vader,self");

/* Read back the values that were just set */
MPI_T_cvar_read(pml_handle, pml_value);
MPI_T_cvar_read(btl_handle, btl_value);
printf("Set value of cvars: PML: %s, BTL: %s\n",
       pml_value, btl_value);

MPI_T_cvar_handle_free(&pml_handle);
MPI_T_cvar_handle_free(&btl_handle);

MPI_Init(NULL, NULL);
// ...
MPI_Finalize();
MPI_T_finalize();
```
Note that once MPI is initialized, most Open MPI cvars become read-only.
For example, after MPI is initialized, it is no longer possible to set the PML and BTL selection mechanisms. This is because many of these MCA parameters are only used during MPI initialization; setting them after MPI has already been initialized would be meaningless, anyway.
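A hedged sketch of what this looks like in practice (this program is illustrative, not part of the man page; it assumes a working MPI installation and that the write of the `pml` cvar after initialization fails with a non-`MPI_SUCCESS` return code, such as `MPI_T_ERR_CVAR_SET_NOT_NOW` or `MPI_T_ERR_CVAR_SET_NEVER` from the MPI standard):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int provided, index, count;
    MPI_T_cvar_handle handle;

    /* MPI is initialized first, so selection cvars have
       already been consumed */
    MPI_Init(&argc, &argv);
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

    MPI_T_cvar_get_index("pml", &index);
    MPI_T_cvar_handle_alloc(index, NULL, &handle, &count);

    /* Expected to fail: the PML was selected during MPI_Init */
    int ret = MPI_T_cvar_write(handle, "ob1");
    if (MPI_SUCCESS != ret) {
        fprintf(stderr, "pml cvar cannot be set after MPI_Init (ret=%d)\n",
                ret);
    }

    MPI_T_cvar_handle_free(&handle);
    MPI_T_finalize();
    MPI_Finalize();
    return 0;
}
```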
## MPI_T Categories
Open MPI's MPI_T categories are organized hierarchically:
- Layer (or "project"). There are two layers in Open MPI:
  - `ompi`: This layer contains cvars, pvars, and sub categories related to MPI characteristics.
  - `opal`: This layer generally contains cvars, pvars, and sub categories of lower-layer constructs, such as operating system issues, networking issues, etc.
- Framework or section.
  - In most cases, the next level in the hierarchy is the Open MPI MCA framework.
    - For example, you can find the `btl` framework under the `opal` layer (because it has to do with the underlying networking).
    - Additionally, the `pml` framework is under the `ompi` layer (because it has to do with MPI semantics of point-to-point messaging).
  - There are a few non-MCA-framework entities under the layer, however.
    - For example, there is an `mpi` section under both the `opal` and `ompi` layers for general/core MPI constructs.
- Component.
  - If relevant, the third level in the hierarchy is the MCA component.
    - For example, the `tcp` component can be found under the `btl` framework in the `opal` layer.
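The category hierarchy above can be explored programmatically with the standard MPI_T category query routines. This is a hedged sketch, not part of the man page; it assumes a working MPI installation, and uses only the `MPI_T_category_get_num` and `MPI_T_category_get_info` calls defined by the MPI standard:

```c
#include <stdio.h>
#include <mpi.h>

int main(void)
{
    int provided, num_cat;

    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_T_category_get_num(&num_cat);

    for (int i = 0; i < num_cat; ++i) {
        char name[128], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int ncvars, npvars, nsubcats;

        /* Query each category; nsubcats > 0 indicates a layer or
           framework with nested categories beneath it */
        MPI_T_category_get_info(i, name, &name_len, desc, &desc_len,
                                &ncvars, &npvars, &nsubcats);
        printf("%s: %d cvars, %d pvars, %d sub-categories\n",
               name, ncvars, npvars, nsubcats);
    }

    MPI_T_finalize();
    return 0;
}
```

Running this against Open MPI should surface the `ompi` and `opal` layers at the top of the hierarchy, with frameworks and components nested beneath them.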