
Also added infrastructure to have developers write man pages in Markdown (vs. nroff). Pandoc >= v1.12 is used to convert those Markdown files into actual nroff man pages. Dist tarballs will contain the generated nroff man pages; we don't want to require users to have Pandoc installed. Anyone who builds Open MPI from a git clone will need to have Pandoc installed (similar to how we treat Flex). You can opt out of Open MPI's Pandoc-generated man pages by configuring Open MPI with --disable-man-pages. This will also disable "make dist" (i.e., "make dist" will error if you configured with --disable-man-pages). Also removed the old logic for re-generating man pages.

This commit also:

1. Includes a new man page, written in Markdown (ompi/mpi/man/man5/MPI_T.5.md), that contains Open MPI-specific information about MPI_T.
2. Includes a converted ompi/mpi/man/man3/MPI_T_init_thread.3.md (from MPI_T_init_thread.3in, i.e., nroff) just to show that Markdown can be used throughout the Open MPI code base for man pages.
3. Makes the Makefiles in ompi/mpi/man/man?/ full-fledged Makefile.am's (vs. Makefile.extras that are designed to be included in ompi/Makefile.am). It is more convenient to test the generation/installation of man pages when you can "make" and "make install" in their respective directories (vs. doing a build/install of the entire ompi project).
4. Removes the logic from ompi/Makefile.am that re-generated man pages whenever opal_config.h changed.

Other man pages -- hopefully all of them! -- will be converted to Markdown over time.

Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
# NAME

Open MPI's MPI_T interface - General information

# DESCRIPTION

There are a few Open MPI-specific notes worth mentioning about its `MPI_T` interface implementation.

## MPI_T Control Variables

Open MPI's implementation of the `MPI_T` Control Variable ("cvar") APIs is an interface to Open MPI's underlying Modular Component Architecture (MCA) parameters/variables. Simply put: using the `MPI_T` cvar interface is another mechanism to get/set Open MPI MCA parameters.
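
Because every MCA parameter shows up as a cvar, the `MPI_T` interface can also be used to discover which MCA parameters are available. The following fragment is an illustrative sketch (not taken from the Open MPI man pages; fixed-size buffers and the lack of error handling are simplifications) that enumerates the cvars and prints their names and descriptions:

```c
int i, provided, num_cvars, name_len, desc_len;
int verbosity, bind, scope;
char name[256], desc[1024];
MPI_Datatype datatype;
MPI_T_enum enumtype;

MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

/* How many cvars (i.e., MCA parameters) are currently exposed? */
MPI_T_cvar_get_num(&num_cvars);

for (i = 0; i < num_cvars; ++i) {
    name_len = sizeof(name);
    desc_len = sizeof(desc);
    MPI_T_cvar_get_info(i, name, &name_len, &verbosity,
                        &datatype, &enumtype,
                        desc, &desc_len, &bind, &scope);
    printf("cvar %d: %s: %s\n", i, name, desc);
}

MPI_T_finalize();
```

The names reported here are the same MCA parameter names used with the other setting mechanisms described below.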

In order of precedence (highest to lowest), Open MPI provides the following mechanisms to set MCA parameters:

1. The `MPI_T` interface has the highest precedence. Specifically: values set via the `MPI_T` interface override all other settings (see the sketch after this list).
1. The `mpirun(1)` / `mpiexec(1)` command line (e.g., via the `--mca` option).
1. Environment variables.
1. Parameter files have the lowest precedence. Specifically: values set via parameter files can be overridden by any of the other MCA-parameter-setting mechanisms.
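
To make the precedence concrete, here is an illustrative fragment (not taken from the Open MPI man pages) that sets the `btl` MCA parameter via an environment variable and then overrides it through `MPI_T`. The `OMPI_MCA_btl` name assumes Open MPI's usual `OMPI_MCA_<param>` environment variable convention:

```c
int provided, index, count;
MPI_T_cvar_handle btl_handle;
char btl_value[64];

/* Environment variable: lower precedence than the MPI_T interface */
setenv("OMPI_MCA_btl", "tcp,self", 1);

MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

MPI_T_cvar_get_index("btl", &index);
MPI_T_cvar_handle_alloc(index, NULL, &btl_handle, &count);

/* MPI_T has the highest precedence: this value wins */
MPI_T_cvar_write(btl_handle, "self");

MPI_T_cvar_read(btl_handle, btl_value);
printf("btl: %s\n", btl_value);    /* expected to print "self" */

MPI_T_cvar_handle_free(&btl_handle);
MPI_T_finalize();
```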

## MPI initialization

An application may use the `MPI_T` interface before MPI is initialized to set MCA parameters. Setting MPI-level MCA parameters before MPI is initialized may affect _how_ MPI is initialized (e.g., by influencing which frameworks and components are selected).

The following example sets the `pml` and `btl` MCA parameters before invoking `MPI_Init(3)` in order to force a specific selection of PML and BTL components:

```c
int provided, index, count;
MPI_T_cvar_handle pml_handle, btl_handle;
char pml_value[64], btl_value[64];

/* MPI_T may be initialized independently of (and before) MPI itself */
MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

/* Look up the "pml" cvar and force selection of the ob1 PML */
MPI_T_cvar_get_index("pml", &index);
MPI_T_cvar_handle_alloc(index, NULL, &pml_handle, &count);
MPI_T_cvar_write(pml_handle, "ob1");

/* Look up the "btl" cvar and force the tcp, vader, and self BTLs */
MPI_T_cvar_get_index("btl", &index);
MPI_T_cvar_handle_alloc(index, NULL, &btl_handle, &count);
MPI_T_cvar_write(btl_handle, "tcp,vader,self");

/* Read the values back to confirm what was set */
MPI_T_cvar_read(pml_handle, pml_value);
MPI_T_cvar_read(btl_handle, btl_value);
printf("Set value of cvars: PML: %s, BTL: %s\n",
       pml_value, btl_value);

MPI_T_cvar_handle_free(&pml_handle);
MPI_T_cvar_handle_free(&btl_handle);

/* MPI initialization now picks up the values set above */
MPI_Init(NULL, NULL);

// ...

MPI_Finalize();

MPI_T_finalize();
```

Note that once MPI is initialized, most Open MPI cvars become read-only.

For example, after MPI is initialized, it is no longer possible to change the PML and BTL selections. This is because many of these MCA parameters are only used during MPI initialization; setting them after MPI has already been initialized would be meaningless anyway.
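
A program can observe this directly: `MPI_T_cvar_write` returns an error rather than `MPI_SUCCESS` for a cvar that can no longer be set. The fragment below is an illustrative sketch (not part of the example above) that tries to change the BTL selection after `MPI_Init` has already run:

```c
int err, provided, index, count;
MPI_T_cvar_handle btl_handle;

MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
MPI_Init(NULL, NULL);

/* Attempt to change the BTL selection after MPI is already initialized */
MPI_T_cvar_get_index("btl", &index);
MPI_T_cvar_handle_alloc(index, NULL, &btl_handle, &count);
err = MPI_T_cvar_write(btl_handle, "tcp,self");
if (MPI_SUCCESS != err) {
    /* Typically MPI_T_ERR_CVAR_SET_NOT_NOW or MPI_T_ERR_CVAR_SET_NEVER:
       the cvar is effectively read-only at this point */
    printf("btl can no longer be set (error %d)\n", err);
}

MPI_T_cvar_handle_free(&btl_handle);
MPI_Finalize();
MPI_T_finalize();
```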

## MPI_T Categories

Open MPI's `MPI_T` categories are organized hierarchically (a sketch that walks this hierarchy programmatically follows the list):

1. Layer (or "project"). There are two layers in Open MPI:
    * `ompi`: This layer contains cvars, pvars, and sub-categories related to MPI characteristics.
    * `opal`: This layer generally contains cvars, pvars, and sub-categories for lower-layer constructs, such as operating system and networking issues.
2. Framework or section.
    * In most cases, the next level in the hierarchy is the Open MPI MCA framework.
        * For example, you can find the `btl` framework under the `opal` layer (because it has to do with the underlying networking).
        * Additionally, the `pml` framework is under the `ompi` layer (because it has to do with MPI semantics of point-to-point messaging).
    * There are a few non-MCA-framework entities under each layer, however.
        * For example, there is an `mpi` section under both the `opal` and `ompi` layers for general/core MPI constructs.
3. Component.
    * If relevant, the third level in the hierarchy is the MCA component.
        * For example, the `tcp` component can be found under the `btl` framework in the `opal` layer.
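
To see how these categories nest on a particular installation, the category tree can be walked with the standard `MPI_T` category routines. The fragment below is an illustrative sketch (simplified buffers, no error handling) that prints each category's name along with how many cvars, pvars, and sub-categories it contains:

```c
int i, provided, num_cat, name_len, desc_len;
int num_cvars, num_pvars, num_subcats;
char name[256], desc[1024];

MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

MPI_T_category_get_num(&num_cat);
for (i = 0; i < num_cat; ++i) {
    name_len = sizeof(name);
    desc_len = sizeof(desc);
    MPI_T_category_get_info(i, name, &name_len, desc, &desc_len,
                            &num_cvars, &num_pvars, &num_subcats);
    printf("category %s: %d cvars, %d pvars, %d sub-categories\n",
           name, num_cvars, num_pvars, num_subcats);
}

MPI_T_finalize();
```

From there, `MPI_T_category_get_categories(3)` and `MPI_T_category_get_cvars(3)` return the indices needed to descend into a specific layer, framework, or component.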

# SEE ALSO

[`MPI_T_init_thread`(3)](MPI_T_init_thread.html),
[`MPI_T_finalize`(3)](MPI_T_finalize.html)