openmpi/ompi/mpiext/affinity/c/OMPI_Affinity_str.3in
Jeff Squyres 253444c6d0

== Highlights ==
1. New mpifort wrapper compiler: you can utilize mpif.h, use mpi, and use mpi_f08 through this one wrapper compiler (see the example after this list)
 1. mpif77 and mpif90 still exist, but are symlinks to mpifort and may be removed in a future release
 1. The mpi module has been re-implemented and is significantly "mo' bettah"
 1. The mpi_f08 module offers many, many improvements over mpif.h and the mpi module
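
For example, a single wrapper compiler now builds programs against any of the three interfaces (the source file names below are purely illustrative):
{{{
# illustrative file names; each program uses a different Fortran interface
shell$ mpifort hello_mpifh.f -o hello_mpifh
shell$ mpifort hello_usempi.f90 -o hello_usempi
shell$ mpifort hello_usempif08.f90 -o hello_usempif08
}}}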

This stuff is coming from a VERY long-lived mercurial branch (3 years!); it'll almost certainly take a few SVN commits and a bunch of testing before I get it correctly committed to the SVN trunk.

== More details ==

Craig Rasmussen and I have been working with the MPI-3 Fortran WG and Fortran J3 committees for a long, long time to make a prototype MPI-3 Fortran bindings implementation.  We think we're at a stable enough state to bring this stuff back to the trunk, with the goal of including it in OMPI v1.7.  

Special thanks go out to everyone who has been incredibly patient and helpful to us in this journey:

 * Rolf Rabenseifner/HLRS (mastermind/genius behind the entire MPI-3 Fortran effort)
 * The Fortran J3 committee
 * Tobias Burnus/gfortran
 * Tony !Goetz/Absoft
 * Terry !Donte/Oracle
 * ...and probably others whom I'm forgetting :-(

There are still opportunities for optimization in the mpi_f08 implementation, but by and large, it is as far along as it can be until Fortran compilers start implementing the new F08 dimension(..) syntax.

Note that gfortran is currently unsupported for the mpi_f08 module and the new mpi module.  gfortran users will a) fall back to the same mpi module implementation that is in OMPI v1.5.x, and b) not get the new mpi_f08 module.  The gfortran maintainers are actively working hard to add the necessary features to support both the new mpi_f08 module and the new mpi module implementations.  This will take some time.

As mentioned above, ompi/mpi/f77 and ompi/mpi/f90 no longer exist.  All the Fortran bindings implementations have been collated under ompi/mpi/fortran; each implementation has its own subdirectory:

{{{
ompi/mpi/fortran/
  base/               - glue code
  mpif-h/             - what used to be ompi/mpi/f77
  use-mpi-tkr/        - what used to be ompi/mpi/f90
  use-mpi-ignore-tkr/ - new mpi module implementation
  use-mpi-f08/        - new mpi_f08 module implementation
}}}

There's also a prototype 6-function-MPI implementation under use-mpi-f08-desc that emulates the new F08 dimension(..) syntax, which isn't fully available in Fortran compilers yet.  We did that to prove to ourselves that it can be done once the compilers fully support the syntax.  This directory/implementation will likely eventually replace the use-mpi-f08 version.

Other things that were done:

 * ompi_info grew a few new output fields to describe what level of Fortran support is included
 * Existing Fortran examples in examples/ were renamed; new mpi_f08 examples were added
 * The old Fortran MPI libraries were renamed:
   * libmpi_f77 -> libmpi_mpifh
   * libmpi_f90 -> libmpi_usempi
 * The configury for Fortran was consolidated and significantly slimmed down.  Note that the F77 env variable is now IGNORED for configure; you should only use FC. Example:
{{{
shell$ ./configure CC=icc CXX=icpc FC=ifort ...
}}}

All of this work was done in a Mercurial branch off the SVN trunk, and hosted at Bitbucket.  This branch has got to be one of OMPI's longest-running branches.  Its first commit was Tue Apr 07 23:01:46 2009 -0400 -- it's over 3 years old!  :-)  We think we've pulled in all relevant changes from the OMPI trunk (e.g., Fortran implementations of the new MPI-3 MPROBE stuff for mpif.h, use mpi, and use mpi_f08, and the recent Fujitsu Fortran patches).

I anticipate some instability when we bring this stuff into the trunk, simply because it touches a LOT of code in the MPI layer in the OMPI code base.  We'll try our best to make it as pain-free as possible, but please bear with us when it is committed.

This commit was SVN r26283.
2012-04-18 15:57:29 +00:00


.\" Copyright 2007-2010 Oracle and/or its affiliates. All rights reserved.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2010 Cisco Systems, Inc. All rights reserved.
.TH OMPI_Affinity_str 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
\fBOMPI_Affinity_str\fP \- Obtain prettyprint strings of processor affinity information for this process
.SH SYNTAX
.ft R
.SH C Syntax
.nf
#include <mpi.h>
#include <mpi-ext.h>
int OMPI_Affinity_str(ompi_affinity_fmt_type_t \fIfmt_type\fP,
                      char \fIompi_bound\fP[OMPI_AFFINITY_STRING_MAX],
                      char \fIcurrent_binding\fP[OMPI_AFFINITY_STRING_MAX],
                      char \fIexists\fP[OMPI_AFFINITY_STRING_MAX])
.fi
.SH Fortran Syntax
There is no Fortran binding for this function.
.
.SH C++ Syntax
There is no C++ binding for this function.
.
.SH INPUT PARAMETERS
.ft R
.TP 1i
fmt_type
An enum indicating how to format the returned ompi_bound and
current_binding strings. OMPI_AFFINITY_RSRC_STRING_FMT returns the
string as human-readable resource names, such as "socket 0, core 0".
OMPI_AFFINITY_LAYOUT_FMT returns ASCII art representing where this MPI
process is bound relative to the machine resource layout. For example,
"[. B][. .]" shows that the process that called the routine is bound
to socket 0, core 1 in a system with 2 sockets, each containing 2 cores.
See below for more output examples.
.
.SH OUTPUT PARAMETERS
.ft R
.TP 1i
ompi_bound
A prettyprint string describing what processor(s) Open MPI bound this
process to, or a string indicating that Open MPI did not bind this
process.
.
.TP 1i
current_binding
A prettyprint string describing what processor(s) this process is
currently bound to, or a string indicating that the process is bound
to all available processors (and is therefore considered "unbound").
.
.TP 1i
exists
A prettyprint string describing the available sockets and cores on
this host.
.SH DESCRIPTION
.ft R
Open MPI may bind a process to specific sockets and/or cores at
process launch time. This non-standard Open MPI function call returns
prettyprint information about three things:
.
.TP
Where Open MPI bound this process.
The string returned in
.B
ompi_bound
will either indicate that Open MPI did not bind this process to
anything, or it will contain a prettyprint description of the
processor(s) to which Open MPI bound this process.
.
.TP
Where this process is currently bound.
Regardless of whether Open MPI bound this process or not, another
entity may have bound it. The string returned in
.B current_binding
will indicate the
.I current
binding of this process, regardless of what Open MPI may have done
earlier. The string returned will either indicate that the process is
unbound (meaning that it is bound to all available processors) or it
will contain a prettyprint description of the sockets and cores to
which the process is currently bound.
.
.TP
What processors exist.
As a convenience to the user, the
.B
exists
string will contain a prettyprint description of the sockets and cores
that this process can see (which is
.I usually
all processors in the system).
.SH Examples
.ft R
\fBExample 1:\fP Print out process bindings using the resource string format.
.sp
.nf
int rank;
char ompi_bound[OMPI_AFFINITY_STRING_MAX];
char current_binding[OMPI_AFFINITY_STRING_MAX];
char exists[OMPI_AFFINITY_STRING_MAX];

MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
OMPI_Affinity_str(OMPI_AFFINITY_RSRC_STRING_FMT,
                  ompi_bound, current_binding, exists);
printf("rank %d: \\n"
       " ompi_bound: %s\\n"
       " current_binding: %s\\n"
       " exists: %s\\n",
       rank, ompi_bound, current_binding, exists);
...
.fi
.PP
Output of mpirun -np 2 -bind-to-core a.out:
.nf
rank 0:
ompi_bound: socket 0[core 0]
current_binding: socket 0[core 0]
exists: socket 0 has 4 cores
rank 1:
ompi_bound: socket 0[core 1]
current_binding: socket 0[core 1]
exists: socket 0 has 4 cores
.fi
.PP
Output of mpirun -np 2 -bind-to-socket a.out:
.nf
rank 0:
ompi_bound: socket 0[core 0-3]
current_binding: Not bound (or bound to all available processors)
exists: socket 0 has 4 cores
rank 1:
ompi_bound: socket 0[core 0-3]
current_binding: Not bound (or bound to all available processors)
exists: socket 0 has 4 cores
.fi
.sp
.br
\fBExample 2:\fP Print out process bindings using the layout string format.
.sp
.nf
int rank;
char ompi_bound[OMPI_AFFINITY_STRING_MAX];
char current_binding[OMPI_AFFINITY_STRING_MAX];
char exists[OMPI_AFFINITY_STRING_MAX];

MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
OMPI_Affinity_str(OMPI_AFFINITY_LAYOUT_FMT,
                  ompi_bound, current_binding, exists);
printf("rank %d: \\n"
       " ompi_bound: %s\\n"
       " current_binding: %s\\n"
       " exists: %s\\n",
       rank, ompi_bound, current_binding, exists);
...
.fi
.PP
Output of mpirun -np 2 -bind-to-core a.out:
.nf
rank 0:
ompi_bound: [B . . .]
current_binding: [B . . .]
exists: [. . . .]
rank 1:
ompi_bound: [. B . .]
current_binding: [. B . .]
exists: [. . . .]
.fi
.PP
Output of mpirun -np 2 -bind-to-socket a.out:
.nf
rank 0:
ompi_bound: [B B B B]
current_binding: [B B B B]
exists: [. . . .]
rank 1:
ompi_bound: [B B B B]
current_binding: [B B B B]
exists: [. . . .]
.fi
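.PP
The code fragments above can be compiled and launched along these
lines (the source file name is illustrative; wrap a fragment in a
main() routine first):
.nf
# "affinity_example.c" is a hypothetical file containing one of the
# fragments above inside main()
shell$ mpicc affinity_example.c -o a.out
shell$ mpirun -np 2 -bind-to-core a.out
.fi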
.SH See Also
.ft R
.nf
mpirun(1)
.fi