Often, orte/util/show_help.h is included although none of its functionality
is required -- what is actually needed is most often opal_output.h or
orte/mca/rml/rml_types.h.
Please see orte_show_help_replacement.sh, committed next.
- Local compilation (Linux/x86_64) w/ -Wimplicit-function-declaration
actually showed two *missing* #include "orte/util/show_help.h"
in orte/mca/odls/base/odls_base_default_fns.c and
in orte/tools/orte-top/orte-top.c
Manually added these.
Let's let MTT have the last word.
This commit was SVN r20557.
have different sizes:
1. Do not modify the read-only parameters of the Fortran MPI interface (i.e.,
be standard compliant).
2. When Fortran integers are 64 bits long, don't generate invalid code (see
the conversion sketch below).
Thanks to Christoph van Wullen for the bug report.
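As a rough illustration of point 2 (function and variable names here are
hypothetical, not the actual Open MPI binding code): when MPI_Fint is wider
than C int, a Fortran wrapper has to copy the caller's INTEGER array into a
temporary int array rather than casting the read-only input and modifying it
in place.
{{{
/* Hypothetical sketch only -- not the actual Open MPI Fortran binding code. */
#include <stdlib.h>
#include <mpi.h>

void example_cart_rank_f(MPI_Fint *comm_f, MPI_Fint *coords_f,
                         MPI_Fint *rank_f, MPI_Fint *ierr)
{
    MPI_Comm comm = MPI_Comm_f2c(*comm_f);
    int ndims, rank;

    MPI_Cartdim_get(comm, &ndims);
    /* Copy the Fortran INTEGER coords into a temporary C int array so the
     * read-only input is never modified and no mis-sized stores happen when
     * sizeof(MPI_Fint) != sizeof(int).  (Error handling omitted.) */
    int *coords_c = malloc(ndims * sizeof(int));
    for (int i = 0; i < ndims; ++i) {
        coords_c[i] = (int) coords_f[i];
    }
    *ierr = (MPI_Fint) MPI_Cart_rank(comm, coords_c, &rank);
    *rank_f = (MPI_Fint) rank;
    free(coords_c);
}
}}}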
This commit was SVN r20420.
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
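To make the "mixed bag" idea concrete, here is a purely conceptual sketch
(hypothetical types and names, not the real op framework API): each intrinsic
MPI_Op carries a table of per-datatype function pointers, the framework base
fills in default implementations, and a component that can run overrides only
the subset it accelerates.
{{{
/* Conceptual sketch only -- hypothetical names, not the real ompi op API. */
typedef void (op_fn_t)(const void *in, void *inout, int count);

struct op_fn_table {
    op_fn_t *sum_float;     /* back-end for (MPI_SUM, MPI_FLOAT)  */
    op_fn_t *sum_double;    /* back-end for (MPI_SUM, MPI_DOUBLE) */
};

/* Framework-base defaults: plain C loops. */
static void base_sum_float(const void *in, void *inout, int count)
{
    const float *a = (const float *) in;
    float *b = (float *) inout;
    for (int i = 0; i < count; ++i) {
        b[i] += a[i];
    }
}

static void base_sum_double(const void *in, void *inout, int count)
{
    const double *a = (const double *) in;
    double *b = (double *) inout;
    for (int i = 0; i < count; ++i) {
        b[i] += a[i];
    }
}

static void gpu_sum_float(const void *in, void *inout, int count)
{
    /* Pretend this offloads to a device; fall back to the CPU loop here. */
    base_sum_float(in, inout, count);
}

static void select_op_fns(struct op_fn_table *table, int gpu_usable)
{
    /* Start with the framework-base defaults... */
    table->sum_float  = base_sum_float;
    table->sum_double = base_sum_double;

    /* ...then let a component that elected itself override only the subset
     * it supports -- e.g. a GPU that handles single precision only, giving
     * a "mixed bag" of pointers from different sources. */
    if (gpu_usable) {
        table->sum_float = gpu_sum_float;
    }
}
}}}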
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
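For reference, a minimal single-process use of the new function looks like
this; MPI_REDUCE_LOCAL combines inbuf into inoutbuf element-wise with the
given op, with no communication:
{{{
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int in[4]    = {1, 2, 3, 4};
    int inout[4] = {10, 20, 30, 40};

    MPI_Init(&argc, &argv);
    /* inout[i] = in[i] + inout[i], applied locally -- no communication */
    MPI_Reduce_local(in, inout, 4, MPI_INT, MPI_SUM);
    printf("%d %d %d %d\n", inout[0], inout[1], inout[2], inout[3]); /* 11 22 33 44 */
    MPI_Finalize();
    return 0;
}
}}}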
This commit was SVN r20280.
and ::SEEK_SET (duh); that's why it's listed in constants.h. So put
that back and make it (static const int) rather than extern, and then
remove the instantiation from mpicxx.cc. Ditto for the other 2.
This commit was SVN r20251.
The following Trac tickets were found above:
Ticket 623 --> https://svn.open-mpi.org/trac/ompi/ticket/623
in mpicxx.h a while ago, but somehow accidentally left "extern const
int" for SEEK_SET (and friends) in constants.h. This commit removes
the extraneous "extern" versions.
This commit was SVN r20250.
The following Trac tickets were found above:
Ticket 623 --> https://svn.open-mpi.org/trac/ompi/ticket/623
pondering about this problem, we came to the conclusion that the best approach
is to keep what we had before (i.e. the original approach).
The main reason for this is being nice to tool developers. In the current
incarnation, they can catch either the Fortran calls or the C calls. If they
intercept both, then they will have to figure out how to cope with the double
calls (as your example highlights).
Here is the behavior Open MPI will stick to:
Fortran MPI -> C MPI
Fortran PMPI -> C MPI
However, there is another possible approach. It might avoid the double calls
while remaining friendly to tool writers. This approach would do:
Fortran MPI -> C MPI
Fortran PMPI -> C PMPI (note that the Fortran profiling layer would call the
C profiling layer)
Unfortunately, we will have to heavily modify all files in the Fortran
interface layer in order to support this approach.
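To illustrate the retained behavior (the symbol names below are illustrative;
real Fortran name mangling varies by compiler): both Fortran entry points
forward to the C MPI symbol, so a tool intercepting the C MPI layer sees every
call, but sees it twice if it also intercepts the Fortran layer. The
alternative approach would forward the Fortran PMPI entry to the C PMPI symbol
instead.
{{{
/* Illustrative sketch; real Fortran bindings and name mangling differ. */
#include <mpi.h>

void mpi_barrier_(MPI_Fint *comm_f, MPI_Fint *ierr)    /* Fortran MPI  */
{
    *ierr = (MPI_Fint) MPI_Barrier(MPI_Comm_f2c(*comm_f));   /* -> C MPI */
}

void pmpi_barrier_(MPI_Fint *comm_f, MPI_Fint *ierr)   /* Fortran PMPI */
{
    /* Current behavior: also -> C MPI.  The alternative approach would
     * call PMPI_Barrier() here instead. */
    *ierr = (MPI_Fint) MPI_Barrier(MPI_Comm_f2c(*comm_f));
}
}}}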
This commit was SVN r20079.
1. Fix a bug in pml_ob1_recvreq/sendreq.c: a buffer was marked as defined
after the request had already been released.
2. Complete memchecker support for collective functions.
3. Rename the misspelled memchecker functions: '*_isaddressible' becomes
'*_isaddressable'.
This commit was SVN r20043.
I'm unable to split it into two parts, my patch and Edgar's. So I just updated
the copyright information for both of us.
What this patch does:
- it uses the unexpected queue created by commit r19562 to dispatch
unexpected messages to the right communicator (once this communicator
is created and initialized).
- delay the PML comm_add until we have the context_id for the new communicator.
- only do the PML comm_add on processes that really belong to the new
communicator. Please read the lengthy comment in the source code for the
reason behind this.
This commit was SVN r19929.
The following SVN revision numbers were found above:
r19562 --> open-mpi/ompi@acd3406aa7
unconditionally, which can result in a flood of messages to the user
if all MPI processes invoke abort. Additionally, some users were
confused because they saw the MPI_ABORT opal_output() messages from
''some'' MPI processes, but not ''all'' of them (despite the fact that
every MPI process supposedly invoked MPI_ABORT). The reason is that
calling MPI_ABORT triggers ORTE to kill all MPI processes, so it's a
race condition as to a) whether all MPI processes actually invoke
MPI_ABORT, and/or b) whether every process is able to opal_output()
before it is killed.
This commit does two simple things:
* Now use orte_show_help() for the MPI_ABORT message so that the
messages are aggregated (see the call sketch below).
* Add a note in the message that calling MPI_ABORT kills all
processes, so you might not see all output, yadda yadda yadda.
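A hedged sketch of the call pattern (the help file name, topic, and
placeholder arguments below are illustrative, not necessarily what is in the
tree): orte_show_help() takes a help text file, a topic within it, a flag for
the standard error header, and arguments that fill the topic's format
placeholders, and duplicate messages get aggregated instead of flooding the
user.
{{{
/* Illustrative sketch -- the file name, topic, and arguments are made up. */
#include <stdbool.h>
#include "orte/util/show_help.h"

static void report_abort(int rank, int errcode)
{
    orte_show_help("help-mpi-api.txt",   /* help text file (illustrative) */
                   "mpi-abort",          /* topic within that file        */
                   true,                 /* prepend the standard header   */
                   rank, errcode);       /* fill the topic's placeholders */
}
}}}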
This commit was SVN r19735.
This fixes trac:1477.
Help provided by Jeff and Terry.
This commit was SVN r19533.
The following Trac tickets were found above:
Ticket 1477 --> https://svn.open-mpi.org/trac/ompi/ticket/1477
MPI::SEEK_SET and friends, suggested by Doug Gregor. This way allows
users to utilize SEEK_SET in a case statement, which they could not do
with our previous method.
This commit was SVN r19494.
way".
Don't modify coords in the top-level API function because coords is an
IN variable. Instead, as Nysal noted, the real cause of the problem
was a missing ! down in topo_base_cart_rank.c. Put a comment down in
topo_base_cart_rank.c explaining what's going on so that the code is
not so cryptic.
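For context, a hedged sketch of the kind of logic involved (names are
illustrative, not the actual topo_base_cart_rank.c code): the rank is computed
from a local copy of each coordinate, wrapping only periodic dimensions, so
the caller's IN coords array is never written to.
{{{
/* Illustrative sketch; not the actual Open MPI topo code. */
static int cart_coords_to_rank(int ndims, const int *dims,
                               const int *periods, const int *coords)
{
    int rank = 0;

    for (int d = 0; d < ndims; ++d) {
        int c = coords[d];            /* local copy; coords stays untouched */

        if (periods[d]) {             /* periodic dimension: wrap into range */
            c %= dims[d];
            if (c < 0) {
                c += dims[d];
            }
        }
        rank = rank * dims[d] + c;    /* row-major accumulation */
    }
    return rank;
}
}}}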
Refs trac:1363.
This commit was SVN r19487.
The following Trac tickets were found above:
Ticket 1363 --> https://svn.open-mpi.org/trac/ompi/ticket/1363
* Various changes to enable 0-dimensional cartesian communicators (a small
  usage sketch follows this list):
  * Set various mtc_* members to NULL when there are 0 dimensions (and
    don't bother trying to memcpy these arrays when duplicating the
    communicator -- because they're NULL)
  * adjust topo_base_cart_sub to correctly handle 0 dimensions
    (simplified it a bit)
  * adjust a few error codes to return ERR_OUT_OF_RESOURCE
  * adjust error checking of CART_CREATE, CART_RANK
* Allow MPI_GRAPH_CREATE to accept 0 == nnodes.
* Bump reported MPI version in mpi.h to 2.1
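A hedged usage sketch of what the 0-dimension support permits (assuming the
MPI 2.1 semantics that a zero-dimensional topology contains a single process
and the remaining processes get MPI_COMM_NULL):
{{{
#include <mpi.h>

void zero_dim_cart_example(void)
{
    MPI_Comm cart;
    int dims[1] = {0}, periods[1] = {0};   /* contents irrelevant: ndims == 0 */
    int ndims = -1;

    MPI_Cart_create(MPI_COMM_WORLD, 0, dims, periods, 0, &cart);
    if (MPI_COMM_NULL != cart) {           /* only part of the grid gets a comm */
        MPI_Cartdim_get(cart, &ndims);     /* reports 0 dimensions */
        MPI_Comm_free(&cart);
    }
}
}}}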
This commit was SVN r19461.
The following Trac tickets were found above:
Ticket 1236 --> https://svn.open-mpi.org/trac/ompi/ticket/1236
for the F90 type create functions to the requirements of the MPI 2.1 standard.
Advice to implementors. An application may often repeat a call to
MPI_TYPE_CREATE_F90_xxxx with the same combination of (xxxx,p,r).
The application is not allowed to free the returned predefined, unnamed
datatype handles. To prevent the creation of a potentially huge amount of
handles, the MPI implementation should return the same datatype handle for
the same (REAL/COMPLEX/INTEGER,p,r) combination. Checking for the
combination (p,r) in the preceding call to MPI_TYPE_CREATE_F90_xxxx and
using a hash-table to find formerly generated handles should limit the
overhead of finding a previously generated datatype with same combination
of (xxxx,p,r). (End of advice to implementors.)
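A minimal, self-contained sketch of the lookup that advice describes (the
names and the fixed-size table are illustrative; inside Open MPI the creation
call would be the internal constructor, and the standard suggests a hash table
rather than a linear scan):
{{{
#include <mpi.h>

#define F90_REAL_CACHE_MAX 64            /* illustrative bound */

static struct { int p, r; MPI_Datatype type; } f90_real_cache[F90_REAL_CACHE_MAX];
static int f90_real_cache_used = 0;

static MPI_Datatype cached_create_f90_real(int p, int r)
{
    /* Return the previously created handle for this (p,r), if any. */
    for (int i = 0; i < f90_real_cache_used; ++i) {
        if (f90_real_cache[i].p == p && f90_real_cache[i].r == r) {
            return f90_real_cache[i].type;
        }
    }

    MPI_Datatype newtype;
    MPI_Type_create_f90_real(p, r, &newtype);
    if (f90_real_cache_used < F90_REAL_CACHE_MAX) {
        f90_real_cache[f90_real_cache_used].p = p;
        f90_real_cache[f90_real_cache_used].r = r;
        f90_real_cache[f90_real_cache_used].type = newtype;
        f90_real_cache_used++;
    }
    return newtype;
}
}}}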
This commit fixes trac:1239, and #712.
This commit was SVN r19458.
The following Trac tickets were found above:
Ticket 1239 --> https://svn.open-mpi.org/trac/ompi/ticket/1239
Possibly fixes CID 417
* ensure that inner members are initialized by the default constructors,
using the same syntax across all classes
* remove some member variables that aren't used anymore
* "initialize" the inner MPI_Status in the default constructor for
MPI::Status by calling the default constructor mpi_status(). This
may or may not silence CID 417; we'll see what happens in
subsequent Coverity Prevent runs.
This commit was SVN r19228.
* Make the creation of the build dir for the man pages a bit more
robust (thanks to suggestions from Ralf W.).
* Only distribute the .Xin files, not the .X man pages themselves.
* Make the .X files depend on opal_config.h so that if you re-run
configure and change opal_config.h (e.g., a new version), the man
pages should get rebuilt.
* Man pages are now cleaned with "distclean", not "maintainer-clean".
* Fix a typo in opal_crs.7in.
* Update make_dist_tarball to update "date" in the VERSION file.
* Make make_dist_tarball a bit friendlier to hg checkouts.
This commit was SVN r19219.