// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
//                         University Research and Technology
//                         Corporation.  All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
//                         of Tennessee Research Foundation.  All rights
//                         reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
//                         University of Stuttgart.  All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
//                         All rights reserved.
// Copyright (c) 2006-2009 Cisco Systems, Inc.  All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
Two major things in this commit:

 * New "op" MPI layer framework
 * Addition of the proposed MPI_REDUCE_LOCAL function (for MPI-2.2)

= Op framework =

Add a new "op" framework in the ompi layer. This framework replaces
the hard-coded back-end functions for (MPI_Op, MPI_Datatype) tuples on
pre-defined MPI_Ops, allowing components and modules to provide the
back-end functions instead. The intent is that components can be
written to take advantage of hardware acceleration (GPU, FPGA,
specialized CPU instructions, etc.). As with other frameworks,
components discover at run-time whether they can be used and, if so,
elect themselves for selection (or disqualify themselves if they
cannot run). If no specialized hardware is available, a default set of
functions is used automatically.

This framework is ''not'' used for user-defined MPI_Ops.

The new op framework is similar to the existing coll framework in that
the final set of function pointers used for any given intrinsic MPI_Op
can be a mixed bag, potentially coming from multiple different op
modules. This allows for hardware that supports only some of the
operations (e.g., a GPU that supports only single-precision
operations).

All the hard-coded back-end functions for (MPI_Op, MPI_Datatype)
tuples still exist but, unlike coll, they live in the framework base
(vs. a separate "basic" component) and are used automatically if no
component is found at runtime that provides a module with the
necessary function pointers.

There is an "example" op component that will hopefully be useful to
those writing real op components. It is currently .ompi_ignore'd so
that it doesn't impinge on other developers (it is somewhat chatty in
terms of opal_output() so that you can tell when its functions have
been invoked). See the README file in the example op component
directory. Developers of new op components are encouraged to look at
the following wiki pages:

 https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
 https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
 https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework

= MPI_REDUCE_LOCAL =

Part of the MPI-2.2 proposal listed here:

 https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24

is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework easy -- you can do it in serial rather than via parallel
reductions; see the sketch below). There's even a man page!

This commit was SVN r20280.
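As a concrete illustration of that serial-testing point, here is a
minimal sketch (not part of the commit; it assumes an MPI-2.2-capable
build of these C++ bindings) that exercises an intrinsic MPI_Op through
MPI_REDUCE_LOCAL with a single process and no communication:

{{{
#include <mpi.h>
#include <cassert>

int main(int argc, char* argv[])
{
    MPI::Init(argc, argv);

    int in[4]    = {1, 2, 3, 4};
    int inout[4] = {10, 20, 30, 40};

    // Computes inout[i] = in[i] + inout[i] entirely locally, so the op
    // framework's back-end function is exercised without any messages.
    MPI::SUM.Reduce_local(in, inout, 4, MPI::INT);
    assert(inout[0] == 11 && inout[3] == 44);

    MPI::Finalize();
    return 0;
}
}}}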
#if 0 /* OMPI_ENABLE_MPI_PROFILING */

inline
MPI::Op::Op() { }

inline
MPI::Op::Op(const MPI::Op& o) : pmpi_op(o.pmpi_op) { }
This is a workaround for a bug in the Intel C++ compiler, version 9.1
(all versions up to and including 20060925). The issue has been
reported to Intel, along with a small [non-MPI] test program that
reproduces the problem (the test program and the OMPI C++ bindings
work fine with Intel C++ 9.0 and many other C++ compilers).

In short, a static initializer for a global variable (i.e., its
constructor fires before main()) that takes as an argument a reference
to a typedef'd type will simply get the wrong value in the argument.
Specifically:

{{{
namespace MPI {
    Intracomm COMM_WORLD(MPI_COMM_WORLD);
}
}}}

The constructor for MPI::Intracomm should get the value of
&ompi_mpi_comm_world. It does not; it seems to get a random value.

As mandated by MPI-2, annex B.13.4, for C/C++ interoperability, the
prototype for this constructor is:

{{{
class Intracomm {
public:
    Intracomm(const MPI_Comm& data);
};
}}}

Experiments with icpc 9.1/20060925 have shown that removing the
reference from the prototype makes it work (!). After lots of
discussion about this issue with a C++ expert (Doug Gregor from IU),
we decided the following (cut-n-paste from an e-mail):

-----
> So here's my question: given that OMPI's MPI_<CLASS> types are all
> pointers, is there any legal MPI program that adheres to the above
> bindings that would fail to compile or work properly if we simply
> removed the "&" from the second binding, above?

I don't know of any way that a program could detect this change. FWIW,
the C++ committee has agreed that implementations of the C++ standard
library are allowed to decide arbitrarily between const& and by-value.
If they don't care, MPI users won't care.

When you remove the '&', I suggest also removing the "const". It is
redundant, but can trigger some strange name mangling in Sun's C++
compiler.
-----

So with this change:

 * we now work again with the Intel 9.1 compiler
 * our C++ bindings do not exactly conform to the MPI-2 spec, but
   valid/legal MPI C++ apps cannot tell the difference (i.e., the
   functionality is the same)

This commit was SVN r12514.
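For concreteness, the adjusted binding (with the '&' and the redundant
"const" removed) therefore looks like this:

{{{
class Intracomm {
public:
    Intracomm(MPI_Comm data);
};
}}}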
inline
MPI::Op::Op(MPI_Op o) : pmpi_op(o) { }

inline
MPI::Op::~Op() { }

inline
MPI::Op& MPI::Op::operator=(const MPI::Op& op) {
    pmpi_op = op.pmpi_op; return *this;
}

// comparison
inline bool
MPI::Op::operator== (const MPI::Op &a) {
    return (bool)(pmpi_op == a.pmpi_op);
}

inline bool
MPI::Op::operator!= (const MPI::Op &a) {
    return (bool)!(*this == a);
}

// inter-language operability
inline MPI::Op&
MPI::Op::operator= (const MPI_Op &i) { pmpi_op = i; return *this; }

inline
MPI::Op::operator MPI_Op () const { return pmpi_op; }

//inline
//MPI::Op::operator MPI_Op* () { return pmpi_op; }

#else  // ============= NO PROFILING ===================================
// construction
inline
MPI::Op::Op() : mpi_op(MPI_OP_NULL) { }
inline
MPI::Op::Op(MPI_Op i) : mpi_op(i) { }

inline
MPI::Op::Op(const MPI::Op& op)
    : mpi_op(op.mpi_op) { }

inline
MPI::Op::~Op()
{
#if 0
    mpi_op = MPI_OP_NULL;
    op_user_function = 0;
#endif
}

inline MPI::Op&
MPI::Op::operator=(const MPI::Op& op) {
    mpi_op = op.mpi_op;
    return *this;
}

// comparison
inline bool
MPI::Op::operator== (const MPI::Op &a) { return (bool)(mpi_op == a.mpi_op); }

inline bool
MPI::Op::operator!= (const MPI::Op &a) { return (bool)!(*this == a); }

// inter-language operability
inline MPI::Op&
MPI::Op::operator= (const MPI_Op &i) { mpi_op = i; return *this; }

inline
MPI::Op::operator MPI_Op () const { return mpi_op; }

//inline
//MPI::Op::operator MPI_Op* () { return &mpi_op; }

#endif

// Extern this function here rather than include an internal Open MPI
// header file (and therefore force installing the internal Open MPI
// header file so that user apps can #include it)
extern "C" void ompi_op_set_cxx_callback(MPI_Op op, MPI_User_function*);
// There is a lengthy comment in ompi/mpi/cxx/intercepts.cc explaining
// what this function is doing.  Please read it before modifying this
// function.
inline void
MPI::Op::Init(MPI::User_function *func, bool commute)
{
    (void)MPI_Op_create((MPI_User_function*) ompi_mpi_cxx_op_intercept,
                        (int) commute, &mpi_op);
    ompi_op_set_cxx_callback(mpi_op, (MPI_User_function*) func);
}
inline void
MPI::Op::Free()
{
    (void)MPI_Op_free(&mpi_op);
}
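
// A hedged usage sketch (not part of this file): creating, using, and
// releasing a user-defined op through the bindings above.  The names
// "my_sum" and "op" are illustrative only.
//
//     void my_sum(const void* in, void* inout, int len,
//                 const MPI::Datatype& type)
//     {
//         const int* a = static_cast<const int*>(in);
//         int* b = static_cast<int*>(inout);
//         for (int i = 0; i < len; ++i) {
//             b[i] += a[i];
//         }
//     }
//
//     MPI::Op op;
//     op.Init(my_sum, true);  // installs ompi_mpi_cxx_op_intercept and
//                             // registers my_sum as the C++ callback
//     // ... use op with Reduce(), Reduce_local(), etc. ...
//     op.Free();              // frees the underlying MPI_Op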
inline void
MPI::Op::Reduce_local(const void *inbuf, void *inoutbuf, int count,
                      const MPI::Datatype& datatype) const
{
    (void)MPI_Reduce_local(const_cast<void*>(inbuf), inoutbuf, count,
                           datatype, mpi_op);
}
inline bool
MPI::Op::Is_commutative(void) const
{
    int commute;
    (void)MPI_Op_commutative(mpi_op, &commute);
    return (bool) commute;
}
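
// Illustrative only (assumes MPI is initialized; not part of this file):
// intrinsic ops report true here, and a user-defined op reports whatever
// commutativity was declared in its Init() call.
//
//     assert(MPI::SUM.Is_commutative());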