2004-04-20 22:37:46 +00:00
|
|
|
/*
|
2006-03-04 18:35:33 +00:00
|
|
|
* Copyright (c) 2004-2006 The Trustees of Indiana University and Indiana
|
2005-11-05 19:57:48 +00:00
|
|
|
* University Research and Technology
|
|
|
|
* Corporation. All rights reserved.
|
2011-01-13 06:08:54 +00:00
|
|
|
* Copyright (c) 2004-2010 The University of Tennessee and The University
|
2005-11-05 19:57:48 +00:00
|
|
|
* of Tennessee Research Foundation. All rights
|
|
|
|
* reserved.
|
2007-06-19 05:03:11 +00:00
|
|
|
* Copyright (c) 2004-2007 High Performance Computing Center Stuttgart,
|
2004-11-28 20:09:25 +00:00
|
|
|
* University of Stuttgart. All rights reserved.
|
2005-03-24 12:43:37 +00:00
|
|
|
* Copyright (c) 2004-2005 The Regents of the University of California.
|
|
|
|
* All rights reserved.
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
* Copyright (c) 2008-2009 Cisco Systems, Inc. All rights reserved.
|
2004-11-22 01:38:40 +00:00
|
|
|
* $COPYRIGHT$
|
|
|
|
*
|
|
|
|
* Additional copyrights may follow
|
|
|
|
*
|
2004-04-20 22:37:46 +00:00
|
|
|
* $HEADER$
|
|
|
|
*/
|
|
|
|
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
#ifndef OMPI_OP_BASE_FUNCTIONS_H
|
|
|
|
#define OMPI_OP_BASE_FUNCTIONS_H
|
2004-04-20 22:37:46 +00:00
|
|
|
|
2009-03-04 15:35:54 +00:00
|
|
|
#include "ompi_config.h"
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
#include "ompi/mca/op/op.h"
|
2004-04-20 22:37:46 +00:00
|
|
|
|
2004-06-29 00:02:25 +00:00
|
|
|
/*
|
|
|
|
* Since we have so many of these, and they're all identical except
|
|
|
|
* for the name, use macros to prototype them.
|
2004-04-20 22:37:46 +00:00
|
|
|
*/
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
#define OMPI_OP_PROTO (void *in, void *out, int *count, struct ompi_datatype_t **dtype, struct ompi_op_base_module_1_0_0_t *module)
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
|
|
|
|
/* C integer */
|
|
|
|
#define OMPI_OP_HANDLER_C_INTEGER_INTRINSIC(name) \
|
2011-01-13 06:08:54 +00:00
|
|
|
void ompi_op_base_##name##_int8_t OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_uint8_t OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_int16_t OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_uint16_t OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_int32_t OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_uint32_t OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_int64_t OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_uint64_t OMPI_OP_PROTO;
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#define OMPI_OP_HANDLER_C_INTEGER(name) \
|
2011-01-13 06:08:54 +00:00
|
|
|
OMPI_OP_HANDLER_C_INTEGER_INTRINSIC(name)
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
|
|
|
|
/* Fortran integer */
|
2004-06-29 00:02:25 +00:00
|
|
|
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#define OMPI_OP_HANDLER_FORTRAN_INTEGER_INTRINSIC(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_fortran_integer OMPI_OP_PROTO;
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_INTEGER1
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#define OMPI_OP_HANDLER_FORTRAN_INTEGER1(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_fortran_integer1 OMPI_OP_PROTO;
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_HANDLER_FORTRAN_INTEGER1(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_INTEGER2
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#define OMPI_OP_HANDLER_FORTRAN_INTEGER2(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_fortran_integer2 OMPI_OP_PROTO;
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_HANDLER_FORTRAN_INTEGER2(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_INTEGER4
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#define OMPI_OP_HANDLER_FORTRAN_INTEGER4(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_fortran_integer4 OMPI_OP_PROTO;
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_HANDLER_FORTRAN_INTEGER4(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_INTEGER8
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#define OMPI_OP_HANDLER_FORTRAN_INTEGER8(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_fortran_integer8 OMPI_OP_PROTO;
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_HANDLER_FORTRAN_INTEGER8(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_INTEGER16
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#define OMPI_OP_HANDLER_FORTRAN_INTEGER16(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_fortran_integer16 OMPI_OP_PROTO;
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_HANDLER_FORTRAN_INTEGER16(name)
|
|
|
|
#endif
|
2004-06-29 00:02:25 +00:00
|
|
|
#define OMPI_OP_HANDLER_FORTRAN_INTEGER(name) \
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
OMPI_OP_HANDLER_FORTRAN_INTEGER_INTRINSIC(name) \
|
|
|
|
OMPI_OP_HANDLER_FORTRAN_INTEGER1(name) \
|
|
|
|
OMPI_OP_HANDLER_FORTRAN_INTEGER2(name) \
|
|
|
|
OMPI_OP_HANDLER_FORTRAN_INTEGER4(name) \
|
|
|
|
OMPI_OP_HANDLER_FORTRAN_INTEGER8(name) \
|
|
|
|
OMPI_OP_HANDLER_FORTRAN_INTEGER16(name)
|
2004-06-29 00:02:25 +00:00
|
|
|
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
/* Floating point */
|
|
|
|
|
|
|
|
#define OMPI_OP_HANDLER_FLOATING_POINT_INTRINSIC(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_float OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_double OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_fortran_real OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_fortran_double_precision OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_long_double OMPI_OP_PROTO;
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL2
|
2007-06-19 05:03:11 +00:00
|
|
|
#define OMPI_OP_HANDLER_FLOATING_POINT_REAL2(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_fortran_real2 OMPI_OP_PROTO;
|
2007-06-19 05:03:11 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_HANDLER_FLOATING_POINT_REAL2(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL4
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#define OMPI_OP_HANDLER_FLOATING_POINT_REAL4(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_fortran_real4 OMPI_OP_PROTO;
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_HANDLER_FLOATING_POINT_REAL4(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL8
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#define OMPI_OP_HANDLER_FLOATING_POINT_REAL8(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_fortran_real8 OMPI_OP_PROTO;
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_HANDLER_FLOATING_POINT_REAL8(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL16
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#define OMPI_OP_HANDLER_FLOATING_POINT_REAL16(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_fortran_real16 OMPI_OP_PROTO;
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_HANDLER_FLOATING_POINT_REAL16(name)
|
|
|
|
#endif
|
|
|
|
#define OMPI_OP_HANDLER_FLOATING_POINT(name) \
|
|
|
|
OMPI_OP_HANDLER_FLOATING_POINT_INTRINSIC(name) \
|
|
|
|
OMPI_OP_HANDLER_FLOATING_POINT_REAL4(name) \
|
|
|
|
OMPI_OP_HANDLER_FLOATING_POINT_REAL8(name) \
|
|
|
|
OMPI_OP_HANDLER_FLOATING_POINT_REAL16(name) \
|
|
|
|
|
|
|
|
/* Logical */
|
2004-06-29 00:02:25 +00:00
|
|
|
|
|
|
|
#define OMPI_OP_HANDLER_LOGICAL(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_fortran_logical OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_bool OMPI_OP_PROTO;
|
2004-06-29 00:02:25 +00:00
|
|
|
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
/* Complex */
|
|
|
|
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#define OMPI_OP_HANDLER_COMPLEX_INTRINSIC(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_fortran_complex OMPI_OP_PROTO;
|
2005-10-12 13:19:46 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_HANDLER_COMPLEX_INTRINSIC(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION
|
2005-10-12 13:19:46 +00:00
|
|
|
#define OMPI_OP_HANDLER_DOUBLE_COMPLEX_INTRINSIC(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_fortran_double_complex OMPI_OP_PROTO;
|
2005-10-12 13:19:46 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_HANDLER_DOUBLE_COMPLEX_INTRINSIC(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL4
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#define OMPI_OP_HANDLER_COMPLEX8(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_fortran_complex8 OMPI_OP_PROTO;
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_HANDLER_COMPLEX8(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL8
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#define OMPI_OP_HANDLER_COMPLEX16(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_fortran_complex16 OMPI_OP_PROTO;
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_HANDLER_COMPLEX16(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL16
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#define OMPI_OP_HANDLER_COMPLEX32(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_fortran_complex32 OMPI_OP_PROTO;
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_HANDLER_COMPLEX32(name)
|
|
|
|
#endif
|
2004-06-29 00:02:25 +00:00
|
|
|
#define OMPI_OP_HANDLER_COMPLEX(name) \
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
OMPI_OP_HANDLER_COMPLEX_INTRINSIC(name) \
|
2005-10-12 13:19:46 +00:00
|
|
|
OMPI_OP_HANDLER_DOUBLE_COMPLEX_INTRINSIC(name) \
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
OMPI_OP_HANDLER_COMPLEX8(name) \
|
|
|
|
OMPI_OP_HANDLER_COMPLEX16(name) \
|
|
|
|
OMPI_OP_HANDLER_COMPLEX32(name)
|
|
|
|
|
|
|
|
/* Byte */
|
2004-06-29 00:02:25 +00:00
|
|
|
|
|
|
|
#define OMPI_OP_HANDLER_BYTE(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_byte OMPI_OP_PROTO;
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
|
|
|
|
/* "2 type" */
|
2004-06-29 00:02:25 +00:00
|
|
|
|
2004-07-13 15:20:46 +00:00
|
|
|
#define OMPI_OP_HANDLER_2TYPE(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_##name##_2real OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_2double_precision OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_2integer OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_float_int OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_double_int OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_long_int OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_2int OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_short_int OMPI_OP_PROTO; \
|
|
|
|
void ompi_op_base_##name##_long_double_int OMPI_OP_PROTO;
|
2004-07-13 15:20:46 +00:00
|
|
|
|
2008-08-27 15:49:40 +00:00
|
|
|
BEGIN_C_DECLS
|
2004-04-20 22:37:46 +00:00
|
|
|
|
|
|
|
/**
|
2004-06-29 00:02:25 +00:00
|
|
|
* Handler functions for MPI_MAX
|
2004-04-20 22:37:46 +00:00
|
|
|
*/
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
OMPI_OP_HANDLER_C_INTEGER(max)
|
|
|
|
OMPI_OP_HANDLER_FORTRAN_INTEGER(max)
|
|
|
|
OMPI_OP_HANDLER_FLOATING_POINT(max)
|
2004-04-20 22:37:46 +00:00
|
|
|
|
|
|
|
/**
|
2004-06-29 00:02:25 +00:00
|
|
|
* Handler functions for MPI_MIN
|
2004-04-20 22:37:46 +00:00
|
|
|
*/
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
OMPI_OP_HANDLER_C_INTEGER(min)
|
|
|
|
OMPI_OP_HANDLER_FORTRAN_INTEGER(min)
|
|
|
|
OMPI_OP_HANDLER_FLOATING_POINT(min)
|
2004-04-20 22:37:46 +00:00
|
|
|
|
|
|
|
/**
|
2004-06-29 00:02:25 +00:00
|
|
|
* Handler functions for MPI_SUM
|
2004-04-20 22:37:46 +00:00
|
|
|
*/
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
OMPI_OP_HANDLER_C_INTEGER(sum)
|
|
|
|
OMPI_OP_HANDLER_FORTRAN_INTEGER(sum)
|
|
|
|
OMPI_OP_HANDLER_FLOATING_POINT(sum)
|
|
|
|
OMPI_OP_HANDLER_COMPLEX(sum)
|
2004-04-20 22:37:46 +00:00
|
|
|
|
|
|
|
/**
|
2004-06-29 00:02:25 +00:00
|
|
|
* Handler functions for MPI_PROD
|
2004-04-20 22:37:46 +00:00
|
|
|
*/
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
OMPI_OP_HANDLER_C_INTEGER(prod)
|
|
|
|
OMPI_OP_HANDLER_FORTRAN_INTEGER(prod)
|
|
|
|
OMPI_OP_HANDLER_FLOATING_POINT(prod)
|
|
|
|
OMPI_OP_HANDLER_COMPLEX(prod)
|
2004-04-20 22:37:46 +00:00
|
|
|
|
|
|
|
/**
|
2004-06-29 00:02:25 +00:00
|
|
|
* Handler functions for MPI_LAND
|
2004-04-20 22:37:46 +00:00
|
|
|
*/
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
OMPI_OP_HANDLER_C_INTEGER(land)
|
|
|
|
OMPI_OP_HANDLER_LOGICAL(land)
|
2004-04-20 22:37:46 +00:00
|
|
|
|
|
|
|
/**
|
2004-06-29 00:02:25 +00:00
|
|
|
* Handler functions for MPI_BAND
|
2004-04-20 22:37:46 +00:00
|
|
|
*/
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
OMPI_OP_HANDLER_C_INTEGER(band)
|
|
|
|
OMPI_OP_HANDLER_FORTRAN_INTEGER(band)
|
|
|
|
OMPI_OP_HANDLER_BYTE(band)
|
2004-04-20 22:37:46 +00:00
|
|
|
|
|
|
|
/**
|
2004-06-29 00:02:25 +00:00
|
|
|
* Handler functions for MPI_LOR
|
2004-04-20 22:37:46 +00:00
|
|
|
*/
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
OMPI_OP_HANDLER_C_INTEGER(lor)
|
|
|
|
OMPI_OP_HANDLER_LOGICAL(lor)
|
2004-04-20 22:37:46 +00:00
|
|
|
|
|
|
|
/**
|
2004-06-29 00:02:25 +00:00
|
|
|
* Handler functions for MPI_BOR
|
2004-04-20 22:37:46 +00:00
|
|
|
*/
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
OMPI_OP_HANDLER_C_INTEGER(bor)
|
|
|
|
OMPI_OP_HANDLER_FORTRAN_INTEGER(bor)
|
|
|
|
OMPI_OP_HANDLER_BYTE(bor)
|
2004-04-20 22:37:46 +00:00
|
|
|
|
|
|
|
/**
|
2004-06-29 00:02:25 +00:00
|
|
|
* Handler functions for MPI_LXOR
|
2004-04-20 22:37:46 +00:00
|
|
|
*/
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
OMPI_OP_HANDLER_C_INTEGER(lxor)
|
|
|
|
OMPI_OP_HANDLER_LOGICAL(lxor)
|
2004-04-20 22:37:46 +00:00
|
|
|
|
|
|
|
/**
|
2004-06-29 00:02:25 +00:00
|
|
|
* Handler functions for MPI_BXOR
|
2004-04-20 22:37:46 +00:00
|
|
|
*/
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
OMPI_OP_HANDLER_C_INTEGER(bxor)
|
|
|
|
OMPI_OP_HANDLER_FORTRAN_INTEGER(bxor)
|
|
|
|
OMPI_OP_HANDLER_BYTE(bxor)
|
2004-04-20 22:37:46 +00:00
|
|
|
|
|
|
|
/**
|
2004-06-29 00:02:25 +00:00
|
|
|
* Handler functions for MPI_MAXLOC
|
2004-04-20 22:37:46 +00:00
|
|
|
*/
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
OMPI_OP_HANDLER_2TYPE(maxloc)
|
2004-04-20 22:37:46 +00:00
|
|
|
|
|
|
|
/**
|
2004-06-29 00:02:25 +00:00
|
|
|
* Handler functions for MPI_MINLOC
|
2004-04-20 22:37:46 +00:00
|
|
|
*/
|
A bunch of changes to support MPI_INTEGER*x, MPI_REAL*x,
MPI_COMPLEX*x, and some optional C datatypes in MPI reduction
operations. These types are not technically supported by the letter
of the MPI standard, but are implied by the spirit of it (and there
are definitely users that use them in real applications)
- Add checks in configure for back-end C types for MPI_INTEGER*x and
MPI_REAL*x
- Create C data structs for MPI_COMPLEX*x
- Fixed typo for MPI_INTEGER8 in mpi.h
- Updated configure macros to create MPI_FORTRAN_INTEGER* defines, as
opposed to MPI_FORTRAN_INT, which was causing [me] lots of confusion
(between C "*_INT" names and Fortran "*_INT" names). This caused
some trivial updates in ddt, ompi_info, and the MPI layer to match.
- Update ompi_info to show whether we have each MPI_INTEGER*x,
MPI_REAL*x, and MPI_COMPLEX*x
- Extended reduction operations for optional datatypes:
- "C integer" now includes long long int, long long, and unsigned
long long
- "Fortran integer" now includes MPI_INTEGER*x
- "Floating point" now includes MPI_REAL*x
- "Complex" now includes MPI_COMPLEX*x
This commit was SVN r5511.
2005-04-27 10:23:06 +00:00
|
|
|
OMPI_OP_HANDLER_2TYPE(minloc)
|
2004-07-13 15:20:46 +00:00
|
|
|
|
2008-03-28 23:45:44 +00:00
|
|
|
/*
|
|
|
|
* 3 buffer prototypes (two input and one output)
|
|
|
|
*/
|
|
|
|
#define OMPI_OP_PROTO_3BUF \
|
|
|
|
( void * restrict in1, void * restrict in2, void * restrict out, \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
int *count, struct ompi_datatype_t **dtype, \
|
|
|
|
struct ompi_op_base_module_1_0_0_t *module)
|
2008-03-28 23:45:44 +00:00
|
|
|
|
|
|
|
/* C integer */
|
|
|
|
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_C_INTEGER_INTRINSIC(name) \
|
2011-01-13 06:08:54 +00:00
|
|
|
void ompi_op_base_3buff_##name##_int8_t OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_uint8_t OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_int16_t OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_uint16_t OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_int32_t OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_uint32_t OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_int64_t OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_uint64_t OMPI_OP_PROTO_3BUF;
|
2008-03-28 23:45:44 +00:00
|
|
|
#define OMPI_OP_3BUFF_HANDLER_C_INTEGER(name) \
|
2011-01-13 06:08:54 +00:00
|
|
|
OMPI_OP_3BUFF_HANDLER_C_INTEGER_INTRINSIC(name)
|
2008-03-28 23:45:44 +00:00
|
|
|
|
|
|
|
/* Fortran integer */
|
|
|
|
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER_INTRINSIC(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_fortran_integer OMPI_OP_PROTO_3BUF;
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_INTEGER1
|
2008-03-28 23:45:44 +00:00
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER1(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_fortran_integer1 OMPI_OP_PROTO_3BUF;
|
2008-03-28 23:45:44 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER1(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_INTEGER2
|
2008-03-28 23:45:44 +00:00
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER2(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_fortran_integer2 OMPI_OP_PROTO_3BUF;
|
2008-03-28 23:45:44 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER2(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_INTEGER4
|
2008-03-28 23:45:44 +00:00
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER4(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_fortran_integer4 OMPI_OP_PROTO_3BUF;
|
2008-03-28 23:45:44 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER4(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_INTEGER8
|
2008-03-28 23:45:44 +00:00
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER8(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_fortran_integer8 OMPI_OP_PROTO_3BUF;
|
2008-03-28 23:45:44 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER8(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_INTEGER16
|
2008-03-28 23:45:44 +00:00
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER16(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_fortran_integer16 OMPI_OP_PROTO_3BUF;
|
2008-03-28 23:45:44 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER16(name)
|
|
|
|
#endif
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER(name) \
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER_INTRINSIC(name) \
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER1(name) \
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER2(name) \
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER4(name) \
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER8(name) \
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER16(name)
|
|
|
|
|
|
|
|
/* Floating point */
|
|
|
|
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FLOATING_POINT_INTRINSIC(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_float OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_double OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_fortran_real OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_fortran_double_precision OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_long_double OMPI_OP_PROTO_3BUF;
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL2
|
2008-03-28 23:45:44 +00:00
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FLOATING_POINT_REAL2(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_fortran_real2 OMPI_OP_PROTO_3BUF;
|
2008-03-28 23:45:44 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FLOATING_POINT_REAL2(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL4
|
2008-03-28 23:45:44 +00:00
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FLOATING_POINT_REAL4(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_fortran_real4 OMPI_OP_PROTO_3BUF;
|
2008-03-28 23:45:44 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FLOATING_POINT_REAL4(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL8
|
2008-03-28 23:45:44 +00:00
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FLOATING_POINT_REAL8(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_fortran_real8 OMPI_OP_PROTO_3BUF;
|
2008-03-28 23:45:44 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FLOATING_POINT_REAL8(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL16
|
2008-03-28 23:45:44 +00:00
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FLOATING_POINT_REAL16(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_fortran_real16 OMPI_OP_PROTO_3BUF;
|
2008-03-28 23:45:44 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FLOATING_POINT_REAL16(name)
|
|
|
|
#endif
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_FLOATING_POINT(name) \
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FLOATING_POINT_INTRINSIC(name) \
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FLOATING_POINT_REAL4(name) \
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FLOATING_POINT_REAL8(name) \
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FLOATING_POINT_REAL16(name) \
|
|
|
|
|
|
|
|
/* Logical */
|
|
|
|
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_LOGICAL(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_fortran_logical OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_bool OMPI_OP_PROTO_3BUF;
|
2008-03-28 23:45:44 +00:00
|
|
|
|
|
|
|
/* Complex */
|
|
|
|
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL
|
2008-03-28 23:45:44 +00:00
|
|
|
#define OMPI_OP_3BUFF_HANDLER_COMPLEX_INTRINSIC(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_fortran_complex OMPI_OP_PROTO_3BUF;
|
2008-03-28 23:45:44 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_COMPLEX_INTRINSIC(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION
|
2008-03-28 23:45:44 +00:00
|
|
|
#define OMPI_OP_3BUFF_HANDLER_DOUBLE_COMPLEX_INTRINSIC(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_fortran_double_complex OMPI_OP_PROTO_3BUF;
|
2008-03-28 23:45:44 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_DOUBLE_COMPLEX_INTRINSIC(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL4
|
2008-03-28 23:45:44 +00:00
|
|
|
#define OMPI_OP_3BUFF_HANDLER_COMPLEX8(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_fortran_complex8 OMPI_OP_PROTO_3BUF;
|
2008-03-28 23:45:44 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_COMPLEX8(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL8
|
2008-03-28 23:45:44 +00:00
|
|
|
#define OMPI_OP_3BUFF_HANDLER_COMPLEX16(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_fortran_complex16 OMPI_OP_PROTO_3BUF;
|
2008-03-28 23:45:44 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_COMPLEX16(name)
|
|
|
|
#endif
|
2009-06-01 19:02:34 +00:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL16
|
2008-03-28 23:45:44 +00:00
|
|
|
#define OMPI_OP_3BUFF_HANDLER_COMPLEX32(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_fortran_complex32 OMPI_OP_PROTO_3BUF;
|
2008-03-28 23:45:44 +00:00
|
|
|
#else
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_COMPLEX32(name)
|
|
|
|
#endif
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_COMPLEX(name) \
|
|
|
|
OMPI_OP_3BUFF_HANDLER_COMPLEX_INTRINSIC(name) \
|
|
|
|
OMPI_OP_3BUFF_HANDLER_DOUBLE_COMPLEX_INTRINSIC(name) \
|
|
|
|
OMPI_OP_3BUFF_HANDLER_COMPLEX8(name) \
|
|
|
|
OMPI_OP_3BUFF_HANDLER_COMPLEX16(name) \
|
|
|
|
OMPI_OP_3BUFF_HANDLER_COMPLEX32(name)
|
|
|
|
|
|
|
|
/* Byte */
|
|
|
|
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_BYTE(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_byte OMPI_OP_PROTO_3BUF;
|
2008-03-28 23:45:44 +00:00
|
|
|
|
|
|
|
/* "2 type" */
|
|
|
|
|
|
|
|
#define OMPI_OP_3BUFF_HANDLER_2TYPE(name) \
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
void ompi_op_base_3buff_##name##_2real OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_2double_precision OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_2integer OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_float_int OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_double_int OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_long_int OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_2int OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_short_int OMPI_OP_PROTO_3BUF; \
|
|
|
|
void ompi_op_base_3buff_##name##_long_double_int OMPI_OP_PROTO_3BUF;
|
2008-03-28 23:45:44 +00:00
|
|
|
|
|
|
|
/**
|
|
|
|
* Handler functions for MPI_MAX
|
|
|
|
*/
|
|
|
|
OMPI_OP_3BUFF_HANDLER_C_INTEGER(max)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER(max)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FLOATING_POINT(max)
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Handler functions for MPI_MIN
|
|
|
|
*/
|
|
|
|
OMPI_OP_3BUFF_HANDLER_C_INTEGER(min)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER(min)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FLOATING_POINT(min)
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Handler functions for MPI_SUM
|
|
|
|
*/
|
|
|
|
OMPI_OP_3BUFF_HANDLER_C_INTEGER(sum)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER(sum)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FLOATING_POINT(sum)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_COMPLEX(sum)
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Handler functions for MPI_PROD
|
|
|
|
*/
|
|
|
|
OMPI_OP_3BUFF_HANDLER_C_INTEGER(prod)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER(prod)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FLOATING_POINT(prod)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_COMPLEX(prod)
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Handler functions for MPI_LAND
|
|
|
|
*/
|
|
|
|
OMPI_OP_3BUFF_HANDLER_C_INTEGER(land)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_LOGICAL(land)
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Handler functions for MPI_BAND
|
|
|
|
*/
|
|
|
|
OMPI_OP_3BUFF_HANDLER_C_INTEGER(band)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER(band)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_BYTE(band)
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Handler functions for MPI_LOR
|
|
|
|
*/
|
|
|
|
OMPI_OP_3BUFF_HANDLER_C_INTEGER(lor)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_LOGICAL(lor)
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Handler functions for MPI_BOR
|
|
|
|
*/
|
|
|
|
OMPI_OP_3BUFF_HANDLER_C_INTEGER(bor)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER(bor)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_BYTE(bor)
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Handler functions for MPI_LXOR
|
|
|
|
*/
|
|
|
|
OMPI_OP_3BUFF_HANDLER_C_INTEGER(lxor)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_LOGICAL(lxor)
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Handler functions for MPI_BXOR
|
|
|
|
*/
|
|
|
|
OMPI_OP_3BUFF_HANDLER_C_INTEGER(bxor)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_FORTRAN_INTEGER(bxor)
|
|
|
|
OMPI_OP_3BUFF_HANDLER_BYTE(bxor)
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Handler functions for MPI_MAXLOC
|
|
|
|
*/
|
|
|
|
OMPI_OP_3BUFF_HANDLER_2TYPE(maxloc)
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Handler functions for MPI_MINLOC
|
|
|
|
*/
|
|
|
|
OMPI_OP_3BUFF_HANDLER_2TYPE(minloc)
|
|
|
|
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
/**
|
|
|
|
* Globals holding all the "base" function pointers, indexed by op and
|
|
|
|
* datatype.
|
|
|
|
*/
|
|
|
|
OMPI_DECLSPEC extern ompi_op_base_handler_fn_t
|
|
|
|
ompi_op_base_functions[OMPI_OP_BASE_FORTRAN_OP_MAX][OMPI_OP_BASE_TYPE_MAX];
|
|
|
|
OMPI_DECLSPEC extern ompi_op_base_3buff_handler_fn_t
|
|
|
|
ompi_op_base_3buff_functions[OMPI_OP_BASE_FORTRAN_OP_MAX][OMPI_OP_BASE_TYPE_MAX];
|
|
|
|
|
2008-08-27 15:49:40 +00:00
|
|
|
END_C_DECLS
|
2004-10-18 19:35:54 +00:00
|
|
|
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-14 23:44:31 +00:00
|
|
|
#endif /* OMPI_OP_BASE_FUNCTIONS_H */
|