/*
 * Copyright (c) 2004-2006 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation. All rights reserved.
 * Copyright (c) 2004-2006 The University of Tennessee and The University
 *                         of Tennessee Research Foundation. All rights
 *                         reserved.
 * Copyright (c) 2004-2007 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart. All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * Copyright (c) 2006-2009 Cisco Systems, Inc. All rights reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

#include "ompi_config.h"

#ifdef HAVE_SYS_TYPES_H
#include <sys/types.h>
#endif

#include "ompi/mca/op/op.h"
#include "ompi/mca/op/base/functions.h"

/*
 * Since all the functions in this file are essentially identical, we
 * use a macro to substitute in names and types. The core operation
 * in all functions that use this macro is the same.
 *
 * This macro is for (out op in).
 */
#define OP_FUNC(name, type_name, type, op) \
  void ompi_op_base_##name##_##type_name(void *in, void *out, int *count, \
                                         struct ompi_datatype_t **dtype, \
                                         struct ompi_op_base_module_1_0_0_t *module) \
  { \
      int i; \
      type *a = (type *) in; \
      type *b = (type *) out; \
      for (i = 0; i < *count; ++i) { \
          *(b++) op *(a++); \
      } \
  }
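
/*
 * Illustrative sketch: instantiating OP_FUNC(sum, int, int, +=), as the
 * Sum section below does, expands to a function equivalent to the
 * following, which accumulates "in" into "out" element-wise:
 *
 *   void ompi_op_base_sum_int(void *in, void *out, int *count,
 *                             struct ompi_datatype_t **dtype,
 *                             struct ompi_op_base_module_1_0_0_t *module)
 *   {
 *       int i;
 *       int *a = (int *) in;
 *       int *b = (int *) out;
 *       for (i = 0; i < *count; ++i) {
 *           *(b++) += *(a++);
 *       }
 *   }
 */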

#define COMPLEX_OP_FUNC_SUM(type_name, type) \
  void ompi_op_base_sum_##type_name(void *in, void *out, int *count, \
                                    struct ompi_datatype_t **dtype, \
                                    struct ompi_op_base_module_1_0_0_t *module) \
  { \
      int i; \
      type *a = (type *) in; \
      type *b = (type *) out; \
      for (i = 0; i < *count; ++i, ++b, ++a) { \
          b->real += a->real; \
          b->imag += a->imag; \
      } \
  }

#define COMPLEX_OP_FUNC_PROD(type_name, type) \
  void ompi_op_base_prod_##type_name(void *in, void *out, int *count, \
                                     struct ompi_datatype_t **dtype, \
                                     struct ompi_op_base_module_1_0_0_t *module) \
  { \
      int i; \
      type *a = (type *) in; \
      type *b = (type *) out; \
      type temp; \
      for (i = 0; i < *count; ++i, ++b, ++a) { \
          temp.real = a->real * b->real - a->imag * b->imag; \
          temp.imag = a->imag * b->real + a->real * b->imag; \
          *b = temp; \
      } \
  }
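
/*
 * Note: the product loop needs the temporary because b->real is both an
 * input and an output of a single iteration; writing it in place before
 * computing temp.imag would corrupt the imaginary part. The arithmetic
 * is the usual (x + yi)(u + vi) = (xu - yv) + (yu + xv)i, with a
 * supplying x, y and b supplying u, v.
 */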

/*
 * Since all the functions in this file are essentially identical, we
 * use a macro to substitute in names and types. The core operation
 * in all functions that use this macro is the same.
 *
 * This macro is for (out = op(out, in))
 */
#define FUNC_FUNC(name, type_name, type) \
  void ompi_op_base_##name##_##type_name(void *in, void *out, int *count, \
                                         struct ompi_datatype_t **dtype, \
                                         struct ompi_op_base_module_1_0_0_t *module) \
  { \
      int i; \
      type *a = (type *) in; \
      type *b = (type *) out; \
      for (i = 0; i < *count; ++i) { \
          *(b) = current_func(*(b), *(a)); \
          ++b; \
          ++a; \
      } \
  }
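
/*
 * Illustrative sketch: the Max and Min sections below reuse this single
 * macro by redefining current_func before each batch of instantiations,
 * e.g.
 *
 *   #undef current_func
 *   #define current_func(a, b) ((a) > (b) ? (a) : (b))
 *   FUNC_FUNC(max, int, int)
 *
 * yields ompi_op_base_max_int(), whose loop body reduces to
 * *b = (*b > *a) ? *b : *a.
 */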

/*
 * Since all the functions in this file are essentially identical, we
 * use a macro to substitute in names and types. The core operation
 * in all functions that use this macro is the same.
 *
 * This macro is for minloc and maxloc
 */
#define LOC_STRUCT(type_name, type1, type2) \
  typedef struct { \
      type1 v; \
      type2 k; \
  } ompi_op_predefined_##type_name##_t;
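
/*
 * Note: each LOC_STRUCT instance is a value/index pair -- v is what gets
 * compared, k records the location -- matching the value-first layout of
 * MPI's MINLOC/MAXLOC pair datatypes such as MPI_FLOAT_INT.
 */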

#define LOC_FUNC(name, type_name, op) \
  void ompi_op_base_##name##_##type_name(void *in, void *out, int *count, \
                                         struct ompi_datatype_t **dtype, \
                                         struct ompi_op_base_module_1_0_0_t *module) \
  { \
      int i; \
      ompi_op_predefined_##type_name##_t *a = (ompi_op_predefined_##type_name##_t*) in; \
      ompi_op_predefined_##type_name##_t *b = (ompi_op_predefined_##type_name##_t*) out; \
      for (i = 0; i < *count; ++i, ++a, ++b) { \
          if (a->v op b->v) { \
              b->v = a->v; \
              b->k = a->k; \
          } else if (a->v == b->v) { \
              b->k = (b->k < a->k ? b->k : a->k); \
          } \
      } \
  }
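
/*
 * Illustrative sketch (the instantiation shown is hypothetical): a use
 * such as LOC_FUNC(maxloc, float_int, >) would generate a maxloc
 * reduction over value/index pairs. The else-if branch above implements
 * MPI's tie-breaking rule for MAXLOC/MINLOC: when two values compare
 * equal, the smaller index k wins.
 */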

/*************************************************************************
 * Max
 *************************************************************************/

#undef current_func
#define current_func(a, b) ((a) > (b) ? (a) : (b))
/* C integer */
FUNC_FUNC(max, signed_char, signed char)
FUNC_FUNC(max, unsigned_char, unsigned char)
FUNC_FUNC(max, int, int)
FUNC_FUNC(max, long, long)
FUNC_FUNC(max, short, short)
FUNC_FUNC(max, unsigned_short, unsigned short)
FUNC_FUNC(max, unsigned, unsigned)
FUNC_FUNC(max, unsigned_long, unsigned long)
#if HAVE_LONG_LONG
FUNC_FUNC(max, long_long_int, long long int)
FUNC_FUNC(max, unsigned_long_long, unsigned long long)
#endif
/* Fortran integer */
#if OMPI_HAVE_FORTRAN_INTEGER
FUNC_FUNC(max, fortran_integer, ompi_fortran_integer_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER1
FUNC_FUNC(max, fortran_integer1, ompi_fortran_integer1_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER2
FUNC_FUNC(max, fortran_integer2, ompi_fortran_integer2_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER4
FUNC_FUNC(max, fortran_integer4, ompi_fortran_integer4_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER8
FUNC_FUNC(max, fortran_integer8, ompi_fortran_integer8_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER16
FUNC_FUNC(max, fortran_integer16, ompi_fortran_integer16_t)
#endif
/* Floating point */
FUNC_FUNC(max, float, float)
FUNC_FUNC(max, double, double)
#if HAVE_LONG_DOUBLE
FUNC_FUNC(max, long_double, long double)
#endif
#if OMPI_HAVE_FORTRAN_REAL
FUNC_FUNC(max, fortran_real, ompi_fortran_real_t)
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION
FUNC_FUNC(max, fortran_double_precision, ompi_fortran_double_precision_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL2
FUNC_FUNC(max, fortran_real2, ompi_fortran_real2_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL4
FUNC_FUNC(max, fortran_real4, ompi_fortran_real4_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL8
FUNC_FUNC(max, fortran_real8, ompi_fortran_real8_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL16
FUNC_FUNC(max, fortran_real16, ompi_fortran_real16_t)
#endif

/*************************************************************************
 * Min
 *************************************************************************/

#undef current_func
#define current_func(a, b) ((a) < (b) ? (a) : (b))
/* C integer */
FUNC_FUNC(min, signed_char, signed char)
FUNC_FUNC(min, unsigned_char, unsigned char)
FUNC_FUNC(min, int, int)
FUNC_FUNC(min, long, long)
FUNC_FUNC(min, short, short)
FUNC_FUNC(min, unsigned_short, unsigned short)
FUNC_FUNC(min, unsigned, unsigned)
FUNC_FUNC(min, unsigned_long, unsigned long)
#if HAVE_LONG_LONG
FUNC_FUNC(min, long_long_int, long long int)
FUNC_FUNC(min, unsigned_long_long, unsigned long long)
#endif
/* Fortran integer */
#if OMPI_HAVE_FORTRAN_INTEGER
FUNC_FUNC(min, fortran_integer, ompi_fortran_integer_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER1
FUNC_FUNC(min, fortran_integer1, ompi_fortran_integer1_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER2
FUNC_FUNC(min, fortran_integer2, ompi_fortran_integer2_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER4
FUNC_FUNC(min, fortran_integer4, ompi_fortran_integer4_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER8
FUNC_FUNC(min, fortran_integer8, ompi_fortran_integer8_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER16
FUNC_FUNC(min, fortran_integer16, ompi_fortran_integer16_t)
#endif
/* Floating point */
FUNC_FUNC(min, float, float)
FUNC_FUNC(min, double, double)
#if HAVE_LONG_DOUBLE
FUNC_FUNC(min, long_double, long double)
#endif
#if OMPI_HAVE_FORTRAN_REAL
FUNC_FUNC(min, fortran_real, ompi_fortran_real_t)
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION
FUNC_FUNC(min, fortran_double_precision, ompi_fortran_double_precision_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL2
FUNC_FUNC(min, fortran_real2, ompi_fortran_real2_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL4
FUNC_FUNC(min, fortran_real4, ompi_fortran_real4_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL8
FUNC_FUNC(min, fortran_real8, ompi_fortran_real8_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL16
FUNC_FUNC(min, fortran_real16, ompi_fortran_real16_t)
#endif
|
2004-04-21 02:37:46 +04:00
|
|
|
|
2004-06-29 04:02:25 +04:00
|
|
|
/*************************************************************************
|
|
|
|
* Sum
|
|
|
|
*************************************************************************/

/* C integer */
OP_FUNC(sum, signed_char, signed char, +=)
OP_FUNC(sum, unsigned_char, unsigned char, +=)
OP_FUNC(sum, int, int, +=)
OP_FUNC(sum, long, long, +=)
OP_FUNC(sum, short, short, +=)
OP_FUNC(sum, unsigned_short, unsigned short, +=)
OP_FUNC(sum, unsigned, unsigned, +=)
OP_FUNC(sum, unsigned_long, unsigned long, +=)
#if HAVE_LONG_LONG
OP_FUNC(sum, long_long_int, long long int, +=)
OP_FUNC(sum, unsigned_long_long, unsigned long long, +=)
#endif
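
/*
 * Editorial sketch (not part of the original source): OP_FUNC(name,
 * type_name, type, op) presumably stamps out one elementwise, in-place
 * reduction function per (operation, datatype) pair; the HAVE_LONG_LONG
 * guard above reflects that the "long long" variants are optional C types.
 * The hypothetical expansion below -- the function and parameter names are
 * illustrative only -- shows the shape such a function would take for
 * OP_FUNC(sum, int, int, +=).
 */
#if 0
static void example_sum_int(const void *in, void *inout, int count)
{
    const int *a = (const int *) in;   /* incoming partial results */
    int *b = (int *) inout;            /* accumulated in place */
    int i;

    for (i = 0; i < count; ++i) {
        b[i] += a[i];                  /* the "+=" argument of OP_FUNC */
    }
}
#endif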

/* Fortran integer */
#if OMPI_HAVE_FORTRAN_INTEGER
OP_FUNC(sum, fortran_integer, ompi_fortran_integer_t, +=)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER1
OP_FUNC(sum, fortran_integer1, ompi_fortran_integer1_t, +=)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER2
OP_FUNC(sum, fortran_integer2, ompi_fortran_integer2_t, +=)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER4
OP_FUNC(sum, fortran_integer4, ompi_fortran_integer4_t, +=)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER8
OP_FUNC(sum, fortran_integer8, ompi_fortran_integer8_t, +=)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER16
OP_FUNC(sum, fortran_integer16, ompi_fortran_integer16_t, +=)
#endif
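
/*
 * Note (editorial): each OMPI_HAVE_FORTRAN_* guard is generated by
 * configure, which probes for the optional INTEGER*x / REAL*x / COMPLEX*x
 * kinds and their back-end C types; instantiations for kinds the Fortran
 * compiler does not provide are simply compiled out.
 */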

/* Floating point */
OP_FUNC(sum, float, float, +=)
OP_FUNC(sum, double, double, +=)
#if HAVE_LONG_DOUBLE
OP_FUNC(sum, long_double, long double, +=)
#endif
#if OMPI_HAVE_FORTRAN_REAL
OP_FUNC(sum, fortran_real, ompi_fortran_real_t, +=)
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION
OP_FUNC(sum, fortran_double_precision, ompi_fortran_double_precision_t, +=)
#endif
#if OMPI_HAVE_FORTRAN_REAL2
OP_FUNC(sum, fortran_real2, ompi_fortran_real2_t, +=)
#endif
#if OMPI_HAVE_FORTRAN_REAL4
OP_FUNC(sum, fortran_real4, ompi_fortran_real4_t, +=)
#endif
#if OMPI_HAVE_FORTRAN_REAL8
OP_FUNC(sum, fortran_real8, ompi_fortran_real8_t, +=)
#endif
#if OMPI_HAVE_FORTRAN_REAL16
OP_FUNC(sum, fortran_real16, ompi_fortran_real16_t, +=)
#endif

/* Complex */
#if OMPI_HAVE_FORTRAN_REAL && OMPI_HAVE_FORTRAN_COMPLEX
COMPLEX_OP_FUNC_SUM(fortran_complex, ompi_fortran_complex_t)
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION && OMPI_HAVE_FORTRAN_COMPLEX
COMPLEX_OP_FUNC_SUM(fortran_double_complex, ompi_fortran_double_complex_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL4 && OMPI_HAVE_FORTRAN_COMPLEX8
COMPLEX_OP_FUNC_SUM(fortran_complex8, ompi_fortran_complex8_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL8 && OMPI_HAVE_FORTRAN_COMPLEX16
COMPLEX_OP_FUNC_SUM(fortran_complex16, ompi_fortran_complex16_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL16 && OMPI_HAVE_FORTRAN_COMPLEX32
COMPLEX_OP_FUNC_SUM(fortran_complex32, ompi_fortran_complex32_t)
#endif
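
/*
 * Editorial sketch (not part of the original source): the Fortran COMPLEX
 * kinds carry no native C arithmetic here, so COMPLEX_OP_FUNC_SUM
 * presumably adds the real and imaginary components explicitly.  The type
 * and function names below are illustrative only.
 */
#if 0
typedef struct { float real, imag; } example_complex_t;

static void example_sum_complex(const void *in, void *inout, int count)
{
    const example_complex_t *a = (const example_complex_t *) in;
    example_complex_t *b = (example_complex_t *) inout;
    int i;

    for (i = 0; i < count; ++i) {
        b[i].real += a[i].real;   /* componentwise addition */
        b[i].imag += a[i].imag;
    }
}
#endif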

/*************************************************************************
 * Product
 *************************************************************************/

/* C integer */
OP_FUNC(prod, signed_char, signed char, *=)
OP_FUNC(prod, unsigned_char, unsigned char, *=)
OP_FUNC(prod, int, int, *=)
OP_FUNC(prod, long, long, *=)
OP_FUNC(prod, short, short, *=)
OP_FUNC(prod, unsigned_short, unsigned short, *=)
OP_FUNC(prod, unsigned, unsigned, *=)
OP_FUNC(prod, unsigned_long, unsigned long, *=)
#if HAVE_LONG_LONG
OP_FUNC(prod, long_long_int, long long int, *=)
OP_FUNC(prod, unsigned_long_long, unsigned long long, *=)
#endif

/* Fortran integer */
#if OMPI_HAVE_FORTRAN_INTEGER
OP_FUNC(prod, fortran_integer, ompi_fortran_integer_t, *=)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER1
OP_FUNC(prod, fortran_integer1, ompi_fortran_integer1_t, *=)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER2
OP_FUNC(prod, fortran_integer2, ompi_fortran_integer2_t, *=)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER4
OP_FUNC(prod, fortran_integer4, ompi_fortran_integer4_t, *=)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER8
OP_FUNC(prod, fortran_integer8, ompi_fortran_integer8_t, *=)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER16
OP_FUNC(prod, fortran_integer16, ompi_fortran_integer16_t, *=)
#endif

/* Floating point */
OP_FUNC(prod, float, float, *=)
OP_FUNC(prod, double, double, *=)
#if HAVE_LONG_DOUBLE
OP_FUNC(prod, long_double, long double, *=)
#endif
#if OMPI_HAVE_FORTRAN_REAL
OP_FUNC(prod, fortran_real, ompi_fortran_real_t, *=)
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION
OP_FUNC(prod, fortran_double_precision, ompi_fortran_double_precision_t, *=)
#endif
#if OMPI_HAVE_FORTRAN_REAL2
OP_FUNC(prod, fortran_real2, ompi_fortran_real2_t, *=)
#endif
#if OMPI_HAVE_FORTRAN_REAL4
OP_FUNC(prod, fortran_real4, ompi_fortran_real4_t, *=)
#endif
#if OMPI_HAVE_FORTRAN_REAL8
OP_FUNC(prod, fortran_real8, ompi_fortran_real8_t, *=)
#endif
#if OMPI_HAVE_FORTRAN_REAL16
OP_FUNC(prod, fortran_real16, ompi_fortran_real16_t, *=)
#endif

/* Complex */
#if OMPI_HAVE_FORTRAN_REAL && OMPI_HAVE_FORTRAN_COMPLEX
COMPLEX_OP_FUNC_PROD(fortran_complex, ompi_fortran_complex_t)
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION && OMPI_HAVE_FORTRAN_COMPLEX
COMPLEX_OP_FUNC_PROD(fortran_double_complex, ompi_fortran_double_complex_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL4 && OMPI_HAVE_FORTRAN_COMPLEX8
COMPLEX_OP_FUNC_PROD(fortran_complex8, ompi_fortran_complex8_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL8 && OMPI_HAVE_FORTRAN_COMPLEX16
COMPLEX_OP_FUNC_PROD(fortran_complex16, ompi_fortran_complex16_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL16 && OMPI_HAVE_FORTRAN_COMPLEX32
COMPLEX_OP_FUNC_PROD(fortran_complex32, ompi_fortran_complex32_t)
#endif

/*************************************************************************
 * Logical AND
 *************************************************************************/

#undef current_func
#define current_func(a, b) ((a) && (b))
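
/*
 * Editorial sketch (not part of the original source): FUNC_FUNC(name,
 * type_name, type) presumably generates the same elementwise loop as
 * OP_FUNC, but combines elements through whatever current_func is #defined
 * just above it, so one generator covers every operation that is not a
 * plain C assignment operator.  Hypothetical expansion for
 * FUNC_FUNC(land, int, int) -- names are illustrative only:
 */
#if 0
static void example_land_int(const void *in, void *inout, int count)
{
    const int *a = (const int *) in;
    int *b = (int *) inout;
    int i;

    for (i = 0; i < count; ++i) {
        b[i] = current_func(a[i], b[i]);   /* here: ((a[i]) && (b[i])) */
    }
}
#endif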
/* C integer */
FUNC_FUNC(land, unsigned_char, unsigned char)
FUNC_FUNC(land, signed_char, signed char)
FUNC_FUNC(land, int, int)
FUNC_FUNC(land, long, long)
FUNC_FUNC(land, short, short)
FUNC_FUNC(land, unsigned_short, unsigned short)
FUNC_FUNC(land, unsigned, unsigned)
FUNC_FUNC(land, unsigned_long, unsigned long)
#if HAVE_LONG_LONG
FUNC_FUNC(land, long_long_int, long long int)
FUNC_FUNC(land, unsigned_long_long, unsigned long long)
#endif

/* Logical */
#if OMPI_HAVE_FORTRAN_LOGICAL
FUNC_FUNC(land, fortran_logical, ompi_fortran_logical_t)
#endif

/* C++ bool */
FUNC_FUNC(land, bool, bool)

/*************************************************************************
 * Logical OR
 *************************************************************************/

#undef current_func
#define current_func(a, b) ((a) || (b))

/* C integer */
FUNC_FUNC(lor, unsigned_char, unsigned char)
FUNC_FUNC(lor, signed_char, signed char)
FUNC_FUNC(lor, int, int)
FUNC_FUNC(lor, long, long)
FUNC_FUNC(lor, short, short)
FUNC_FUNC(lor, unsigned_short, unsigned short)
FUNC_FUNC(lor, unsigned, unsigned)
FUNC_FUNC(lor, unsigned_long, unsigned long)
#if HAVE_LONG_LONG
FUNC_FUNC(lor, long_long_int, long long int)
FUNC_FUNC(lor, unsigned_long_long, unsigned long long)
#endif

/* Logical */
#if OMPI_HAVE_FORTRAN_LOGICAL
FUNC_FUNC(lor, fortran_logical, ompi_fortran_logical_t)
#endif

/* C++ bool */
FUNC_FUNC(lor, bool, bool)

/*************************************************************************
 * Logical XOR
 *************************************************************************/

#undef current_func
#define current_func(a, b) (((a) ? 1 : 0) ^ ((b) ? 1 : 0))
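/*
 * Note (editorial): C has no logical-XOR operator, so each operand is
 * normalized to 0 or 1 before the bitwise "^" is applied; a plain
 * (a) ^ (b) would combine bit patterns rather than truth values
 * (e.g. 1 ^ 2 == 3, whereas "true XOR true" must be false).
 */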

/* C integer */
FUNC_FUNC(lxor, unsigned_char, unsigned char)
FUNC_FUNC(lxor, signed_char, signed char)
FUNC_FUNC(lxor, int, int)
FUNC_FUNC(lxor, long, long)
FUNC_FUNC(lxor, short, short)
FUNC_FUNC(lxor, unsigned_short, unsigned short)
FUNC_FUNC(lxor, unsigned, unsigned)
FUNC_FUNC(lxor, unsigned_long, unsigned long)
#if HAVE_LONG_LONG
FUNC_FUNC(lxor, long_long_int, long long int)
FUNC_FUNC(lxor, unsigned_long_long, unsigned long long)
#endif

/* Logical */
#if OMPI_HAVE_FORTRAN_LOGICAL
FUNC_FUNC(lxor, fortran_logical, ompi_fortran_logical_t)
#endif

/* C++ bool */
FUNC_FUNC(lxor, bool, bool)

/*************************************************************************
 * Bitwise AND
 *************************************************************************/

#undef current_func
#define current_func(a, b) ((a) & (b))
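/*
 * Note (editorial): the bitwise operations reuse the FUNC_FUNC generator
 * with a new current_func; MPI defines MPI_BAND/MPI_BOR/MPI_BXOR only for
 * integer and byte datatypes, which is why no floating-point or complex
 * instances appear in these sections.
 */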

/* C integer */
FUNC_FUNC(band, unsigned_char, unsigned char)
FUNC_FUNC(band, signed_char, signed char)
FUNC_FUNC(band, int, int)
FUNC_FUNC(band, long, long)
FUNC_FUNC(band, short, short)
FUNC_FUNC(band, unsigned_short, unsigned short)
FUNC_FUNC(band, unsigned, unsigned)
FUNC_FUNC(band, unsigned_long, unsigned long)
#if HAVE_LONG_LONG
FUNC_FUNC(band, long_long_int, long long int)
FUNC_FUNC(band, unsigned_long_long, unsigned long long)
#endif

/* Fortran integer */
#if OMPI_HAVE_FORTRAN_INTEGER
FUNC_FUNC(band, fortran_integer, ompi_fortran_integer_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER1
FUNC_FUNC(band, fortran_integer1, ompi_fortran_integer1_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER2
FUNC_FUNC(band, fortran_integer2, ompi_fortran_integer2_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER4
FUNC_FUNC(band, fortran_integer4, ompi_fortran_integer4_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER8
FUNC_FUNC(band, fortran_integer8, ompi_fortran_integer8_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER16
FUNC_FUNC(band, fortran_integer16, ompi_fortran_integer16_t)
#endif

/* Byte */
FUNC_FUNC(band, byte, char)

/*************************************************************************
 * Bitwise OR
 *************************************************************************/

#undef current_func
#define current_func(a, b) ((a) | (b))

/* C integer */
FUNC_FUNC(bor, unsigned_char, unsigned char)
FUNC_FUNC(bor, signed_char, signed char)
FUNC_FUNC(bor, int, int)
FUNC_FUNC(bor, long, long)
FUNC_FUNC(bor, short, short)
FUNC_FUNC(bor, unsigned_short, unsigned short)
FUNC_FUNC(bor, unsigned, unsigned)
FUNC_FUNC(bor, unsigned_long, unsigned long)
#if HAVE_LONG_LONG
FUNC_FUNC(bor, long_long_int, long long int)
FUNC_FUNC(bor, unsigned_long_long, unsigned long long)
#endif
|
2004-06-29 04:02:25 +04:00
|
|
|
/* Fortran integer */
#if OMPI_HAVE_FORTRAN_INTEGER
FUNC_FUNC(bor, fortran_integer, ompi_fortran_integer_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER1
FUNC_FUNC(bor, fortran_integer1, ompi_fortran_integer1_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER2
FUNC_FUNC(bor, fortran_integer2, ompi_fortran_integer2_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER4
FUNC_FUNC(bor, fortran_integer4, ompi_fortran_integer4_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER8
FUNC_FUNC(bor, fortran_integer8, ompi_fortran_integer8_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER16
FUNC_FUNC(bor, fortran_integer16, ompi_fortran_integer16_t)
#endif
/* Byte */
FUNC_FUNC(bor, byte, char)

/*************************************************************************
 * Bitwise XOR
 *************************************************************************/

#undef current_func
#define current_func(a, b) ((a) ^ (b))
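/* Illustration only: with this definition, current_func(0x0F, 0x3C)
   evaluates to 0x33 (the bits in which the operands differ). */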
/* C integer */
FUNC_FUNC(bxor, unsigned_char, unsigned char)
FUNC_FUNC(bxor, signed_char, signed char)
FUNC_FUNC(bxor, int, int)
FUNC_FUNC(bxor, long, long)
FUNC_FUNC(bxor, short, short)
FUNC_FUNC(bxor, unsigned_short, unsigned short)
FUNC_FUNC(bxor, unsigned, unsigned)
FUNC_FUNC(bxor, unsigned_long, unsigned long)
#if HAVE_LONG_LONG
FUNC_FUNC(bxor, long_long_int, long long int)
FUNC_FUNC(bxor, unsigned_long_long, unsigned long long)
#endif
/* Fortran integer */
#if OMPI_HAVE_FORTRAN_INTEGER
FUNC_FUNC(bxor, fortran_integer, ompi_fortran_integer_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER1
FUNC_FUNC(bxor, fortran_integer1, ompi_fortran_integer1_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER2
FUNC_FUNC(bxor, fortran_integer2, ompi_fortran_integer2_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER4
FUNC_FUNC(bxor, fortran_integer4, ompi_fortran_integer4_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER8
FUNC_FUNC(bxor, fortran_integer8, ompi_fortran_integer8_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER16
FUNC_FUNC(bxor, fortran_integer16, ompi_fortran_integer16_t)
#endif
/* Byte */
FUNC_FUNC(bxor, byte, char)

/*************************************************************************
 * Min and max location "pair" datatypes
 *************************************************************************/

#if OMPI_HAVE_FORTRAN_REAL
LOC_STRUCT(2real, ompi_fortran_real_t, ompi_fortran_real_t)
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION
LOC_STRUCT(2double_precision, ompi_fortran_double_precision_t, ompi_fortran_double_precision_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER
LOC_STRUCT(2integer, ompi_fortran_integer_t, ompi_fortran_integer_t)
#endif
LOC_STRUCT(float_int, float, int)
LOC_STRUCT(double_int, double, int)
LOC_STRUCT(long_int, long, int)
LOC_STRUCT(2int, int, int)
LOC_STRUCT(short_int, short, int)
#if HAVE_LONG_DOUBLE
LOC_STRUCT(long_double_int, long double, int)
#endif

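/*
 * Illustration only (not compiled): given the LOC_STRUCT macro defined
 * earlier in this file, LOC_STRUCT(float_int, float, int) produces the
 * value/index pair used for MPI_MAXLOC / MPI_MINLOC on MPI_FLOAT_INT:
 *
 *     typedef struct {
 *         float v;    - the value being compared
 *         int   k;    - its index (e.g., the contributing rank)
 *     } ompi_op_predefined_float_int_t;
 */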
/*************************************************************************
 * Max location
 *************************************************************************/

#if OMPI_HAVE_FORTRAN_REAL
LOC_FUNC(maxloc, 2real, >)
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION
LOC_FUNC(maxloc, 2double_precision, >)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER
LOC_FUNC(maxloc, 2integer, >)
#endif
LOC_FUNC(maxloc, float_int, >)
LOC_FUNC(maxloc, double_int, >)
LOC_FUNC(maxloc, long_int, >)
LOC_FUNC(maxloc, 2int, >)
LOC_FUNC(maxloc, short_int, >)
#if HAVE_LONG_DOUBLE
LOC_FUNC(maxloc, long_double_int, >)
#endif

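/*
 * Illustration only: reducing the {value, index} pairs {5.0, 0},
 * {9.0, 1}, and {9.0, 2} with maxloc on float_int yields {9.0, 1}:
 * the largest value wins, and ties are broken toward the smaller
 * index (see LOC_FUNC_3BUF below for the exact tie-breaking logic).
 */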
/*************************************************************************
 * Min location
 *************************************************************************/

#if OMPI_HAVE_FORTRAN_REAL
LOC_FUNC(minloc, 2real, <)
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION
LOC_FUNC(minloc, 2double_precision, <)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER
LOC_FUNC(minloc, 2integer, <)
#endif
LOC_FUNC(minloc, float_int, <)
LOC_FUNC(minloc, double_int, <)
LOC_FUNC(minloc, long_int, <)
LOC_FUNC(minloc, 2int, <)
LOC_FUNC(minloc, short_int, <)
#if HAVE_LONG_DOUBLE
LOC_FUNC(minloc, long_double_int, <)
#endif

/*
 * This is a three buffer (2 input and 1 output) version of the reduction
 * routines, needed for some optimizations.
 */
#define OP_FUNC_3BUF(name, type_name, type, op) \
  void ompi_op_base_3buff_##name##_##type_name(void * restrict in1, \
                                               void * restrict in2, void * restrict out, int *count, \
                                               struct ompi_datatype_t **dtype, \
                                               struct ompi_op_base_module_1_0_0_t *module) \
  { \
      int i; \
      type *a1 = (type *) in1; \
      type *a2 = (type *) in2; \
      type *b = (type *) out; \
      for (i = 0; i < *count; ++i) { \
          *(b++) = *(a1++) op *(a2++); \
      } \
  }

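/*
 * Illustration only (not compiled): OP_FUNC_3BUF(sum, int, int, +), as
 * instantiated in the Sum section below, expands to roughly:
 *
 *     void ompi_op_base_3buff_sum_int(void * restrict in1,
 *         void * restrict in2, void * restrict out, int *count,
 *         struct ompi_datatype_t **dtype,
 *         struct ompi_op_base_module_1_0_0_t *module)
 *     {
 *         int i;
 *         int *a1 = (int *) in1;
 *         int *a2 = (int *) in2;
 *         int *b = (int *) out;
 *         for (i = 0; i < *count; ++i) {
 *             *(b++) = *(a1++) + *(a2++);    - out[i] = in1[i] + in2[i]
 *         }
 *     }
 */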
#define COMPLEX_OP_FUNC_SUM_3BUF(type_name, type) \
  void ompi_op_base_3buff_sum_##type_name(void * restrict in1, \
                                          void * restrict in2, void * restrict out, int *count, \
                                          struct ompi_datatype_t **dtype, \
                                          struct ompi_op_base_module_1_0_0_t *module) \
  { \
      int i; \
      type *a1 = (type *) in1; \
      type *a2 = (type *) in2; \
      type *b = (type *) out; \
      for (i = 0; i < *count; ++i, ++b, ++a1, ++a2) { \
          b->real = a1->real + a2->real; \
          b->imag = a1->imag + a2->imag; \
      } \
  }

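/* The complex sum above is componentwise: (a + bi) + (c + di) = (a + c) + (b + d)i. */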
#define COMPLEX_OP_FUNC_PROD_3BUF(type_name, type) \
  void ompi_op_base_3buff_prod_##type_name(void * restrict in1, \
                                           void * restrict in2, void * restrict out, int *count, \
                                           struct ompi_datatype_t **dtype, \
                                           struct ompi_op_base_module_1_0_0_t *module) \
  { \
      int i; \
      type *a1 = (type *) in1; \
      type *a2 = (type *) in2; \
      type *b = (type *) out; \
      for (i = 0; i < *count; ++i, ++b, ++a1, ++a2) { \
          b->real = a1->real * a2->real - a1->imag * a2->imag; \
          b->imag = a1->imag * a2->real + a1->real * a2->imag; \
      } \
  }

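/*
 * The loop above implements the usual complex product
 * (a + bi)(c + di) = (ac - bd) + (ad + bc)i, applied elementwise to the
 * real/imag fields of each struct in the input buffers.
 */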
/*
 * Since all the functions in this file are essentially identical, we
 * use a macro to substitute in names and types.  The core operation
 * in all functions that use this macro is the same.
 *
 * This macro is for (out = op(in1, in2))
 */
#define FUNC_FUNC_3BUF(name, type_name, type) \
  void ompi_op_base_3buff_##name##_##type_name(void * restrict in1, \
                                               void * restrict in2, void * restrict out, int *count, \
                                               struct ompi_datatype_t **dtype, \
                                               struct ompi_op_base_module_1_0_0_t *module) \
  { \
      int i; \
      type *a1 = (type *) in1; \
      type *a2 = (type *) in2; \
      type *b = (type *) out; \
      for (i = 0; i < *count; ++i) { \
          *(b) = current_func(*(a1), *(a2)); \
          ++b; \
          ++a1; \
          ++a2; \
      } \
  }

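/*
 * Illustration only (not compiled): with the Max section's definition
 * current_func(a, b) == ((a) > (b) ? (a) : (b)) in effect,
 * FUNC_FUNC_3BUF(max, int, int) produces a function whose loop body is
 * effectively
 *
 *     b[i] = (a1[i] > a2[i]) ? a1[i] : a2[i];
 *
 * i.e., an elementwise maximum of the two input buffers.
 */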
/*
 * Since all the functions in this file are essentially identical, we
 * use a macro to substitute in names and types.  The core operation
 * in all functions that use this macro is the same.
 *
 * This macro is for minloc and maxloc
 */
/*
#define LOC_STRUCT(type_name, type1, type2) \
  typedef struct { \
      type1 v; \
      type2 k; \
  } ompi_op_predefined_##type_name##_t;
*/
#define LOC_FUNC_3BUF(name, type_name, op) \
  void ompi_op_base_3buff_##name##_##type_name(void * restrict in1, \
                                               void * restrict in2, void * restrict out, int *count, \
                                               struct ompi_datatype_t **dtype, \
                                               struct ompi_op_base_module_1_0_0_t *module) \
  { \
      int i; \
      ompi_op_predefined_##type_name##_t *a1 = (ompi_op_predefined_##type_name##_t*) in1; \
      ompi_op_predefined_##type_name##_t *a2 = (ompi_op_predefined_##type_name##_t*) in2; \
      ompi_op_predefined_##type_name##_t *b = (ompi_op_predefined_##type_name##_t*) out; \
      for (i = 0; i < *count; ++i, ++a1, ++a2, ++b) { \
          if (a1->v op a2->v) { \
              b->v = a1->v; \
              b->k = a1->k; \
          } else if (a1->v == a2->v) { \
              b->v = a1->v; \
              b->k = (a2->k < a1->k ? a2->k : a1->k); \
          } else { \
              b->v = a2->v; \
              b->k = a2->k; \
          } \
      } \
  }

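/*
 * Illustration only: for maxloc (op is '>'), combining the pair
 * a1 = {v = 7, k = 3} with a2 = {v = 7, k = 1} takes the equal-values
 * branch above and yields b = {v = 7, k = 1}: on a tie, the smaller
 * index k is kept, as MPI_MAXLOC / MPI_MINLOC require.
 */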
/*************************************************************************
 * Max
 *************************************************************************/

#undef current_func
#define current_func(a, b) ((a) > (b) ? (a) : (b))
/* C integer */
FUNC_FUNC_3BUF(max, signed_char, signed char)
FUNC_FUNC_3BUF(max, unsigned_char, unsigned char)
FUNC_FUNC_3BUF(max, int, int)
FUNC_FUNC_3BUF(max, long, long)
FUNC_FUNC_3BUF(max, short, short)
FUNC_FUNC_3BUF(max, unsigned_short, unsigned short)
FUNC_FUNC_3BUF(max, unsigned, unsigned)
FUNC_FUNC_3BUF(max, unsigned_long, unsigned long)
#if HAVE_LONG_LONG
FUNC_FUNC_3BUF(max, long_long_int, long long int)
FUNC_FUNC_3BUF(max, unsigned_long_long, unsigned long long)
#endif
/* Fortran integer */
#if OMPI_HAVE_FORTRAN_INTEGER
FUNC_FUNC_3BUF(max, fortran_integer, ompi_fortran_integer_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER1
FUNC_FUNC_3BUF(max, fortran_integer1, ompi_fortran_integer1_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER2
FUNC_FUNC_3BUF(max, fortran_integer2, ompi_fortran_integer2_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER4
FUNC_FUNC_3BUF(max, fortran_integer4, ompi_fortran_integer4_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER8
FUNC_FUNC_3BUF(max, fortran_integer8, ompi_fortran_integer8_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER16
FUNC_FUNC_3BUF(max, fortran_integer16, ompi_fortran_integer16_t)
#endif
/* Floating point */
FUNC_FUNC_3BUF(max, float, float)
FUNC_FUNC_3BUF(max, double, double)
#if HAVE_LONG_DOUBLE
FUNC_FUNC_3BUF(max, long_double, long double)
#endif
#if OMPI_HAVE_FORTRAN_REAL
FUNC_FUNC_3BUF(max, fortran_real, ompi_fortran_real_t)
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION
FUNC_FUNC_3BUF(max, fortran_double_precision, ompi_fortran_double_precision_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL2
FUNC_FUNC_3BUF(max, fortran_real2, ompi_fortran_real2_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL4
FUNC_FUNC_3BUF(max, fortran_real4, ompi_fortran_real4_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL8
FUNC_FUNC_3BUF(max, fortran_real8, ompi_fortran_real8_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL16
FUNC_FUNC_3BUF(max, fortran_real16, ompi_fortran_real16_t)
#endif

/*************************************************************************
 * Min
 *************************************************************************/

#undef current_func
#define current_func(a, b) ((a) < (b) ? (a) : (b))
/* C integer */
FUNC_FUNC_3BUF(min, signed_char, signed char)
FUNC_FUNC_3BUF(min, unsigned_char, unsigned char)
FUNC_FUNC_3BUF(min, int, int)
FUNC_FUNC_3BUF(min, long, long)
FUNC_FUNC_3BUF(min, short, short)
FUNC_FUNC_3BUF(min, unsigned_short, unsigned short)
FUNC_FUNC_3BUF(min, unsigned, unsigned)
FUNC_FUNC_3BUF(min, unsigned_long, unsigned long)
#if HAVE_LONG_LONG
FUNC_FUNC_3BUF(min, long_long_int, long long int)
FUNC_FUNC_3BUF(min, unsigned_long_long, unsigned long long)
#endif
/* Fortran integer */
#if OMPI_HAVE_FORTRAN_INTEGER
FUNC_FUNC_3BUF(min, fortran_integer, ompi_fortran_integer_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER1
FUNC_FUNC_3BUF(min, fortran_integer1, ompi_fortran_integer1_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER2
FUNC_FUNC_3BUF(min, fortran_integer2, ompi_fortran_integer2_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER4
FUNC_FUNC_3BUF(min, fortran_integer4, ompi_fortran_integer4_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER8
FUNC_FUNC_3BUF(min, fortran_integer8, ompi_fortran_integer8_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER16
FUNC_FUNC_3BUF(min, fortran_integer16, ompi_fortran_integer16_t)
#endif
/* Floating point */
FUNC_FUNC_3BUF(min, float, float)
FUNC_FUNC_3BUF(min, double, double)
#if HAVE_LONG_DOUBLE
FUNC_FUNC_3BUF(min, long_double, long double)
#endif
#if OMPI_HAVE_FORTRAN_REAL
FUNC_FUNC_3BUF(min, fortran_real, ompi_fortran_real_t)
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION
FUNC_FUNC_3BUF(min, fortran_double_precision, ompi_fortran_double_precision_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL2
FUNC_FUNC_3BUF(min, fortran_real2, ompi_fortran_real2_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL4
FUNC_FUNC_3BUF(min, fortran_real4, ompi_fortran_real4_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL8
FUNC_FUNC_3BUF(min, fortran_real8, ompi_fortran_real8_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL16
FUNC_FUNC_3BUF(min, fortran_real16, ompi_fortran_real16_t)
#endif

/*************************************************************************
 * Sum
 *************************************************************************/

/* C integer */
OP_FUNC_3BUF(sum, signed_char, signed char, +)
OP_FUNC_3BUF(sum, unsigned_char, unsigned char, +)
OP_FUNC_3BUF(sum, int, int, +)
OP_FUNC_3BUF(sum, long, long, +)
OP_FUNC_3BUF(sum, short, short, +)
OP_FUNC_3BUF(sum, unsigned_short, unsigned short, +)
OP_FUNC_3BUF(sum, unsigned, unsigned, +)
OP_FUNC_3BUF(sum, unsigned_long, unsigned long, +)
#if HAVE_LONG_LONG
OP_FUNC_3BUF(sum, long_long_int, long long int, +)
OP_FUNC_3BUF(sum, unsigned_long_long, unsigned long long, +)
#endif
/* Fortran integer */
|
2009-06-01 23:02:34 +04:00
|
|
|
#if OMPI_HAVE_FORTRAN_INTEGER
|
|
|
|
OP_FUNC_3BUF(sum, fortran_integer, ompi_fortran_integer_t, +)
|
2008-03-29 02:45:44 +03:00
|
|
|
#endif
|
2009-06-01 23:02:34 +04:00
|
|
|
#if OMPI_HAVE_FORTRAN_INTEGER1
|
|
|
|
OP_FUNC_3BUF(sum, fortran_integer1, ompi_fortran_integer1_t, +)
|
2008-03-29 02:45:44 +03:00
|
|
|
#endif
|
2009-06-01 23:02:34 +04:00
|
|
|
#if OMPI_HAVE_FORTRAN_INTEGER2
|
|
|
|
OP_FUNC_3BUF(sum, fortran_integer2, ompi_fortran_integer2_t, +)
|
2008-03-29 02:45:44 +03:00
|
|
|
#endif
|
2009-06-01 23:02:34 +04:00
|
|
|
#if OMPI_HAVE_FORTRAN_INTEGER4
|
|
|
|
OP_FUNC_3BUF(sum, fortran_integer4, ompi_fortran_integer4_t, +)
|
2008-03-29 02:45:44 +03:00
|
|
|
#endif
|
2009-06-01 23:02:34 +04:00
|
|
|
#if OMPI_HAVE_FORTRAN_INTEGER8
|
|
|
|
OP_FUNC_3BUF(sum, fortran_integer8, ompi_fortran_integer8_t, +)
|
2008-03-29 02:45:44 +03:00
|
|
|
#endif
|
2009-06-01 23:02:34 +04:00
|
|
|
#if OMPI_HAVE_FORTRAN_INTEGER16
|
|
|
|
OP_FUNC_3BUF(sum, fortran_integer16, ompi_fortran_integer16_t, +)
|
2008-03-29 02:45:44 +03:00
|
|
|
#endif

/* Floating point */
OP_FUNC_3BUF(sum, float, float, +)
OP_FUNC_3BUF(sum, double, double, +)
#if HAVE_LONG_DOUBLE
OP_FUNC_3BUF(sum, long_double, long double, +)
#endif
#if OMPI_HAVE_FORTRAN_REAL
OP_FUNC_3BUF(sum, fortran_real, ompi_fortran_real_t, +)
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION
OP_FUNC_3BUF(sum, fortran_double_precision, ompi_fortran_double_precision_t, +)
#endif
#if OMPI_HAVE_FORTRAN_REAL2
OP_FUNC_3BUF(sum, fortran_real2, ompi_fortran_real2_t, +)
#endif
#if OMPI_HAVE_FORTRAN_REAL4
OP_FUNC_3BUF(sum, fortran_real4, ompi_fortran_real4_t, +)
#endif
#if OMPI_HAVE_FORTRAN_REAL8
OP_FUNC_3BUF(sum, fortran_real8, ompi_fortran_real8_t, +)
#endif
#if OMPI_HAVE_FORTRAN_REAL16
OP_FUNC_3BUF(sum, fortran_real16, ompi_fortran_real16_t, +)
#endif

/* Complex */
#if OMPI_HAVE_FORTRAN_REAL && OMPI_HAVE_FORTRAN_COMPLEX
COMPLEX_OP_FUNC_SUM_3BUF(fortran_complex, ompi_fortran_complex_t)
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION && OMPI_HAVE_FORTRAN_COMPLEX
COMPLEX_OP_FUNC_SUM_3BUF(fortran_double_complex, ompi_fortran_double_complex_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL4 && OMPI_HAVE_FORTRAN_COMPLEX8
COMPLEX_OP_FUNC_SUM_3BUF(fortran_complex8, ompi_fortran_complex8_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL8 && OMPI_HAVE_FORTRAN_COMPLEX16
COMPLEX_OP_FUNC_SUM_3BUF(fortran_complex16, ompi_fortran_complex16_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL16 && OMPI_HAVE_FORTRAN_COMPLEX32
COMPLEX_OP_FUNC_SUM_3BUF(fortran_complex32, ompi_fortran_complex32_t)
#endif
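
/*
 * Illustrative sketch only (not compiled): the Fortran complex types are
 * assumed to be structs with real and imaginary members, so complex sum
 * is componentwise addition along these lines (type and function names
 * hypothetical):
 */
#if 0
typedef struct { float real, imag; } example_complex_t;
static void example_3buff_sum_complex(void *in1, void *in2, void *out, int count)
{
    int i;
    const example_complex_t *a1 = (const example_complex_t *) in1;
    const example_complex_t *a2 = (const example_complex_t *) in2;
    example_complex_t *b = (example_complex_t *) out;
    for (i = 0; i < count; ++i) {
        b[i].real = a1[i].real + a2[i].real;   /* real parts add */
        b[i].imag = a1[i].imag + a2[i].imag;   /* imaginary parts add */
    }
}
#endif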

/*************************************************************************
 * Product
 *************************************************************************/

/* C integer */
OP_FUNC_3BUF(prod, signed_char, signed char, *)
OP_FUNC_3BUF(prod, unsigned_char, unsigned char, *)
OP_FUNC_3BUF(prod, int, int, *)
OP_FUNC_3BUF(prod, long, long, *)
OP_FUNC_3BUF(prod, short, short, *)
OP_FUNC_3BUF(prod, unsigned_short, unsigned short, *)
OP_FUNC_3BUF(prod, unsigned, unsigned, *)
OP_FUNC_3BUF(prod, unsigned_long, unsigned long, *)
#if HAVE_LONG_LONG
OP_FUNC_3BUF(prod, long_long_int, long long int, *)
OP_FUNC_3BUF(prod, unsigned_long_long, unsigned long long, *)
#endif

/* Fortran integer */
#if OMPI_HAVE_FORTRAN_INTEGER
OP_FUNC_3BUF(prod, fortran_integer, ompi_fortran_integer_t, *)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER1
OP_FUNC_3BUF(prod, fortran_integer1, ompi_fortran_integer1_t, *)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER2
OP_FUNC_3BUF(prod, fortran_integer2, ompi_fortran_integer2_t, *)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER4
OP_FUNC_3BUF(prod, fortran_integer4, ompi_fortran_integer4_t, *)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER8
OP_FUNC_3BUF(prod, fortran_integer8, ompi_fortran_integer8_t, *)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER16
OP_FUNC_3BUF(prod, fortran_integer16, ompi_fortran_integer16_t, *)
#endif

/* Floating point */
OP_FUNC_3BUF(prod, float, float, *)
OP_FUNC_3BUF(prod, double, double, *)
#if HAVE_LONG_DOUBLE
OP_FUNC_3BUF(prod, long_double, long double, *)
#endif
#if OMPI_HAVE_FORTRAN_REAL
OP_FUNC_3BUF(prod, fortran_real, ompi_fortran_real_t, *)
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION
OP_FUNC_3BUF(prod, fortran_double_precision, ompi_fortran_double_precision_t, *)
#endif
#if OMPI_HAVE_FORTRAN_REAL2
OP_FUNC_3BUF(prod, fortran_real2, ompi_fortran_real2_t, *)
#endif
#if OMPI_HAVE_FORTRAN_REAL4
OP_FUNC_3BUF(prod, fortran_real4, ompi_fortran_real4_t, *)
#endif
#if OMPI_HAVE_FORTRAN_REAL8
OP_FUNC_3BUF(prod, fortran_real8, ompi_fortran_real8_t, *)
#endif
#if OMPI_HAVE_FORTRAN_REAL16
OP_FUNC_3BUF(prod, fortran_real16, ompi_fortran_real16_t, *)
#endif

/* Complex */
#if OMPI_HAVE_FORTRAN_REAL && OMPI_HAVE_FORTRAN_COMPLEX
COMPLEX_OP_FUNC_PROD_3BUF(fortran_complex, ompi_fortran_complex_t)
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION && OMPI_HAVE_FORTRAN_COMPLEX
COMPLEX_OP_FUNC_PROD_3BUF(fortran_double_complex, ompi_fortran_double_complex_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL4 && OMPI_HAVE_FORTRAN_COMPLEX8
COMPLEX_OP_FUNC_PROD_3BUF(fortran_complex8, ompi_fortran_complex8_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL8 && OMPI_HAVE_FORTRAN_COMPLEX16
COMPLEX_OP_FUNC_PROD_3BUF(fortran_complex16, ompi_fortran_complex16_t)
#endif
#if OMPI_HAVE_FORTRAN_REAL16 && OMPI_HAVE_FORTRAN_COMPLEX32
COMPLEX_OP_FUNC_PROD_3BUF(fortran_complex32, ompi_fortran_complex32_t)
#endif

/*************************************************************************
 * Logical AND
 *************************************************************************/

#undef current_func
#define current_func(a, b) ((a) && (b))
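/*
 * Note: each of the logical/bitwise sections below re-#defines
 * current_func before instantiating FUNC_FUNC_3BUF, so every generated
 * function body picks up whichever elementwise expression was in effect
 * at its expansion point (here: logical AND).
 */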
/* C integer */
FUNC_FUNC_3BUF(land, unsigned_char, unsigned char)
FUNC_FUNC_3BUF(land, signed_char, signed char)
FUNC_FUNC_3BUF(land, int, int)
FUNC_FUNC_3BUF(land, long, long)
FUNC_FUNC_3BUF(land, short, short)
FUNC_FUNC_3BUF(land, unsigned_short, unsigned short)
FUNC_FUNC_3BUF(land, unsigned, unsigned)
FUNC_FUNC_3BUF(land, unsigned_long, unsigned long)
#if HAVE_LONG_LONG
FUNC_FUNC_3BUF(land, long_long_int, long long int)
FUNC_FUNC_3BUF(land, unsigned_long_long, unsigned long long)
#endif

/* Logical */
#if OMPI_HAVE_FORTRAN_LOGICAL
FUNC_FUNC_3BUF(land, fortran_logical, ompi_fortran_logical_t)
#endif

/* C++ bool */
FUNC_FUNC_3BUF(land, bool, bool)

/*************************************************************************
 * Logical OR
 *************************************************************************/

#undef current_func
#define current_func(a, b) ((a) || (b))
/* C integer */
FUNC_FUNC_3BUF(lor, unsigned_char, unsigned char)
FUNC_FUNC_3BUF(lor, signed_char, signed char)
FUNC_FUNC_3BUF(lor, int, int)
FUNC_FUNC_3BUF(lor, long, long)
FUNC_FUNC_3BUF(lor, short, short)
FUNC_FUNC_3BUF(lor, unsigned_short, unsigned short)
FUNC_FUNC_3BUF(lor, unsigned, unsigned)
FUNC_FUNC_3BUF(lor, unsigned_long, unsigned long)
#if HAVE_LONG_LONG
FUNC_FUNC_3BUF(lor, long_long_int, long long int)
FUNC_FUNC_3BUF(lor, unsigned_long_long, unsigned long long)
#endif

/* Logical */
#if OMPI_HAVE_FORTRAN_LOGICAL
FUNC_FUNC_3BUF(lor, fortran_logical, ompi_fortran_logical_t)
#endif

/* C++ bool */
FUNC_FUNC_3BUF(lor, bool, bool)

/*************************************************************************
 * Logical XOR
 *************************************************************************/

#undef current_func
#define current_func(a, b) ((a ? 1 : 0) ^ (b ? 1 : 0))
/* C integer */
FUNC_FUNC_3BUF(lxor, unsigned_char, unsigned char)
FUNC_FUNC_3BUF(lxor, signed_char, signed char)
FUNC_FUNC_3BUF(lxor, int, int)
FUNC_FUNC_3BUF(lxor, long, long)
FUNC_FUNC_3BUF(lxor, short, short)
FUNC_FUNC_3BUF(lxor, unsigned_short, unsigned short)
FUNC_FUNC_3BUF(lxor, unsigned, unsigned)
FUNC_FUNC_3BUF(lxor, unsigned_long, unsigned long)
#if HAVE_LONG_LONG
FUNC_FUNC_3BUF(lxor, long_long_int, long long int)
FUNC_FUNC_3BUF(lxor, unsigned_long_long, unsigned long long)
#endif

/* Logical */
#if OMPI_HAVE_FORTRAN_LOGICAL
FUNC_FUNC_3BUF(lxor, fortran_logical, ompi_fortran_logical_t)
#endif

/* C++ bool */
FUNC_FUNC_3BUF(lxor, bool, bool)
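/*
 * Note: the lxor current_func normalizes each operand to 0/1 before
 * applying ^, so two different nonzero "true" encodings correctly XOR to
 * logical false, which plain bitwise ^ on the raw values would not
 * guarantee.
 */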

/*************************************************************************
 * Bitwise AND
 *************************************************************************/

#undef current_func
#define current_func(a, b) ((a) & (b))
/* C integer */
FUNC_FUNC_3BUF(band, unsigned_char, unsigned char)
FUNC_FUNC_3BUF(band, signed_char, signed char)
FUNC_FUNC_3BUF(band, int, int)
FUNC_FUNC_3BUF(band, long, long)
FUNC_FUNC_3BUF(band, short, short)
FUNC_FUNC_3BUF(band, unsigned_short, unsigned short)
FUNC_FUNC_3BUF(band, unsigned, unsigned)
FUNC_FUNC_3BUF(band, unsigned_long, unsigned long)
#if HAVE_LONG_LONG
FUNC_FUNC_3BUF(band, long_long_int, long long int)
FUNC_FUNC_3BUF(band, unsigned_long_long, unsigned long long)
#endif

/* Fortran integer */
#if OMPI_HAVE_FORTRAN_INTEGER
FUNC_FUNC_3BUF(band, fortran_integer, ompi_fortran_integer_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER1
FUNC_FUNC_3BUF(band, fortran_integer1, ompi_fortran_integer1_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER2
FUNC_FUNC_3BUF(band, fortran_integer2, ompi_fortran_integer2_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER4
FUNC_FUNC_3BUF(band, fortran_integer4, ompi_fortran_integer4_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER8
FUNC_FUNC_3BUF(band, fortran_integer8, ompi_fortran_integer8_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER16
FUNC_FUNC_3BUF(band, fortran_integer16, ompi_fortran_integer16_t)
#endif

/* Byte */
FUNC_FUNC_3BUF(band, byte, char)

/*************************************************************************
 * Bitwise OR
 *************************************************************************/

#undef current_func
#define current_func(a, b) ((a) | (b))
/* C integer */
FUNC_FUNC_3BUF(bor, unsigned_char, unsigned char)
FUNC_FUNC_3BUF(bor, signed_char, signed char)
FUNC_FUNC_3BUF(bor, int, int)
FUNC_FUNC_3BUF(bor, long, long)
FUNC_FUNC_3BUF(bor, short, short)
FUNC_FUNC_3BUF(bor, unsigned_short, unsigned short)
FUNC_FUNC_3BUF(bor, unsigned, unsigned)
FUNC_FUNC_3BUF(bor, unsigned_long, unsigned long)
#if HAVE_LONG_LONG
FUNC_FUNC_3BUF(bor, long_long_int, long long int)
FUNC_FUNC_3BUF(bor, unsigned_long_long, unsigned long long)
#endif

/* Fortran integer */
#if OMPI_HAVE_FORTRAN_INTEGER
FUNC_FUNC_3BUF(bor, fortran_integer, ompi_fortran_integer_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER1
FUNC_FUNC_3BUF(bor, fortran_integer1, ompi_fortran_integer1_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER2
FUNC_FUNC_3BUF(bor, fortran_integer2, ompi_fortran_integer2_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER4
FUNC_FUNC_3BUF(bor, fortran_integer4, ompi_fortran_integer4_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER8
FUNC_FUNC_3BUF(bor, fortran_integer8, ompi_fortran_integer8_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER16
FUNC_FUNC_3BUF(bor, fortran_integer16, ompi_fortran_integer16_t)
#endif

/* Byte */
FUNC_FUNC_3BUF(bor, byte, char)

/*************************************************************************
 * Bitwise XOR
 *************************************************************************/

#undef current_func
#define current_func(a, b) ((a) ^ (b))
/* C integer */
FUNC_FUNC_3BUF(bxor, unsigned_char, unsigned char)
FUNC_FUNC_3BUF(bxor, signed_char, signed char)
FUNC_FUNC_3BUF(bxor, int, int)
FUNC_FUNC_3BUF(bxor, long, long)
FUNC_FUNC_3BUF(bxor, short, short)
FUNC_FUNC_3BUF(bxor, unsigned_short, unsigned short)
FUNC_FUNC_3BUF(bxor, unsigned, unsigned)
FUNC_FUNC_3BUF(bxor, unsigned_long, unsigned long)
#if HAVE_LONG_LONG
FUNC_FUNC_3BUF(bxor, long_long_int, long long int)
FUNC_FUNC_3BUF(bxor, unsigned_long_long, unsigned long long)
#endif

/* Fortran integer */
#if OMPI_HAVE_FORTRAN_INTEGER
FUNC_FUNC_3BUF(bxor, fortran_integer, ompi_fortran_integer_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER1
FUNC_FUNC_3BUF(bxor, fortran_integer1, ompi_fortran_integer1_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER2
FUNC_FUNC_3BUF(bxor, fortran_integer2, ompi_fortran_integer2_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER4
FUNC_FUNC_3BUF(bxor, fortran_integer4, ompi_fortran_integer4_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER8
FUNC_FUNC_3BUF(bxor, fortran_integer8, ompi_fortran_integer8_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER16
FUNC_FUNC_3BUF(bxor, fortran_integer16, ompi_fortran_integer16_t)
#endif

/* Byte */
FUNC_FUNC_3BUF(bxor, byte, char)

/*************************************************************************
 * Min and max location "pair" datatypes
 *************************************************************************/

/*
#if OMPI_HAVE_FORTRAN_REAL
LOC_STRUCT_3BUF(2real, ompi_fortran_real_t, ompi_fortran_real_t)
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION
LOC_STRUCT_3BUF(2double_precision, ompi_fortran_double_precision_t, ompi_fortran_double_precision_t)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER
LOC_STRUCT_3BUF(2integer, ompi_fortran_integer_t, ompi_fortran_integer_t)
#endif
LOC_STRUCT_3BUF(float_int, float, int)
LOC_STRUCT_3BUF(double_int, double, int)
LOC_STRUCT_3BUF(long_int, long, int)
LOC_STRUCT_3BUF(2int, int, int)
LOC_STRUCT_3BUF(short_int, short, int)
#if HAVE_LONG_DOUBLE
LOC_STRUCT_3BUF(long_double_int, long double, int)
#endif
*/

/*************************************************************************
 * Max location
 *************************************************************************/

#if OMPI_HAVE_FORTRAN_REAL
LOC_FUNC_3BUF(maxloc, 2real, >)
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION
LOC_FUNC_3BUF(maxloc, 2double_precision, >)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER
LOC_FUNC_3BUF(maxloc, 2integer, >)
#endif
LOC_FUNC_3BUF(maxloc, float_int, >)
LOC_FUNC_3BUF(maxloc, double_int, >)
LOC_FUNC_3BUF(maxloc, long_int, >)
LOC_FUNC_3BUF(maxloc, 2int, >)
LOC_FUNC_3BUF(maxloc, short_int, >)
#if HAVE_LONG_DOUBLE
LOC_FUNC_3BUF(maxloc, long_double_int, >)
#endif
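
/*
 * Illustrative sketch only (not compiled): LOC_FUNC_3BUF(name, type_name,
 * op) is assumed to generate a loop over {value, index} pairs that keeps
 * the pair whose value wins under "op" and, per MPI's MAXLOC/MINLOC
 * semantics, takes the smaller index when the values are equal.  Type
 * and function names below are hypothetical.
 */
#if 0
typedef struct { float v; int k; } example_float_int_t;   /* value + location */
static void example_3buff_maxloc(void *in1, void *in2, void *out, int count)
{
    int i;
    const example_float_int_t *a1 = (const example_float_int_t *) in1;
    const example_float_int_t *a2 = (const example_float_int_t *) in2;
    example_float_int_t *b = (example_float_int_t *) out;
    for (i = 0; i < count; ++i) {
        if (a1[i].v > a2[i].v) {          /* "op" is '>' for maxloc */
            b[i] = a1[i];
        } else if (a1[i].v < a2[i].v) {
            b[i] = a2[i];
        } else {                          /* tie: keep the lower index */
            b[i].v = a1[i].v;
            b[i].k = (a1[i].k < a2[i].k) ? a1[i].k : a2[i].k;
        }
    }
}
#endif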

/*************************************************************************
 * Min location
 *************************************************************************/

#if OMPI_HAVE_FORTRAN_REAL
LOC_FUNC_3BUF(minloc, 2real, <)
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION
LOC_FUNC_3BUF(minloc, 2double_precision, <)
#endif
#if OMPI_HAVE_FORTRAN_INTEGER
LOC_FUNC_3BUF(minloc, 2integer, <)
#endif
LOC_FUNC_3BUF(minloc, float_int, <)
LOC_FUNC_3BUF(minloc, double_int, <)
LOC_FUNC_3BUF(minloc, long_int, <)
LOC_FUNC_3BUF(minloc, 2int, <)
LOC_FUNC_3BUF(minloc, short_int, <)
#if HAVE_LONG_DOUBLE
LOC_FUNC_3BUF(minloc, long_double_int, <)
#endif

/*
 * Helpful defines, because there's soooo many names!
 *
 * **NOTE** These #define's are strictly ordered!  A series of macros
 * are built up to assemble a list of function names (or NULLs) that
 * are put into the intrinsic ompi_op_t's in the middle of this file.
 * The order of these function names is critical, and must be the same
 * as the OMPI_OP_BASE_TYPE_* enums in ompi/mca/op/op.h (i.e., the
 * enum's starting with OMPI_OP_BASE_TYPE_UNSIGNED_CHAR).
 */

/** C integer ***********************************************************/

#ifdef HAVE_LONG_LONG
#define C_INTEGER_LONG_LONG(name) \
  ompi_op_base_##name##_long_long_int, /* OMPI_OP_BASE_TYPE_LONG_LONG_INT */ \
  ompi_op_base_##name##_unsigned_long_long /* OMPI_OP_BASE_TYPE_UNSIGNED_LONG_LONG */
#define C_INTEGER_LONG_LONG_3BUFF(name) \
  ompi_op_base_3buff_##name##_long_long_int, /* OMPI_OP_BASE_TYPE_LONG_LONG_INT */ \
  ompi_op_base_3buff_##name##_unsigned_long_long /* OMPI_OP_BASE_TYPE_UNSIGNED_LONG_LONG */
#else
#define C_INTEGER_LONG_LONG(name) \
  NULL, /* OMPI_OP_BASE_TYPE_LONG_LONG_INT */ \
  NULL  /* OMPI_OP_BASE_TYPE_UNSIGNED_LONG_LONG */
#define C_INTEGER_LONG_LONG_3BUFF(name) \
  NULL, /* OMPI_OP_BASE_TYPE_LONG_LONG_INT */ \
  NULL  /* OMPI_OP_BASE_TYPE_UNSIGNED_LONG_LONG */
#endif
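
/*
 * Note: the #else branch substitutes NULL placeholders rather than
 * omitting the entries, so the slot count (and therefore the enum
 * alignment of every later entry in the assembled list) stays fixed
 * whether or not long long support is compiled in.
 */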

#define C_INTEGER(name) \
  ompi_op_base_##name##_unsigned_char, /* OMPI_OP_BASE_TYPE_UNSIGNED_CHAR */ \
  ompi_op_base_##name##_signed_char, /* OMPI_OP_BASE_TYPE_SIGNED_CHAR */ \
  ompi_op_base_##name##_int, /* OMPI_OP_BASE_TYPE_INT */ \
  ompi_op_base_##name##_long, /* OMPI_OP_BASE_TYPE_LONG */ \
  ompi_op_base_##name##_short, /* OMPI_OP_BASE_TYPE_SHORT */ \
  ompi_op_base_##name##_unsigned_short, /* OMPI_OP_BASE_TYPE_UNSIGNED_SHORT */ \
  ompi_op_base_##name##_unsigned, /* OMPI_OP_BASE_TYPE_UNSIGNED */ \
  ompi_op_base_##name##_unsigned_long, /* OMPI_OP_BASE_TYPE_UNSIGNED_LONG */ \
  C_INTEGER_LONG_LONG(name)

#define C_INTEGER_3BUFF(name) \
  ompi_op_base_3buff_##name##_unsigned_char, /* OMPI_OP_BASE_TYPE_UNSIGNED_CHAR */ \
  ompi_op_base_3buff_##name##_signed_char, /* OMPI_OP_BASE_TYPE_SIGNED_CHAR */ \
  ompi_op_base_3buff_##name##_int, /* OMPI_OP_BASE_TYPE_INT */ \
  ompi_op_base_3buff_##name##_long, /* OMPI_OP_BASE_TYPE_LONG */ \
  ompi_op_base_3buff_##name##_short, /* OMPI_OP_BASE_TYPE_SHORT */ \
  ompi_op_base_3buff_##name##_unsigned_short, /* OMPI_OP_BASE_TYPE_UNSIGNED_SHORT */ \
  ompi_op_base_3buff_##name##_unsigned, /* OMPI_OP_BASE_TYPE_UNSIGNED */ \
  ompi_op_base_3buff_##name##_unsigned_long, /* OMPI_OP_BASE_TYPE_UNSIGNED_LONG */ \
  C_INTEGER_LONG_LONG_3BUFF(name)

#define C_INTEGER_NULL \
  NULL, /* OMPI_OP_BASE_TYPE_UNSIGNED_CHAR */ \
  NULL, /* OMPI_OP_BASE_TYPE_SIGNED_CHAR */ \
  NULL, /* OMPI_OP_BASE_TYPE_INT */ \
  NULL, /* OMPI_OP_BASE_TYPE_LONG */ \
  NULL, /* OMPI_OP_BASE_TYPE_SHORT */ \
  NULL, /* OMPI_OP_BASE_TYPE_UNSIGNED_SHORT */ \
  NULL, /* OMPI_OP_BASE_TYPE_UNSIGNED */ \
  NULL, /* OMPI_OP_BASE_TYPE_UNSIGNED_LONG */ \
  NULL, /* OMPI_OP_BASE_TYPE_LONG_LONG_INT */ \
  NULL  /* OMPI_OP_BASE_TYPE_UNSIGNED_LONG_LONG */

#define C_INTEGER_NULL_3BUFF \
  NULL, /* OMPI_OP_BASE_TYPE_UNSIGNED_CHAR */ \
  NULL, /* OMPI_OP_BASE_TYPE_SIGNED_CHAR */ \
  NULL, /* OMPI_OP_BASE_TYPE_INT */ \
  NULL, /* OMPI_OP_BASE_TYPE_LONG */ \
  NULL, /* OMPI_OP_BASE_TYPE_SHORT */ \
  NULL, /* OMPI_OP_BASE_TYPE_UNSIGNED_SHORT */ \
  NULL, /* OMPI_OP_BASE_TYPE_UNSIGNED */ \
  NULL, /* OMPI_OP_BASE_TYPE_UNSIGNED_LONG */ \
  NULL, /* OMPI_OP_BASE_TYPE_LONG_LONG_INT */ \
  NULL  /* OMPI_OP_BASE_TYPE_UNSIGNED_LONG_LONG */
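
/*
 * Illustrative note: by plain token pasting, C_INTEGER(min) is intended
 * to expand to the comma-separated slot list
 *
 *   ompi_op_base_min_unsigned_char, ompi_op_base_min_signed_char,
 *   ompi_op_base_min_int, ..., ompi_op_base_min_unsigned_long_long
 *
 * which a per-op function-pointer table can splice in directly, while
 * C_INTEGER_NULL fills the same ten slots for ops that (presumably) have
 * no C-integer handler.
 */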

/** All the Fortran integers ********************************************/

#if OMPI_HAVE_FORTRAN_INTEGER
#define FORTRAN_INTEGER_PLAIN(name) ompi_op_base_##name##_fortran_integer
#define FORTRAN_INTEGER_PLAIN_3BUFF(name) ompi_op_base_3buff_##name##_fortran_integer
#else
#define FORTRAN_INTEGER_PLAIN(name) NULL
#define FORTRAN_INTEGER_PLAIN_3BUFF(name) NULL
#endif
#if OMPI_HAVE_FORTRAN_INTEGER1
#define FORTRAN_INTEGER1(name) ompi_op_base_##name##_fortran_integer1
#define FORTRAN_INTEGER1_3BUFF(name) ompi_op_base_3buff_##name##_fortran_integer1
#else
#define FORTRAN_INTEGER1(name) NULL
#define FORTRAN_INTEGER1_3BUFF(name) NULL
#endif
#if OMPI_HAVE_FORTRAN_INTEGER2
#define FORTRAN_INTEGER2(name) ompi_op_base_##name##_fortran_integer2
#define FORTRAN_INTEGER2_3BUFF(name) ompi_op_base_3buff_##name##_fortran_integer2
#else
#define FORTRAN_INTEGER2(name) NULL
#define FORTRAN_INTEGER2_3BUFF(name) NULL
#endif
#if OMPI_HAVE_FORTRAN_INTEGER4
#define FORTRAN_INTEGER4(name) ompi_op_base_##name##_fortran_integer4
#define FORTRAN_INTEGER4_3BUFF(name) ompi_op_base_3buff_##name##_fortran_integer4
#else
#define FORTRAN_INTEGER4(name) NULL
#define FORTRAN_INTEGER4_3BUFF(name) NULL
#endif
#if OMPI_HAVE_FORTRAN_INTEGER8
#define FORTRAN_INTEGER8(name) ompi_op_base_##name##_fortran_integer8
#define FORTRAN_INTEGER8_3BUFF(name) ompi_op_base_3buff_##name##_fortran_integer8
#else
#define FORTRAN_INTEGER8(name) NULL
#define FORTRAN_INTEGER8_3BUFF(name) NULL
#endif
#if OMPI_HAVE_FORTRAN_INTEGER16
#define FORTRAN_INTEGER16(name) ompi_op_base_##name##_fortran_integer16
#define FORTRAN_INTEGER16_3BUFF(name) ompi_op_base_3buff_##name##_fortran_integer16
#else
#define FORTRAN_INTEGER16(name) NULL
#define FORTRAN_INTEGER16_3BUFF(name) NULL
#endif

#define FORTRAN_INTEGER(name) \
  FORTRAN_INTEGER_PLAIN(name), /* OMPI_OP_BASE_TYPE_INTEGER */ \
  FORTRAN_INTEGER1(name), /* OMPI_OP_BASE_TYPE_INTEGER1 */ \
  FORTRAN_INTEGER2(name), /* OMPI_OP_BASE_TYPE_INTEGER2 */ \
  FORTRAN_INTEGER4(name), /* OMPI_OP_BASE_TYPE_INTEGER4 */ \
  FORTRAN_INTEGER8(name), /* OMPI_OP_BASE_TYPE_INTEGER8 */ \
  FORTRAN_INTEGER16(name) /* OMPI_OP_BASE_TYPE_INTEGER16 */

#define FORTRAN_INTEGER_3BUFF(name) \
  FORTRAN_INTEGER_PLAIN_3BUFF(name), /* OMPI_OP_BASE_TYPE_INTEGER */ \
  FORTRAN_INTEGER1_3BUFF(name), /* OMPI_OP_BASE_TYPE_INTEGER1 */ \
  FORTRAN_INTEGER2_3BUFF(name), /* OMPI_OP_BASE_TYPE_INTEGER2 */ \
  FORTRAN_INTEGER4_3BUFF(name), /* OMPI_OP_BASE_TYPE_INTEGER4 */ \
  FORTRAN_INTEGER8_3BUFF(name), /* OMPI_OP_BASE_TYPE_INTEGER8 */ \
  FORTRAN_INTEGER16_3BUFF(name) /* OMPI_OP_BASE_TYPE_INTEGER16 */

#define FORTRAN_INTEGER_NULL \
  NULL, /* OMPI_OP_BASE_TYPE_INTEGER */ \
  NULL, /* OMPI_OP_BASE_TYPE_INTEGER1 */ \
  NULL, /* OMPI_OP_BASE_TYPE_INTEGER2 */ \
  NULL, /* OMPI_OP_BASE_TYPE_INTEGER4 */ \
  NULL, /* OMPI_OP_BASE_TYPE_INTEGER8 */ \
  NULL  /* OMPI_OP_BASE_TYPE_INTEGER16 */

#define FORTRAN_INTEGER_NULL_3BUFF \
  NULL, /* OMPI_OP_BASE_TYPE_INTEGER */ \
  NULL, /* OMPI_OP_BASE_TYPE_INTEGER1 */ \
  NULL, /* OMPI_OP_BASE_TYPE_INTEGER2 */ \
  NULL, /* OMPI_OP_BASE_TYPE_INTEGER4 */ \
  NULL, /* OMPI_OP_BASE_TYPE_INTEGER8 */ \
  NULL  /* OMPI_OP_BASE_TYPE_INTEGER16 */
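
/*
 * Illustrative sketch only (not compiled): a hypothetical per-op table
 * splicing these list macros together in OMPI_OP_BASE_TYPE_* enum order;
 * the real tables live elsewhere in this file, and the function-pointer
 * typedef name below is assumed from the op framework's interface.
 */
#if 0
static ompi_op_base_handler_fn_t example_min_fns[] = {
    C_INTEGER(min),          /* UNSIGNED_CHAR .. UNSIGNED_LONG_LONG slots */
    FORTRAN_INTEGER(min),    /* INTEGER .. INTEGER16 slots */
    /* ... floating point, complex, etc., one class macro per group ... */
};
#endif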

/** All the Fortran reals ***********************************************/

#if OMPI_HAVE_FORTRAN_REAL
#define FLOATING_POINT_FORTRAN_REAL_PLAIN(name) ompi_op_base_##name##_fortran_real
#define FLOATING_POINT_FORTRAN_REAL_PLAIN_3BUFF(name) ompi_op_base_3buff_##name##_fortran_real
#else
#define FLOATING_POINT_FORTRAN_REAL_PLAIN(name) NULL
#define FLOATING_POINT_FORTRAN_REAL_PLAIN_3BUFF(name) NULL
#endif
|
2009-06-01 23:02:34 +04:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL2
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-15 02:44:31 +03:00
|
|
|
#define FLOATING_POINT_FORTRAN_REAL2(name) ompi_op_base_##name##_fortran_real2
|
|
|
|
#define FLOATING_POINT_FORTRAN_REAL2_3BUFF(name) ompi_op_base_3buff_##name##_fortran_real2
|
|
|
|
#else
|
|
|
|
#define FLOATING_POINT_FORTRAN_REAL2(name) NULL
|
|
|
|
#define FLOATING_POINT_FORTRAN_REAL2_3BUFF(name) NULL
|
|
|
|
#endif
|
2009-06-01 23:02:34 +04:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL4
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-15 02:44:31 +03:00
|
|
|
#define FLOATING_POINT_FORTRAN_REAL4(name) ompi_op_base_##name##_fortran_real4
|
|
|
|
#define FLOATING_POINT_FORTRAN_REAL4_3BUFF(name) ompi_op_base_3buff_##name##_fortran_real4
|
|
|
|
#else
|
|
|
|
#define FLOATING_POINT_FORTRAN_REAL4(name) NULL
|
|
|
|
#define FLOATING_POINT_FORTRAN_REAL4_3BUFF(name) NULL
|
|
|
|
#endif
|
2009-06-01 23:02:34 +04:00
|
|
|
#if OMPI_HAVE_FORTRAN_REAL8
#define FLOATING_POINT_FORTRAN_REAL8(name) ompi_op_base_##name##_fortran_real8
#define FLOATING_POINT_FORTRAN_REAL8_3BUFF(name) ompi_op_base_3buff_##name##_fortran_real8
#else
#define FLOATING_POINT_FORTRAN_REAL8(name) NULL
#define FLOATING_POINT_FORTRAN_REAL8_3BUFF(name) NULL
#endif

/* If:
   - we have fortran REAL*16, *and*
   - fortran REAL*16 matches the bit representation of the
     corresponding C type
   Only then do we put in function pointers for REAL*16 reductions.
   Otherwise, just put in NULL. */
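/* (Editorial aside: the bit-representation check matters because the
   back-end reduction functions are C code operating on the raw buffer;
   if Fortran REAL*16 were stored differently from the corresponding C
   type, those functions would silently compute garbage, so NULL is the
   safe choice.) */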
#if OMPI_HAVE_FORTRAN_REAL16 && OMPI_REAL16_MATCHES_C
#define FLOATING_POINT_FORTRAN_REAL16(name) ompi_op_base_##name##_fortran_real16
#define FLOATING_POINT_FORTRAN_REAL16_3BUFF(name) ompi_op_base_3buff_##name##_fortran_real16
#else
#define FLOATING_POINT_FORTRAN_REAL16(name) NULL
#define FLOATING_POINT_FORTRAN_REAL16_3BUFF(name) NULL
#endif

#define FLOATING_POINT_FORTRAN_REAL(name) \
    FLOATING_POINT_FORTRAN_REAL_PLAIN(name),   /* OMPI_OP_BASE_TYPE_REAL */ \
    FLOATING_POINT_FORTRAN_REAL2(name),        /* OMPI_OP_BASE_TYPE_REAL2 */ \
    FLOATING_POINT_FORTRAN_REAL4(name),        /* OMPI_OP_BASE_TYPE_REAL4 */ \
    FLOATING_POINT_FORTRAN_REAL8(name),        /* OMPI_OP_BASE_TYPE_REAL8 */ \
    FLOATING_POINT_FORTRAN_REAL16(name)        /* OMPI_OP_BASE_TYPE_REAL16 */

#define FLOATING_POINT_FORTRAN_REAL_3BUFF(name) \
    FLOATING_POINT_FORTRAN_REAL_PLAIN_3BUFF(name),   /* OMPI_OP_BASE_TYPE_REAL */ \
    FLOATING_POINT_FORTRAN_REAL2_3BUFF(name),        /* OMPI_OP_BASE_TYPE_REAL2 */ \
    FLOATING_POINT_FORTRAN_REAL4_3BUFF(name),        /* OMPI_OP_BASE_TYPE_REAL4 */ \
    FLOATING_POINT_FORTRAN_REAL8_3BUFF(name),        /* OMPI_OP_BASE_TYPE_REAL8 */ \
    FLOATING_POINT_FORTRAN_REAL16_3BUFF(name)        /* OMPI_OP_BASE_TYPE_REAL16 */
/** Fortran double precision ********************************************/
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION
#define FLOATING_POINT_FORTRAN_DOUBLE_PRECISION(name) \
    ompi_op_base_##name##_fortran_double_precision
#define FLOATING_POINT_FORTRAN_DOUBLE_PRECISION_3BUFF(name) \
    ompi_op_base_3buff_##name##_fortran_double_precision
#else
#define FLOATING_POINT_FORTRAN_DOUBLE_PRECISION(name) NULL
#define FLOATING_POINT_FORTRAN_DOUBLE_PRECISION_3BUFF(name) NULL
#endif
/** Floating point, including all the Fortran reals *********************/
#define FLOATING_POINT(name) \
    ompi_op_base_##name##_float,                     /* OMPI_OP_BASE_TYPE_FLOAT */ \
    ompi_op_base_##name##_double,                    /* OMPI_OP_BASE_TYPE_DOUBLE */ \
    FLOATING_POINT_FORTRAN_REAL(name),               /* OMPI_OP_BASE_TYPE_REAL */ \
    FLOATING_POINT_FORTRAN_DOUBLE_PRECISION(name),   /* OMPI_OP_BASE_TYPE_DOUBLE_PRECISION */ \
    ompi_op_base_##name##_long_double                /* OMPI_OP_BASE_TYPE_LONG_DOUBLE */

#define FLOATING_POINT_3BUFF(name) \
    ompi_op_base_3buff_##name##_float,                     /* OMPI_OP_BASE_TYPE_FLOAT */ \
    ompi_op_base_3buff_##name##_double,                    /* OMPI_OP_BASE_TYPE_DOUBLE */ \
    FLOATING_POINT_FORTRAN_REAL_3BUFF(name),               /* OMPI_OP_BASE_TYPE_REAL */ \
    FLOATING_POINT_FORTRAN_DOUBLE_PRECISION_3BUFF(name),   /* OMPI_OP_BASE_TYPE_DOUBLE_PRECISION */ \
    ompi_op_base_3buff_##name##_long_double                /* OMPI_OP_BASE_TYPE_LONG_DOUBLE */

#define FLOATING_POINT_NULL \
    NULL, /* OMPI_OP_BASE_TYPE_FLOAT */ \
    NULL, /* OMPI_OP_BASE_TYPE_DOUBLE */ \
    NULL, /* OMPI_OP_BASE_TYPE_REAL */ \
    NULL, /* OMPI_OP_BASE_TYPE_REAL2 */ \
    NULL, /* OMPI_OP_BASE_TYPE_REAL4 */ \
    NULL, /* OMPI_OP_BASE_TYPE_REAL8 */ \
    NULL, /* OMPI_OP_BASE_TYPE_REAL16 */ \
    NULL, /* OMPI_OP_BASE_TYPE_DOUBLE_PRECISION */ \
    NULL  /* OMPI_OP_BASE_TYPE_LONG_DOUBLE */

#define FLOATING_POINT_NULL_3BUFF \
    NULL, /* OMPI_OP_BASE_TYPE_FLOAT */ \
    NULL, /* OMPI_OP_BASE_TYPE_DOUBLE */ \
    NULL, /* OMPI_OP_BASE_TYPE_REAL */ \
    NULL, /* OMPI_OP_BASE_TYPE_REAL2 */ \
    NULL, /* OMPI_OP_BASE_TYPE_REAL4 */ \
    NULL, /* OMPI_OP_BASE_TYPE_REAL8 */ \
    NULL, /* OMPI_OP_BASE_TYPE_REAL16 */ \
    NULL, /* OMPI_OP_BASE_TYPE_DOUBLE_PRECISION */ \
    NULL  /* OMPI_OP_BASE_TYPE_LONG_DOUBLE */
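
/* Illustrative expansion (editorial, not in the original source): for
   an op name such as "max", FLOATING_POINT(max) pastes together nine
   comma-separated entries -- ompi_op_base_max_float,
   ompi_op_base_max_double, the five Fortran REAL* slots, the
   DOUBLE PRECISION slot, and ompi_op_base_max_long_double -- one
   handler pointer (or NULL) per OMPI_OP_BASE_TYPE_* slot.
   FLOATING_POINT_NULL emits the same nine slots as NULLs, which is how
   every row of the tables below stays exactly OMPI_OP_BASE_TYPE_MAX
   entries wide. */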
/** Fortran logical *****************************************************/
#if OMPI_HAVE_FORTRAN_LOGICAL
#define FORTRAN_LOGICAL(name) \
    ompi_op_base_##name##_fortran_logical  /* OMPI_OP_BASE_TYPE_LOGICAL */
#define FORTRAN_LOGICAL_3BUFF(name) \
    ompi_op_base_3buff_##name##_fortran_logical  /* OMPI_OP_BASE_TYPE_LOGICAL */
#else
#define FORTRAN_LOGICAL(name) NULL
#define FORTRAN_LOGICAL_3BUFF(name) NULL
#endif

#define LOGICAL(name) \
    FORTRAN_LOGICAL(name), \
    ompi_op_base_##name##_bool  /* OMPI_OP_BASE_TYPE_BOOL */

#define LOGICAL_3BUFF(name) \
    FORTRAN_LOGICAL_3BUFF(name), \
    ompi_op_base_3buff_##name##_bool  /* OMPI_OP_BASE_TYPE_BOOL */

#define LOGICAL_NULL \
    NULL, /* OMPI_OP_BASE_TYPE_LOGICAL */ \
    NULL  /* OMPI_OP_BASE_TYPE_BOOL */

#define LOGICAL_NULL_3BUFF \
    NULL, /* OMPI_OP_BASE_TYPE_LOGICAL */ \
    NULL  /* OMPI_OP_BASE_TYPE_BOOL */
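
/* (Editorial note: LOGICAL(name) fills two adjacent slots -- the
   Fortran LOGICAL entry, which degrades to NULL when Fortran support
   is compiled out, and the C/C++ bool entry, which is always
   present.) */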
/** Fortran complex *****************************************************/
#if OMPI_HAVE_FORTRAN_REAL && OMPI_HAVE_FORTRAN_COMPLEX
#define COMPLEX_PLAIN(name) ompi_op_base_##name##_fortran_complex
#define COMPLEX_PLAIN_3BUFF(name) ompi_op_base_3buff_##name##_fortran_complex
#else
#define COMPLEX_PLAIN(name) NULL
#define COMPLEX_PLAIN_3BUFF(name) NULL
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION && OMPI_HAVE_FORTRAN_COMPLEX
#define COMPLEX_DOUBLE(name) ompi_op_base_##name##_fortran_double_complex
#define COMPLEX_DOUBLE_3BUFF(name) ompi_op_base_3buff_##name##_fortran_double_complex
#else
#define COMPLEX_DOUBLE(name) NULL
#define COMPLEX_DOUBLE_3BUFF(name) NULL
#endif
#if OMPI_HAVE_FORTRAN_REAL4 && OMPI_HAVE_FORTRAN_COMPLEX8
#define COMPLEX8(name) ompi_op_base_##name##_fortran_complex8
#define COMPLEX8_3BUFF(name) ompi_op_base_3buff_##name##_fortran_complex8
#else
#define COMPLEX8(name) NULL
#define COMPLEX8_3BUFF(name) NULL
#endif
#if OMPI_HAVE_FORTRAN_REAL8 && OMPI_HAVE_FORTRAN_COMPLEX16
#define COMPLEX16(name) ompi_op_base_##name##_fortran_complex16
#define COMPLEX16_3BUFF(name) ompi_op_base_3buff_##name##_fortran_complex16
#else
#define COMPLEX16(name) NULL
#define COMPLEX16_3BUFF(name) NULL
#endif

/* If:
   - we have fortran REAL*16, *and*
   - fortran REAL*16 matches the bit representation of the
     corresponding C type, *and*
   - we have fortran COMPLEX*32
   Only then do we put in function pointers for COMPLEX*32 reductions.
   Otherwise, just put in NULL. */
#if OMPI_HAVE_FORTRAN_REAL16 && OMPI_REAL16_MATCHES_C && OMPI_HAVE_FORTRAN_COMPLEX32
#define COMPLEX32(name) ompi_op_base_##name##_fortran_complex32
#define COMPLEX32_3BUFF(name) ompi_op_base_3buff_##name##_fortran_complex32
#else
#define COMPLEX32(name) NULL
#define COMPLEX32_3BUFF(name) NULL
#endif
#define COMPLEX(name) \
    COMPLEX_PLAIN(name),    /* OMPI_OP_BASE_TYPE_COMPLEX */ \
    COMPLEX_DOUBLE(name),   /* OMPI_OP_BASE_TYPE_DOUBLE_COMPLEX */ \
    COMPLEX8(name),         /* OMPI_OP_BASE_TYPE_COMPLEX8 */ \
    COMPLEX16(name),        /* OMPI_OP_BASE_TYPE_COMPLEX16 */ \
    COMPLEX32(name)         /* OMPI_OP_BASE_TYPE_COMPLEX32 */

#define COMPLEX_3BUFF(name) \
    COMPLEX_PLAIN_3BUFF(name),    /* OMPI_OP_BASE_TYPE_COMPLEX */ \
    COMPLEX_DOUBLE_3BUFF(name),   /* OMPI_OP_BASE_TYPE_DOUBLE_COMPLEX */ \
    COMPLEX8_3BUFF(name),         /* OMPI_OP_BASE_TYPE_COMPLEX8 */ \
    COMPLEX16_3BUFF(name),        /* OMPI_OP_BASE_TYPE_COMPLEX16 */ \
    COMPLEX32_3BUFF(name)         /* OMPI_OP_BASE_TYPE_COMPLEX32 */

#define COMPLEX_NULL \
    NULL, /* OMPI_OP_BASE_TYPE_COMPLEX */ \
    NULL, /* OMPI_OP_BASE_TYPE_DOUBLE_COMPLEX */ \
    NULL, /* OMPI_OP_BASE_TYPE_COMPLEX8 */ \
    NULL, /* OMPI_OP_BASE_TYPE_COMPLEX16 */ \
    NULL  /* OMPI_OP_BASE_TYPE_COMPLEX32 */

#define COMPLEX_NULL_3BUFF \
    NULL, /* OMPI_OP_BASE_TYPE_COMPLEX */ \
    NULL, /* OMPI_OP_BASE_TYPE_DOUBLE_COMPLEX */ \
    NULL, /* OMPI_OP_BASE_TYPE_COMPLEX8 */ \
    NULL, /* OMPI_OP_BASE_TYPE_COMPLEX16 */ \
    NULL  /* OMPI_OP_BASE_TYPE_COMPLEX32 */
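
/* (Editorial note: in the dispatch tables at the bottom of this file,
   the complex slots are populated only for MPI_SUM and MPI_PROD, via
   COMPLEX(sum) and COMPLEX(prod); every other predefined op uses
   COMPLEX_NULL, since MPI gives complex operands no ordering or
   bitwise semantics.) */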
/** Byte ****************************************************************/
#define BYTE(name) \
    ompi_op_base_##name##_byte  /* OMPI_OP_BASE_TYPE_BYTE */
#define BYTE_3BUFF(name) \
    ompi_op_base_3buff_##name##_byte  /* OMPI_OP_BASE_TYPE_BYTE */

#define BYTE_NULL \
    NULL  /* OMPI_OP_BASE_TYPE_BYTE */

#define BYTE_NULL_3BUFF \
    NULL  /* OMPI_OP_BASE_TYPE_BYTE */
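
/* (Editorial note: the byte slot is populated only for the bitwise ops
   below -- BYTE(band), BYTE(bor), BYTE(bxor) -- since a raw byte has a
   well-defined bit pattern but no arithmetic or ordering meaning for
   the other predefined reductions.) */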
/** Fortran "2" types ***************************************************/
#if OMPI_HAVE_FORTRAN_REAL
#define TWOLOC_FORTRAN_2REAL(name) ompi_op_base_##name##_2real
#define TWOLOC_FORTRAN_2REAL_3BUFF(name) ompi_op_base_3buff_##name##_2real
#else
#define TWOLOC_FORTRAN_2REAL(name) NULL
#define TWOLOC_FORTRAN_2REAL_3BUFF(name) NULL
#endif
#if OMPI_HAVE_FORTRAN_DOUBLE_PRECISION
#define TWOLOC_FORTRAN_2DOUBLE_PRECISION(name) ompi_op_base_##name##_2double_precision
#define TWOLOC_FORTRAN_2DOUBLE_PRECISION_3BUFF(name) ompi_op_base_3buff_##name##_2double_precision
#else
#define TWOLOC_FORTRAN_2DOUBLE_PRECISION(name) NULL
#define TWOLOC_FORTRAN_2DOUBLE_PRECISION_3BUFF(name) NULL
#endif
#if OMPI_HAVE_FORTRAN_INTEGER
#define TWOLOC_FORTRAN_2INTEGER(name) ompi_op_base_##name##_2integer
#define TWOLOC_FORTRAN_2INTEGER_3BUFF(name) ompi_op_base_3buff_##name##_2integer
#else
#define TWOLOC_FORTRAN_2INTEGER(name) NULL
#define TWOLOC_FORTRAN_2INTEGER_3BUFF(name) NULL
#endif
/** All "2" types *******************************************************/
#define TWOLOC(name) \
    TWOLOC_FORTRAN_2REAL(name),               /* OMPI_OP_BASE_TYPE_2REAL */ \
    TWOLOC_FORTRAN_2DOUBLE_PRECISION(name),   /* OMPI_OP_BASE_TYPE_2DOUBLE_PRECISION */ \
    TWOLOC_FORTRAN_2INTEGER(name),            /* OMPI_OP_BASE_TYPE_2INTEGER */ \
    ompi_op_base_##name##_float_int,          /* OMPI_OP_BASE_TYPE_FLOAT_INT */ \
    ompi_op_base_##name##_double_int,         /* OMPI_OP_BASE_TYPE_DOUBLE_INT */ \
    ompi_op_base_##name##_long_int,           /* OMPI_OP_BASE_TYPE_LONG_INT */ \
    ompi_op_base_##name##_2int,               /* OMPI_OP_BASE_TYPE_2INT */ \
    ompi_op_base_##name##_short_int,          /* OMPI_OP_BASE_TYPE_SHORT_INT */ \
    ompi_op_base_##name##_long_double_int     /* OMPI_OP_BASE_TYPE_LONG_DOUBLE_INT */

#define TWOLOC_3BUFF(name) \
    TWOLOC_FORTRAN_2REAL_3BUFF(name),               /* OMPI_OP_BASE_TYPE_2REAL */ \
    TWOLOC_FORTRAN_2DOUBLE_PRECISION_3BUFF(name),   /* OMPI_OP_BASE_TYPE_2DOUBLE_PRECISION */ \
    TWOLOC_FORTRAN_2INTEGER_3BUFF(name),            /* OMPI_OP_BASE_TYPE_2INTEGER */ \
    ompi_op_base_3buff_##name##_float_int,          /* OMPI_OP_BASE_TYPE_FLOAT_INT */ \
    ompi_op_base_3buff_##name##_double_int,         /* OMPI_OP_BASE_TYPE_DOUBLE_INT */ \
    ompi_op_base_3buff_##name##_long_int,           /* OMPI_OP_BASE_TYPE_LONG_INT */ \
    ompi_op_base_3buff_##name##_2int,               /* OMPI_OP_BASE_TYPE_2INT */ \
    ompi_op_base_3buff_##name##_short_int,          /* OMPI_OP_BASE_TYPE_SHORT_INT */ \
    ompi_op_base_3buff_##name##_long_double_int     /* OMPI_OP_BASE_TYPE_LONG_DOUBLE_INT */

#define TWOLOC_NULL \
    NULL, /* OMPI_OP_BASE_TYPE_2REAL */ \
    NULL, /* OMPI_OP_BASE_TYPE_2DOUBLE_PRECISION */ \
    NULL, /* OMPI_OP_BASE_TYPE_2INTEGER */ \
    NULL, /* OMPI_OP_BASE_TYPE_FLOAT_INT */ \
    NULL, /* OMPI_OP_BASE_TYPE_DOUBLE_INT */ \
    NULL, /* OMPI_OP_BASE_TYPE_LONG_INT */ \
    NULL, /* OMPI_OP_BASE_TYPE_2INT */ \
    NULL, /* OMPI_OP_BASE_TYPE_SHORT_INT */ \
    NULL  /* OMPI_OP_BASE_TYPE_LONG_DOUBLE_INT */

#define TWOLOC_NULL_3BUFF \
    NULL, /* OMPI_OP_BASE_TYPE_2REAL */ \
    NULL, /* OMPI_OP_BASE_TYPE_2DOUBLE_PRECISION */ \
    NULL, /* OMPI_OP_BASE_TYPE_2INTEGER */ \
    NULL, /* OMPI_OP_BASE_TYPE_FLOAT_INT */ \
    NULL, /* OMPI_OP_BASE_TYPE_DOUBLE_INT */ \
    NULL, /* OMPI_OP_BASE_TYPE_LONG_INT */ \
    NULL, /* OMPI_OP_BASE_TYPE_2INT */ \
    NULL, /* OMPI_OP_BASE_TYPE_SHORT_INT */ \
    NULL  /* OMPI_OP_BASE_TYPE_LONG_DOUBLE_INT */
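
/* Illustrative note (editorial): the TWOLOC group backs the pair
   datatypes used by MPI_MAXLOC and MPI_MINLOC.  Conceptually each
   element is a (value, index) pair; for OMPI_OP_BASE_TYPE_FLOAT_INT,
   think of something like

       struct float_int_pair { float value; int index; };

   where maxloc keeps the pair with the larger value, breaking ties in
   favor of the smaller index.  (The struct name above is made up for
   illustration.) */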
/*
 * MPI_OP_NULL
 * All types
 */
#define FLAGS_NO_FLOAT \
    (OMPI_OP_FLAGS_INTRINSIC | OMPI_OP_FLAGS_ASSOC | OMPI_OP_FLAGS_COMMUTE)
#define FLAGS \
    (OMPI_OP_FLAGS_INTRINSIC | OMPI_OP_FLAGS_ASSOC | \
     OMPI_OP_FLAGS_FLOAT_ASSOC | OMPI_OP_FLAGS_COMMUTE)
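
/* (Editorial note: the only difference between the two flag sets is
   OMPI_OP_FLAGS_FLOAT_ASSOC; FLAGS_NO_FLOAT is the set for ops whose
   handlers never touch floating point, where float associativity is
   moot.  Both sets mark the op as intrinsic, associative, and
   commutative.) */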
ompi_op_base_handler_fn_t ompi_op_base_functions[OMPI_OP_BASE_FORTRAN_OP_MAX][OMPI_OP_BASE_TYPE_MAX] =
{
    /* Corresponds to MPI_OP_NULL */
    {
        /* Leaving this empty puts in NULL for all entries */
        NULL,
    },
    /* Corresponds to MPI_MAX */
    {
        C_INTEGER(max),
        FORTRAN_INTEGER(max),
        FLOATING_POINT(max),
        LOGICAL_NULL,
        COMPLEX_NULL,
        BYTE_NULL,
        TWOLOC_NULL
    },
    /* Corresponds to MPI_MIN */
    {
        C_INTEGER(min),
        FORTRAN_INTEGER(min),
        FLOATING_POINT(min),
        LOGICAL_NULL,
        COMPLEX_NULL,
        BYTE_NULL,
        TWOLOC_NULL
    },
    /* Corresponds to MPI_SUM */
    {
        C_INTEGER(sum),
        FORTRAN_INTEGER(sum),
        FLOATING_POINT(sum),
        LOGICAL_NULL,
        COMPLEX(sum),
        BYTE_NULL,
        TWOLOC_NULL
    },
    /* Corresponds to MPI_PROD */
    {
        C_INTEGER(prod),
        FORTRAN_INTEGER(prod),
        FLOATING_POINT(prod),
        LOGICAL_NULL,
        COMPLEX(prod),
        BYTE_NULL,
        TWOLOC_NULL
    },
    /* Corresponds to MPI_LAND */
    {
        C_INTEGER(land),
        FORTRAN_INTEGER_NULL,
        FLOATING_POINT_NULL,
        LOGICAL(land),
        COMPLEX_NULL,
        BYTE_NULL,
        TWOLOC_NULL
    },
    /* Corresponds to MPI_BAND */
    {
        C_INTEGER(band),
        FORTRAN_INTEGER(band),
        FLOATING_POINT_NULL,
        LOGICAL_NULL,
        COMPLEX_NULL,
        BYTE(band),
        TWOLOC_NULL
    },
    /* Corresponds to MPI_LOR */
    {
        C_INTEGER(lor),
        FORTRAN_INTEGER_NULL,
        FLOATING_POINT_NULL,
        LOGICAL(lor),
        COMPLEX_NULL,
        BYTE_NULL,
        TWOLOC_NULL
    },
    /* Corresponds to MPI_BOR */
    {
        C_INTEGER(bor),
        FORTRAN_INTEGER(bor),
        FLOATING_POINT_NULL,
        LOGICAL_NULL,
        COMPLEX_NULL,
        BYTE(bor),
        TWOLOC_NULL
    },
    /* Corresponds to MPI_LXOR */
    {
        C_INTEGER(lxor),
        FORTRAN_INTEGER_NULL,
        FLOATING_POINT_NULL,
        LOGICAL(lxor),
        COMPLEX_NULL,
        BYTE_NULL,
        TWOLOC_NULL
    },
    /* Corresponds to MPI_BXOR */
    {
        C_INTEGER(bxor),
        FORTRAN_INTEGER(bxor),
        FLOATING_POINT_NULL,
        LOGICAL_NULL,
        COMPLEX_NULL,
        BYTE(bxor),
        TWOLOC_NULL
    },
    /* Corresponds to MPI_MAXLOC */
    {
        C_INTEGER_NULL,
        FORTRAN_INTEGER_NULL,
        FLOATING_POINT_NULL,
        LOGICAL_NULL,
        COMPLEX_NULL,
        BYTE_NULL,
        TWOLOC(maxloc),
    },
    /* Corresponds to MPI_MINLOC */
    {
        C_INTEGER_NULL,
        FORTRAN_INTEGER_NULL,
        FLOATING_POINT_NULL,
        LOGICAL_NULL,
        COMPLEX_NULL,
        BYTE_NULL,
        TWOLOC(minloc),
    },
    /* Corresponds to MPI_REPLACE */
    {
        /* (MPI_ACCUMULATE is handled differently than the other
           reductions, so just zero out its function
           implementations here to ensure that users don't invoke
           MPI_REPLACE with any reduction operations other than
           ACCUMULATE) */
        NULL,
    },
};
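
/* Illustrative lookup sketch (editorial; not part of the original
   file, and the op-row constant below is an assumed name):

       ompi_op_base_handler_fn_t fn =
           ompi_op_base_functions[op_row][OMPI_OP_BASE_TYPE_DOUBLE];
       if (NULL != fn) {
           fn(in_buf, inout_buf, &count, &dtype, module);
       }

   where op_row would select, say, the MPI_SUM row, and the argument
   list follows the ompi_op_base_handler_fn_t typedef.  A NULL entry
   doubles as the "this (op, datatype) combination is invalid or
   unsupported by the base implementation" marker, so callers must
   check before invoking. */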
ompi_op_base_3buff_handler_fn_t ompi_op_base_3buff_functions[OMPI_OP_BASE_FORTRAN_OP_MAX][OMPI_OP_BASE_TYPE_MAX] =
{
    /* Corresponds to MPI_OP_NULL */
    {
        /* Leaving this empty puts in NULL for all entries */
        NULL,
    },
    /* Corresponds to MPI_MAX */
    {
        C_INTEGER_3BUFF(max),
        FORTRAN_INTEGER_3BUFF(max),
        FLOATING_POINT_3BUFF(max),
        LOGICAL_NULL_3BUFF,
        COMPLEX_NULL_3BUFF,
        BYTE_NULL_3BUFF,
        TWOLOC_NULL_3BUFF
    },
    /* Corresponds to MPI_MIN */
    {
        C_INTEGER_3BUFF(min),
        FORTRAN_INTEGER_3BUFF(min),
        FLOATING_POINT_3BUFF(min),
        LOGICAL_NULL_3BUFF,
        COMPLEX_NULL_3BUFF,
        BYTE_NULL_3BUFF,
        TWOLOC_NULL_3BUFF
    },
    /* Corresponds to MPI_SUM */
    {
        C_INTEGER_3BUFF(sum),
        FORTRAN_INTEGER_3BUFF(sum),
        FLOATING_POINT_3BUFF(sum),
        LOGICAL_NULL_3BUFF,
        COMPLEX_3BUFF(sum),
        BYTE_NULL_3BUFF,
        TWOLOC_NULL_3BUFF
    },
    /* Corresponds to MPI_PROD */
    {
        C_INTEGER_3BUFF(prod),
        FORTRAN_INTEGER_3BUFF(prod),
        FLOATING_POINT_3BUFF(prod),
        LOGICAL_NULL_3BUFF,
        COMPLEX_3BUFF(prod),
        BYTE_NULL_3BUFF,
        TWOLOC_NULL_3BUFF
    },
    /* Corresponds to MPI_LAND */
    {
        C_INTEGER_3BUFF(land),
        FORTRAN_INTEGER_NULL_3BUFF,
        FLOATING_POINT_NULL_3BUFF,
        LOGICAL_3BUFF(land),
        COMPLEX_NULL_3BUFF,
        BYTE_NULL_3BUFF,
        TWOLOC_NULL_3BUFF
    },
    /* Corresponds to MPI_BAND */
    {
        C_INTEGER_3BUFF(band),
        FORTRAN_INTEGER_3BUFF(band),
        FLOATING_POINT_NULL_3BUFF,
        LOGICAL_NULL_3BUFF,
        COMPLEX_NULL_3BUFF,
        BYTE_3BUFF(band),
        TWOLOC_NULL_3BUFF
    },
    /* Corresponds to MPI_LOR */
    {
        C_INTEGER_3BUFF(lor),
        FORTRAN_INTEGER_NULL_3BUFF,
        FLOATING_POINT_NULL_3BUFF,
        LOGICAL_3BUFF(lor),
        COMPLEX_NULL_3BUFF,
        BYTE_NULL_3BUFF,
        TWOLOC_NULL_3BUFF
    },
    /* Corresponds to MPI_BOR */
    {
        C_INTEGER_3BUFF(bor),
        FORTRAN_INTEGER_3BUFF(bor),
        FLOATING_POINT_NULL_3BUFF,
        LOGICAL_NULL_3BUFF,
        COMPLEX_NULL_3BUFF,
        BYTE_3BUFF(bor),
        TWOLOC_NULL_3BUFF
    },
    /* Corresponds to MPI_LXOR */
    {
        C_INTEGER_3BUFF(lxor),
        FORTRAN_INTEGER_NULL_3BUFF,
        FLOATING_POINT_NULL_3BUFF,
        LOGICAL_3BUFF(lxor),
        COMPLEX_NULL_3BUFF,
        BYTE_NULL_3BUFF,
        TWOLOC_NULL_3BUFF
    },
    /* Corresponds to MPI_BXOR */
    {
        C_INTEGER_3BUFF(bxor),
        FORTRAN_INTEGER_3BUFF(bxor),
        FLOATING_POINT_NULL_3BUFF,
        LOGICAL_NULL_3BUFF,
        COMPLEX_NULL_3BUFF,
        BYTE_3BUFF(bxor),
        TWOLOC_NULL_3BUFF
    },
    /* Corresponds to MPI_MAXLOC */
    {
        C_INTEGER_NULL_3BUFF,
        FORTRAN_INTEGER_NULL_3BUFF,
        FLOATING_POINT_NULL_3BUFF,
        LOGICAL_NULL_3BUFF,
        COMPLEX_NULL_3BUFF,
        BYTE_NULL_3BUFF,
        TWOLOC_3BUFF(maxloc),
    },
    /* Corresponds to MPI_MINLOC */
    {
        C_INTEGER_NULL_3BUFF,
        FORTRAN_INTEGER_NULL_3BUFF,
        FLOATING_POINT_NULL_3BUFF,
        LOGICAL_NULL_3BUFF,
        COMPLEX_NULL_3BUFF,
        BYTE_NULL_3BUFF,
        TWOLOC_3BUFF(minloc),
    },
    /* Corresponds to MPI_REPLACE */
    {
        /* MPI_ACCUMULATE is handled differently than the other
           reductions, so just zero out its function
           implementations here to ensure that users don't invoke
           MPI_REPLACE with any reduction operations other than
           ACCUMULATE */
        NULL,
    },
};
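
/* (Editorial note, hedged: unlike the two-buffer handlers above, which
   accumulate into their second buffer in place, the 3buff variants
   take two source buffers and a separate destination -- conceptually
   "target = source1 op source2" -- so neither input is overwritten.
   See the ompi_op_base_3buff_handler_fn_t typedef for the exact
   parameter list.) */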