/* -*- Mode: C; c-basic-offset:4 ; -*- */
/*
 * Copyright (c) 2004-2006 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation.  All rights reserved.
 * Copyright (c) 2004-2007 The University of Tennessee and The University
 *                         of Tennessee Research Foundation.  All rights
 *                         reserved.
 * Copyright (c) 2004-2007 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart.  All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * Copyright (c) 2007-2009 Cisco Systems, Inc.  All rights reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

/*
 * From the commit log (SVN r20280) that introduced this file's framework:
 *
 * Two major things in this commit:
 *   * New "op" MPI layer framework
 *   * Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
 *
 * = Op framework =
 *
 * Add a new "op" framework in the ompi layer.  This framework replaces
 * the hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype)
 * tuples for pre-defined MPI_Ops, allowing components and modules to
 * provide the back-end functions.  The intent is that components can be
 * written to take advantage of hardware acceleration (GPU, FPGA,
 * specialized CPU instructions, etc.).  As with other frameworks,
 * components discover at run time whether they can be used, and either
 * elect themselves for selection or disqualify themselves if they cannot
 * run.  If specialized hardware is not available, a default set of
 * functions is used automatically.
 *
 * This framework is *not* used for user-defined MPI_Ops.
 *
 * The new op framework is similar to the existing coll framework: the
 * final set of function pointers used for any given intrinsic MPI_Op can
 * be a mixed bag, potentially coming from multiple different op modules.
 * This allows for hardware that supports only some of the operations
 * (e.g., a GPU that supports only single-precision operations).
 *
 * All the hard-coded back-end MPI_Op functions for (MPI_Op, MPI_Datatype)
 * tuples still exist, but unlike coll, they live in the framework base
 * (vs. a separate "basic" component) and are used automatically if no
 * component found at run time provides a module with the necessary
 * function pointers.
 *
 * There is an "example" op component intended to be useful to those
 * writing real op components.  It is currently .ompi_ignore'd so that it
 * does not impinge on other developers (it is somewhat chatty via
 * opal_output() so you can tell when its functions are invoked).  See the
 * README file in the example op component directory.  Developers of new
 * op components are encouraged to read:
 *
 *   https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
 *   https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
 *   https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
 *
 * = MPI_REDUCE_LOCAL =
 *
 * Part of the MPI-2.2 proposal at
 * https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24 is a new
 * function named MPI_REDUCE_LOCAL.  It is very easy to implement, so it
 * was added (it also makes testing the op framework easy -- it can be
 * done in serial rather than via parallel reductions).  There is even a
 * man page.
 */

#include "ompi_config.h"

#include "ompi/constants.h"
#include "ompi/op/op.h"
#include "ompi/mca/op/base/base.h"
#include "opal/class/opal_pointer_array.h"
#include "ompi/datatype/datatype_internal.h"

/*
 * Table for Fortran <-> C op handle conversion
 */
opal_pointer_array_t *ompi_op_f_to_c_table;

/*
 * Create intrinsic op
 */
static int add_intrinsic(ompi_op_t *op, int fort_handle, int flags,
                         const char *name);

/*
 * Class information
 */
static void ompi_op_construct(ompi_op_t *eh);
static void ompi_op_destruct(ompi_op_t *eh);

/*
 * Class instance
 */
OBJ_CLASS_INSTANCE(ompi_op_t, opal_object_t,
                   ompi_op_construct, ompi_op_destruct);

/*
 * Intrinsic MPI_Op objects
 */
ompi_op_t ompi_mpi_op_null;
ompi_op_t ompi_mpi_op_max;
ompi_op_t ompi_mpi_op_min;
ompi_op_t ompi_mpi_op_sum;
ompi_op_t ompi_mpi_op_prod;
ompi_op_t ompi_mpi_op_land;
ompi_op_t ompi_mpi_op_band;
ompi_op_t ompi_mpi_op_lor;
ompi_op_t ompi_mpi_op_bor;
ompi_op_t ompi_mpi_op_lxor;
ompi_op_t ompi_mpi_op_bxor;
ompi_op_t ompi_mpi_op_maxloc;
ompi_op_t ompi_mpi_op_minloc;
ompi_op_t ompi_mpi_op_replace;

/*
 * Map from ddt->id to position in op function pointer array
 */
int ompi_op_ddt_map[DT_MAX_PREDEFINED];

#define FLAGS_NO_FLOAT \
    (OMPI_OP_FLAGS_INTRINSIC | OMPI_OP_FLAGS_ASSOC | OMPI_OP_FLAGS_COMMUTE)
#define FLAGS \
    (OMPI_OP_FLAGS_INTRINSIC | OMPI_OP_FLAGS_ASSOC | \
     OMPI_OP_FLAGS_FLOAT_ASSOC | OMPI_OP_FLAGS_COMMUTE)

/*
 * Initialize OMPI op infrastructure
 */
int ompi_op_init(void)
{
    int i;

    /* initialize ompi_op_f_to_c_table */
    ompi_op_f_to_c_table = OBJ_NEW(opal_pointer_array_t);
    if (NULL == ompi_op_f_to_c_table) {
        return OMPI_ERROR;
    }

    /* Fill in the ddt.id->op_position map */
    for (i = 0; i < DT_MAX_PREDEFINED; ++i) {
        ompi_op_ddt_map[i] = -1;
    }

    ompi_op_ddt_map[DT_UNSIGNED_CHAR] = OMPI_OP_BASE_TYPE_UNSIGNED_CHAR;
    ompi_op_ddt_map[DT_SIGNED_CHAR] = OMPI_OP_BASE_TYPE_SIGNED_CHAR;
    ompi_op_ddt_map[DT_BYTE] = OMPI_OP_BASE_TYPE_BYTE;
    ompi_op_ddt_map[DT_SHORT] = OMPI_OP_BASE_TYPE_SHORT;
    ompi_op_ddt_map[DT_UNSIGNED_SHORT] = OMPI_OP_BASE_TYPE_UNSIGNED_SHORT;
    ompi_op_ddt_map[DT_INT] = OMPI_OP_BASE_TYPE_INT;
    ompi_op_ddt_map[DT_UNSIGNED_INT] = OMPI_OP_BASE_TYPE_UNSIGNED;
    ompi_op_ddt_map[DT_LONG] = OMPI_OP_BASE_TYPE_LONG;
    ompi_op_ddt_map[DT_UNSIGNED_LONG] = OMPI_OP_BASE_TYPE_UNSIGNED_LONG;
    ompi_op_ddt_map[DT_LONG_LONG_INT] = OMPI_OP_BASE_TYPE_LONG_LONG_INT;
    ompi_op_ddt_map[DT_UNSIGNED_LONG_LONG] = OMPI_OP_BASE_TYPE_UNSIGNED_LONG_LONG;
    ompi_op_ddt_map[DT_FLOAT] = OMPI_OP_BASE_TYPE_FLOAT;
    ompi_op_ddt_map[DT_DOUBLE] = OMPI_OP_BASE_TYPE_DOUBLE;
    ompi_op_ddt_map[DT_LONG_DOUBLE] = OMPI_OP_BASE_TYPE_LONG_DOUBLE;
    ompi_op_ddt_map[DT_COMPLEX_FLOAT] = OMPI_OP_BASE_TYPE_COMPLEX;
    ompi_op_ddt_map[DT_COMPLEX_DOUBLE] = OMPI_OP_BASE_TYPE_DOUBLE_COMPLEX;
    ompi_op_ddt_map[DT_LOGIC] = OMPI_OP_BASE_TYPE_LOGICAL;
    ompi_op_ddt_map[DT_CXX_BOOL] = OMPI_OP_BASE_TYPE_BOOL;
    ompi_op_ddt_map[DT_FLOAT_INT] = OMPI_OP_BASE_TYPE_FLOAT_INT;
    ompi_op_ddt_map[DT_DOUBLE_INT] = OMPI_OP_BASE_TYPE_DOUBLE_INT;
    ompi_op_ddt_map[DT_LONG_INT] = OMPI_OP_BASE_TYPE_LONG_INT;
    ompi_op_ddt_map[DT_2INT] = OMPI_OP_BASE_TYPE_2INT;
    ompi_op_ddt_map[DT_SHORT_INT] = OMPI_OP_BASE_TYPE_SHORT_INT;
    ompi_op_ddt_map[DT_INTEGER] = OMPI_OP_BASE_TYPE_INTEGER;
    ompi_op_ddt_map[DT_REAL] = OMPI_OP_BASE_TYPE_REAL;
    ompi_op_ddt_map[DT_DBLPREC] = OMPI_OP_BASE_TYPE_DOUBLE_PRECISION;
    ompi_op_ddt_map[DT_2REAL] = OMPI_OP_BASE_TYPE_2REAL;
    ompi_op_ddt_map[DT_2DBLPREC] = OMPI_OP_BASE_TYPE_2DOUBLE_PRECISION;
    ompi_op_ddt_map[DT_2INTEGER] = OMPI_OP_BASE_TYPE_2INTEGER;
    ompi_op_ddt_map[DT_LONG_DOUBLE_INT] = OMPI_OP_BASE_TYPE_LONG_DOUBLE_INT;
    ompi_op_ddt_map[DT_WCHAR] = OMPI_OP_BASE_TYPE_WCHAR;
|
2007-06-19 03:03:56 +04:00
|
|
|
|
|
|
|
/* Create the intrinsic ops */
|
|
|
|
|
Two major things in this commit:
* New "op" MPI layer framework
* Addition of the MPI_REDUCE_LOCAL proposed function (for MPI-2.2)
= Op framework =
Add new "op" framework in the ompi layer. This framework replaces the
hard-coded MPI_Op back-end functions for (MPI_Op, MPI_Datatype) tuples
for pre-defined MPI_Ops, allowing components and modules to provide
the back-end functions. The intent is that components can be written
to take advantage of hardware acceleration (GPU, FPGA, specialized CPU
instructions, etc.). Similar to other frameworks, components are
intended to be able to discover at run-time if they can be used, and
if so, elect themselves to be selected (or disqualify themselves from
selection if they cannot run). If specialized hardware is not
available, there is a default set of functions that will automatically
be used.
This framework is ''not'' used for user-defined MPI_Ops.
The new op framework is similar to the existing coll framework, in
that the final set of function pointers that are used on any given
intrinsic MPI_Op can be a mixed bag of function pointers, potentially
coming from multiple different op modules. This allows for hardware
that only supports some of the operations, not all of them (e.g., a
GPU that only supports single-precision operations).
All the hard-coded back-end MPI_Op functions for (MPI_Op,
MPI_Datatype) tuples still exist, but unlike coll, they're in the
framework base (vs. being in a separate "basic" component) and are
automatically used if no component is found at runtime that provides a
module with the necessary function pointers.
There is an "example" op component that will hopefully be useful to
those writing meaningful op components. It is currently
.ompi_ignore'd so that it doesn't impinge on other developers (it's
somewhat chatty in terms of opal_output() so that you can tell when
its functions have been invoked). See the README file in the example
op component directory. Developers of new op components are
encouraged to look at the following wiki pages:
https://svn.open-mpi.org/trac/ompi/wiki/devel/Autogen
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateFramework
= MPI_REDUCE_LOCAL =
Part of the MPI-2.2 proposal listed here:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/24
is to add a new function named MPI_REDUCE_LOCAL. It is very easy to
implement, so I added it (also because it makes testing the op
framework pretty easy -- you can do it in serial rather than via
parallel reductions). There's even a man page!
This commit was SVN r20280.
2009-01-15 02:44:31 +03:00
|
|
|
    if (OMPI_SUCCESS !=
        add_intrinsic(&ompi_mpi_op_null, OMPI_OP_BASE_FORTRAN_NULL,
                      FLAGS, "MPI_OP_NULL") ||
        OMPI_SUCCESS !=
        add_intrinsic(&ompi_mpi_op_max, OMPI_OP_BASE_FORTRAN_MAX,
                      FLAGS, "MPI_OP_MAX") ||
        OMPI_SUCCESS !=
        add_intrinsic(&ompi_mpi_op_min, OMPI_OP_BASE_FORTRAN_MIN,
                      FLAGS, "MPI_OP_MIN") ||
        OMPI_SUCCESS !=
        add_intrinsic(&ompi_mpi_op_sum, OMPI_OP_BASE_FORTRAN_SUM,
                      FLAGS_NO_FLOAT, "MPI_OP_SUM") ||
        OMPI_SUCCESS !=
        add_intrinsic(&ompi_mpi_op_prod, OMPI_OP_BASE_FORTRAN_PROD,
                      FLAGS_NO_FLOAT, "MPI_OP_PROD") ||
        OMPI_SUCCESS !=
        add_intrinsic(&ompi_mpi_op_land, OMPI_OP_BASE_FORTRAN_LAND,
                      FLAGS, "MPI_OP_LAND") ||
        OMPI_SUCCESS !=
        add_intrinsic(&ompi_mpi_op_band, OMPI_OP_BASE_FORTRAN_BAND,
                      FLAGS, "MPI_OP_BAND") ||
        OMPI_SUCCESS !=
        add_intrinsic(&ompi_mpi_op_lor, OMPI_OP_BASE_FORTRAN_LOR,
                      FLAGS, "MPI_OP_LOR") ||
        OMPI_SUCCESS !=
        add_intrinsic(&ompi_mpi_op_bor, OMPI_OP_BASE_FORTRAN_BOR,
                      FLAGS, "MPI_OP_BOR") ||
        OMPI_SUCCESS !=
        add_intrinsic(&ompi_mpi_op_lxor, OMPI_OP_BASE_FORTRAN_LXOR,
                      FLAGS, "MPI_OP_LXOR") ||
        OMPI_SUCCESS !=
        add_intrinsic(&ompi_mpi_op_bxor, OMPI_OP_BASE_FORTRAN_BXOR,
                      FLAGS, "MPI_OP_BXOR") ||
        OMPI_SUCCESS !=
        add_intrinsic(&ompi_mpi_op_maxloc, OMPI_OP_BASE_FORTRAN_MAXLOC,
                      FLAGS, "MPI_OP_MAXLOC") ||
        OMPI_SUCCESS !=
        add_intrinsic(&ompi_mpi_op_minloc, OMPI_OP_BASE_FORTRAN_MINLOC,
                      FLAGS, "MPI_OP_MINLOC") ||
        OMPI_SUCCESS !=
        add_intrinsic(&ompi_mpi_op_replace, OMPI_OP_BASE_FORTRAN_REPLACE,
                      FLAGS, "MPI_OP_REPLACE")) {
        return OMPI_ERROR;
    }

    /* All done */
    return OMPI_SUCCESS;
}


/*
 * Clean up the op resources
 */
int ompi_op_finalize(void)
{
    /* Clean up the intrinsic ops */
    OBJ_DESTRUCT(&ompi_mpi_op_minloc);
    OBJ_DESTRUCT(&ompi_mpi_op_maxloc);
    OBJ_DESTRUCT(&ompi_mpi_op_bxor);
    OBJ_DESTRUCT(&ompi_mpi_op_lxor);
    OBJ_DESTRUCT(&ompi_mpi_op_bor);
    OBJ_DESTRUCT(&ompi_mpi_op_lor);
    OBJ_DESTRUCT(&ompi_mpi_op_band);
    OBJ_DESTRUCT(&ompi_mpi_op_land);
    OBJ_DESTRUCT(&ompi_mpi_op_prod);
    OBJ_DESTRUCT(&ompi_mpi_op_sum);
    OBJ_DESTRUCT(&ompi_mpi_op_min);
    OBJ_DESTRUCT(&ompi_mpi_op_max);
    OBJ_DESTRUCT(&ompi_mpi_op_null);

    /* Remove the op F2C table */
    OBJ_RELEASE(ompi_op_f_to_c_table);

    /* All done */
    return OMPI_SUCCESS;
}


/*
 * Create a new MPI_Op
 */
ompi_op_t *ompi_op_create_user(bool commute,
                               ompi_op_fortran_handler_fn_t func)
{
    ompi_op_t *new_op;

    /* Create a new object and ensure that it's valid */
    new_op = OBJ_NEW(ompi_op_t);
    if (NULL == new_op) {
        goto error;
    }

    /* Ensure that the constructor obtained a valid Fortran handle */
    if (OMPI_ERROR == new_op->o_f_to_c_index) {
        OBJ_RELEASE(new_op);
        new_op = NULL;
        goto error;
    }

    /*
     * The new object is valid -- initialize it.  If this is being
     * created from Fortran, the Fortran MPI API wrapper function
     * will override the o_flags field directly.  We cast the
     * function pointer type to the Fortran type arbitrarily -- it
     * only has to be a function pointer in order to store properly;
     * it doesn't matter what type it is (we'll cast it to the Right
     * type when we *use* it).
     */
    new_op->o_flags = OMPI_OP_FLAGS_ASSOC;
    if (commute) {
        new_op->o_flags |= OMPI_OP_FLAGS_COMMUTE;
    }

    /* Set the user-defined callback function.  The "fort_fn" member
       is part of a union, so it doesn't matter if this is a C or
       Fortran callback; we'll call the right flavor (per o_flags) at
       invocation time. */
    new_op->o_func.fort_fn = func;

 error:
    /* All done */
    return new_op;
}
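
/*
 * Illustrative sketch (not part of the original source; the calling
 * code below is an assumption for illustration): a C-side caller such
 * as the MPI_Op_create() binding is expected to invoke the function
 * above roughly like this, storing the user's C callback in the
 * union's Fortran slot per the comment above:
 *
 *     ompi_op_t *op =
 *         ompi_op_create_user(0 != commute,
 *                             (ompi_op_fortran_handler_fn_t) user_fn);
 *     if (NULL == op) {
 *         return MPI_ERR_INTERN;
 *     }
 */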


/*
 * See the lengthy comment in mpi/cxx/intercepts.cc for how the C++
 * MPI::Op callbacks work.
 */
void ompi_op_set_cxx_callback(ompi_op_t *op, MPI_User_function *fn)
{
    op->o_flags |= OMPI_OP_FLAGS_CXX_FUNC;
    /* The OMPI C++ intercept was previously stored in
       op->o_func.fort_fn by ompi_op_create_user().  So save that in
       cxx_data.intercept_fn and put the user's fn in
       cxx_data.user_fn. */
    op->o_func.cxx_data.intercept_fn =
        (ompi_op_cxx_handler_fn_t *) op->o_func.fort_fn;
    op->o_func.cxx_data.user_fn = fn;
}
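
/*
 * Illustrative sketch (assumption, not original source): per the
 * comment above, at reduction time the C++ flavor would be dispatched
 * roughly like this, passing the saved user_fn back to the intercept
 * so it can be re-invoked with C++ MPI:: handle types:
 *
 *     if (0 != (op->o_flags & OMPI_OP_FLAGS_CXX_FUNC)) {
 *         op->o_func.cxx_data.intercept_fn(inbuf, outbuf, &count,
 *                                          &dtype,
 *                                          op->o_func.cxx_data.user_fn);
 *     }
 */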


/**************************************************************************
 *
 * Static functions
 *
 **************************************************************************/

static int add_intrinsic(ompi_op_t *op, int fort_handle, int flags,
                         const char *name)
{
    /* Add the op to the table */
    OBJ_CONSTRUCT(op, ompi_op_t);
    if (op->o_f_to_c_index != fort_handle) {
        return OMPI_ERROR;
    }

    /* Set the members */
    op->o_flags = flags;
    strncpy(op->o_name, name, sizeof(op->o_name) - 1);
    op->o_name[sizeof(op->o_name) - 1] = '\0';

    /* Perform the selection on this op to fill in the function
       pointers (except for NULL and REPLACE, which don't get
       components) */
    if (OMPI_OP_BASE_FORTRAN_NULL != op->o_f_to_c_index &&
        OMPI_OP_BASE_FORTRAN_REPLACE != op->o_f_to_c_index) {
        return ompi_op_base_op_select(op);
    } else {
        return OMPI_SUCCESS;
    }
}


/*
 * Op constructor
 */
static void ompi_op_construct(ompi_op_t *new_op)
{
    int ret_val;

    /* Assign an entry in the Fortran <-> C translation array */
    ret_val = opal_pointer_array_add(ompi_op_f_to_c_table, new_op);
    new_op->o_f_to_c_index = ret_val;
}


/*
 * Op destructor
 */
static void ompi_op_destruct(ompi_op_t *op)
{
    /* Reset the ompi_op_f_to_c_table entry -- make sure that the
       entry is in the table */
    if (NULL != opal_pointer_array_get_item(ompi_op_f_to_c_table,
                                            op->o_f_to_c_index)) {
        opal_pointer_array_set_item(ompi_op_f_to_c_table,
                                    op->o_f_to_c_index, NULL);
    }
}