
ompi: remove obsolete c++ bindings

This commit contains the following changes:

The C++ bindings were removed from the standard in MPI-3.0. This
commit removes the entirety of the C++ bindings as well as the
support configury.

Removes all references to C++ from the man pages. This includes the
bindings themselves, all references to what C++ bindings return,
all not-available comments, and differences between C++ and other
language bindings.

If the user passes --enable-mpi-cxx, --enable-mpi-cxx-seek, or
--enable-cxx-exceptions, print a warning message and abort configure.

Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
Signed-off-by: Nathan Hjelm <hjelmn@google.com>
This commit is contained in:
Nathan Hjelm 2020-02-18 22:42:21 -08:00
parent f496f256cd
commit 0b8baa217d
360 changed files with 682 additions and 12274 deletions

View file

@@ -10,9 +10,6 @@
#
TRIM_OPTIONS=
if ! MAN_PAGE_BUILD_MPI_CXX_BINDINGS
TRIM_OPTIONS += --nocxx
endif
if ! MAN_PAGE_BUILD_MPIFH_BINDINGS
TRIM_OPTIONS += --nofortran
endif

README
View file

@@ -347,11 +347,6 @@ Compiler Notes
version of the Intel 12.1 Linux compiler suite, the problem will go
away.
- Early versions of the Portland Group 6.0 compiler have problems
creating the C++ MPI bindings as a shared library (e.g., v6.0-1).
Tests with later versions show that this has been fixed (e.g.,
v6.0-5).
- The Portland Group compilers prior to version 7.0 require the
"-Msignextend" compiler flag to extend the sign bit when converting
from a shorter to longer integer. This is different than other
@@ -370,24 +365,6 @@ Compiler Notes
- It has been reported that Pathscale 5.0.5 and 6.0.527 compilers
give an internal compiler error when trying to compile Open MPI.
- Using the MPI C++ bindings with older versions of the Pathscale
compiler on some platforms is an old issue that seems to be a
problem when Pathscale uses a back-end GCC 3.x compiler. Here's a
proposed solution from the Pathscale support team (from July 2010):
The proposed work-around is to install gcc-4.x on the system and
use the pathCC -gnu4 option. Newer versions of the compiler (4.x
and beyond) should have this fixed, but we'll have to test to
confirm it's actually fixed and working correctly.
We don't anticipate that this will be much of a problem for Open MPI
users these days (our informal testing shows that not many users are
still using GCC 3.x). Contact Pathscale support if you continue to
have problems with Open MPI's C++ bindings.
Note the MPI C++ bindings have been deprecated by the MPI Forum and
may not be supported in future releases.
- As of July 2017, the Pathscale compiler suite apparently has no
further commercial support, and it does not look like there will be
further releases. Any issues discovered regarding building /
@@ -1340,12 +1317,6 @@ MPI FUNCTIONALITY
Disable the MPI thread level MPI_THREAD_MULTIPLE (it is enabled by
default).
--enable-mpi-cxx
Enable building the C++ MPI bindings (default: disabled).
The MPI C++ bindings were deprecated in MPI-2.2, and removed from
the MPI standard in MPI-3.0.
--enable-mpi-java
Enable building of an EXPERIMENTAL Java MPI interface (disabled by
default). You may also need to specify --with-jdk-dir,
@@ -1914,7 +1885,7 @@ each different wrapper compiler (language):
ompi Synonym for "ompi-c"; Open MPI applications using the C
MPI bindings
ompi-c Open MPI applications using the C MPI bindings
ompi-cxx Open MPI applications using the C or C++ MPI bindings
ompi-cxx Open MPI applications using the C MPI bindings
ompi-fort Open MPI applications using the Fortran MPI bindings
------------------------------------------------------------------------

View file

@@ -70,12 +70,7 @@ date="Unreleased developer copy"
# functions over time; these technically did not change the interface
# because Fortran 77 does not link by parameter type.
# 4. Similar to libmpi, libmpi_cxx's version number refers to the
# public MPI interfaces. Note that this version number may or may not
# be affected by changes to inlined functions in OMPI's
# header-file-based C++ bindings implementation.
# 5. The ORTE and OPAL libraries will change versions when their
# 4. The ORTE and OPAL libraries will change versions when their
# public interfaces change (as relative to the layer(s) above them).
# None of the ORTE and OPAL interfaces are public to MPI applications,
# but they are "public" within the OMPI code base and select 3rd party
@@ -85,7 +80,6 @@ date="Unreleased developer copy"
# format.
libmpi_so_version=0:0:0
libmpi_cxx_so_version=0:0:0
libmpi_mpifh_so_version=0:0:0
libmpi_usempi_tkr_so_version=0:0:0
libmpi_usempi_ignore_tkr_so_version=0:0:0

View file

@@ -17,7 +17,6 @@ my $package_name;
my $package_version;
my $ompi_date;
my $opal_date;
my $cxx = '1';
my $fortran = '1';
my $f08 = '1';
my $input;
@@ -29,7 +28,6 @@ my $ok = Getopt::Long::GetOptions("package-name=s" => \$package_name,
"package-version=s" => \$package_version,
"ompi-date=s" => \$ompi_date,
"opal-date=s" => \$opal_date,
"cxx!" => \$cxx,
"fortran!" => \$fortran,
"f08!" => \$f08,
"input=s" => \$input,
@@ -58,10 +56,6 @@ $file =~ s/#PACKAGE_VERSION#/$package_version/g;
$file =~ s/#OMPI_DATE#/$ompi_date/g;
$file =~ s/#OPAL_DATE#/$opal_date/g;
if ($cxx == 0) {
$file =~ s/\n\.SH C\+\+ Syntax.+?\n\.SH/\n\.SH/s;
}
if ($fortran == 0) {
$file =~ s/\n\.SH Fortran Syntax.+?\n\.SH/\n\.SH/s;
}

View file

@@ -1,6 +1,6 @@
# -*- shell-script -*-
#
# Copyright (c) 2009-2019 Cisco Systems, Inc. All rights reserved
# Copyright (c) 2009-2020 Cisco Systems, Inc. All rights reserved
# Copyright (c) 2017-2018 Research Organization for Information Science
# and Technology (RIST). All rights reserved.
# Copyright (c) 2018 Los Alamos National Security, LLC. All rights
@@ -26,7 +26,6 @@ AC_DEFUN([OMPI_CONFIG_FILES],[
ompi/mpi/c/Makefile
ompi/mpi/c/profile/Makefile
ompi/mpi/cxx/Makefile
ompi/mpi/fortran/base/Makefile
ompi/mpi/fortran/mpif-h/Makefile
ompi/mpi/fortran/mpif-h/profile/Makefile

View file

@@ -166,36 +166,6 @@ case "x$enable_mpi_fortran" in
;;
esac
#
# C++
#
AC_MSG_CHECKING([if want C++ bindings])
AC_ARG_ENABLE(mpi-cxx,
AC_HELP_STRING([--enable-mpi-cxx],
[enable C++ MPI bindings (default: disabled)]))
if test "$enable_mpi_cxx" = "yes"; then
AC_MSG_RESULT([yes])
WANT_MPI_CXX_SUPPORT=1
else
AC_MSG_RESULT([no])
WANT_MPI_CXX_SUPPORT=0
fi
AC_MSG_CHECKING([if want MPI::SEEK_SET support])
AC_ARG_ENABLE([mpi-cxx-seek],
[AC_HELP_STRING([--enable-mpi-cxx-seek],
[enable support for MPI::SEEK_SET, MPI::SEEK_END, and MPI::SEEK_POS in C++ bindings (default: enabled)])])
if test "$enable_mpi_cxx_seek" != "no" ; then
AC_MSG_RESULT([yes])
OMPI_WANT_MPI_CXX_SEEK=1
else
AC_MSG_RESULT([no])
OMPI_WANT_MPI_CXX_SEEK=0
fi
AC_DEFINE_UNQUOTED([OMPI_WANT_MPI_CXX_SEEK], [$OMPI_WANT_MPI_CXX_SEEK],
[do we want to try to work around C++ bindings SEEK_* issue?])
# Remove these when we finally kill them once and for all
AC_ARG_ENABLE([mpi1-compatibility],
[AC_HELP_STRING([--enable-mpi1-compatibility],

View file

@@ -1,94 +0,0 @@
dnl -*- shell-script -*-
dnl
dnl Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
dnl University Research and Technology
dnl Corporation. All rights reserved.
dnl Copyright (c) 2004-2005 The University of Tennessee and The University
dnl of Tennessee Research Foundation. All rights
dnl reserved.
dnl Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
dnl University of Stuttgart. All rights reserved.
dnl Copyright (c) 2004-2005 The Regents of the University of California.
dnl All rights reserved.
dnl Copyright (c) 2008 Cisco Systems, Inc. All rights reserved.
dnl $COPYRIGHT$
dnl
dnl Additional copyrights may follow
dnl
dnl $HEADER$
dnl
AC_DEFUN([OMPI_CXX_FIND_EXCEPTION_FLAGS],[
#
# Arguments: none
#
# Dependencies: none
#
# Get the exception handling flags for the C++ compiler. Leaves
# CXXFLAGS undisturbed.
# Provides --with-exflags command line argument for configure as well.
#
# Sets OMPI_CXX_EXCEPTION_CXXFLAGS and OMPI_CXX_EXCEPTION_LDFLAGS as
# appropriate.
# Must call AC_SUBST manually
#
# Command line flags
AC_ARG_WITH(exflags,
AC_HELP_STRING([--with-exflags],
[Specify flags necessary to enable C++ exceptions]),
ompi_force_exflags="$withval")
ompi_CXXFLAGS_SAVE="$CXXFLAGS"
AC_MSG_CHECKING([for compiler exception flags])
# See which flags to use
if test "$ompi_force_exflags" != ""; then
# If the user supplied flags, use those
ompi_exflags="$ompi_force_exflags"
elif test "$GXX" = "yes"; then
# g++ has changed their flags a few times. Sigh.
CXXFLAGS="$CXXFLAGS -fexceptions"
AC_LANG_SAVE
AC_LANG_CPLUSPLUS
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[]], [[try { int i = 0; } catch(...) { int j = 2; }]])], ompi_happy=1, ompi_happy=0)
if test "$ompi_happy" = "1"; then
ompi_exflags="-fexceptions";
else
CXXFLAGS="$CXXFLAGS_SAVE -fhandle-exceptions"
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[]], [[try { int i = 0; } catch(...) { int j = 2; }]])], ompi_happy=1, ompi_happy=0)
if test "$ompi_happy" = "1"; then
ompi_exflags="-fhandle-exceptions";
fi
fi
AC_LANG_RESTORE
elif test "`basename $CXX`" = "KCC"; then
# KCC flags
ompi_exflags="--exceptions"
fi
CXXFLAGS="$ompi_CXXFLAGS_SAVE"
# Save the result
OMPI_CXX_EXCEPTIONS_CXXFLAGS="$ompi_exflags"
OMPI_CXX_EXCEPTIONS_LDFLAGS="$ompi_exflags"
if test "$ompi_exflags" = ""; then
AC_MSG_RESULT([none necessary])
else
AC_MSG_RESULT([$ompi_exflags])
fi
# Clean up
unset ompi_force_exflags ompi_CXXFLAGS_SAVE ompi_exflags ompi_happy])dnl

View file

@@ -1,44 +0,0 @@
dnl -*- shell-script -*-
dnl
dnl Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
dnl University Research and Technology
dnl Corporation. All rights reserved.
dnl Copyright (c) 2004-2005 The University of Tennessee and The University
dnl of Tennessee Research Foundation. All rights
dnl reserved.
dnl Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
dnl University of Stuttgart. All rights reserved.
dnl Copyright (c) 2004-2005 The Regents of the University of California.
dnl All rights reserved.
dnl Copyright (c) 2008 Cisco Systems, Inc. All rights reserved.
dnl $COPYRIGHT$
dnl
dnl Additional copyrights may follow
dnl
dnl $HEADER$
dnl
AC_DEFUN([OMPI_CXX_FIND_TEMPLATE_PARAMETERS],[
#
# Arguments: none
#
# Dependencies: None
#
# Get the C++ compiler template parameters.
#
# Adds to CXXFLAGS
AC_MSG_CHECKING([for C++ compiler template parameters])
if test "$BASECXX" = "KCC"; then
new_flags="--one_instantiation_per_object"
CXXFLAGS="$CXXFLAGS $new_flags"
else
new_flags="none needed"
fi
AC_MSG_RESULT([$new_flags])
#
# Clean up
#
unset new_flags
])

View file

@@ -1,172 +0,0 @@
dnl -*- shell-script -*-
dnl
dnl Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
dnl University Research and Technology
dnl Corporation. All rights reserved.
dnl Copyright (c) 2004-2005 The University of Tennessee and The University
dnl of Tennessee Research Foundation. All rights
dnl reserved.
dnl Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
dnl University of Stuttgart. All rights reserved.
dnl Copyright (c) 2004-2005 The Regents of the University of California.
dnl All rights reserved.
dnl Copyright (c) 2015 Research Organization for Information Science
dnl and Technology (RIST). All rights reserved.
dnl $COPYRIGHT$
dnl
dnl Additional copyrights may follow
dnl
dnl $HEADER$
dnl
AC_DEFUN([OMPI_CXX_FIND_TEMPLATE_REPOSITORY],[
AC_REQUIRE([AC_PROG_GREP])
#
# Arguments: None
#
# Dependencies: None
#
# See if the compiler makes template repository directories
# Warning: this is a really screwy example! -JMS
#
# Sets OMPI_CXX_TEMPLATE_REPOSITORY to the template repository, or blank.
# Must call AC_SUBST manually
#
AC_CACHE_CHECK([for C++ template_repository_directory],
[ompi_cv_cxx_template_repository],
[_OMPI_CXX_FIND_TEMPLATE_REPOSITORY])
if test "$ompi_cv_cxx_template_repository" = "not used" ; then
OMPI_CXX_TEMPLATE_REPOSITORY=
elif test "$ompi_cv_cxx_template_repository" = "templates not supported" ; then
OMPI_CXX_TEMPLATE_REPOSITORY=
else
OMPI_CXX_TEMPLATE_REPOSITORY="$ompi_cv_cxx_template_repository"
fi
])
AC_DEFUN([_OMPI_CXX_FIND_TEMPLATE_REPOSITORY],[
# Find the repository
mkdir conf_tmp_$$
cd conf_tmp_$$
cat > conftest.h <<EOF
template <class T>
class foo {
public:
foo(T yow) : data(yow) { yow.member(3); };
void member(int i);
private:
T data;
};
class bar {
public:
bar(int i) { data = i; };
void member(int j) { data = data * j; };
private:
int data;
};
EOF
cat > conftest2.C <<EOF
#include "conftest.h"
void
some_other_function(void)
{
foo<bar> var1(6);
foo< foo<bar> > var2(var1);
}
EOF
cat > conftest1.C <<EOF
#include "conftest.h"
void some_other_function(void);
template <class T>
void
foo<T>::member(int i)
{
i += 2;
}
int
main()
{
foo<bar> var1(6);
foo< foo<bar> > var2(var1);
some_other_function();
return 0;
}
EOF
ompi_template_failed=
echo configure:__oline__: $CXX $CXXFLAGS -c conftest1.C >&5
$CXX $CXXFLAGS -c conftest1.C >&5 2>&5
if test ! -f conftest1.o ; then
ompi_cv_cxx_template_repository="templates not supported"
echo configure:__oline__: here is the program that failed: >&5
cat conftest1.C >&5
echo configure:__oline__: here is conftest.h: >&5
cat conftest.h >&5
ompi_template_failed=1
else
echo configure:__oline__: $CXX $CXXFLAGS -c conftest2.C >&5
$CXX $CXXFLAGS -c conftest2.C >&5 2>&5
if test ! -f conftest2.o ; then
ompi_cv_cxx_template_repository=
echo configure:__oline__: here is the program that failed: >&5
cat conftest2.C >&5
echo configure:__oline__: here is conftest.h: >&5
cat conftest.h >&5
else
rm -rf conftest*
for ompi_file in `ls`
do
if test "$ompi_file" != "." && test "$ompi_file" != ".."; then
# Is it a directory?
if test -d "$ompi_file"; then
ompi_template_dir="$ompi_file $ompi_template_dir"
# Or is it a file?
else
name="`echo $ompi_file | cut -d. -f1`"
temp_mask=
if test "$name" = "main" || test "$name" = "other"; then
temp_mask="`echo $ompi_file | cut -d. -f2`"
if test "$ompi_template_filemask" = ""; then
ompi_template_filemask="$temp_mask";
elif test "`echo $ompi_template_filemask | $GREP $temp_mask`" = ""; then
ompi_template_filemask="$ompi_template_filemask $temp_mask"
fi
fi
fi
fi
done
if test "$ompi_template_filemask" != ""; then
temp_mask=
for mask in $ompi_template_filemask
do
temp_mask="*.$mask $temp_mask"
done
ompi_template_filemask=$temp_mask
fi
fi
fi
ompi_cv_cxx_template_repository="$ompi_template_dir $ompi_template_filemask"
if test "`echo $ompi_cv_cxx_template_repository`" = ""; then
ompi_cv_cxx_template_repository="not used"
fi
cd ..
rm -rf conf_tmp_$$
# Clean up
unset ompi_file ompi_template_failed ompi_template_dir])

View file

@@ -1,44 +0,0 @@
dnl -*- shell-script -*-
dnl
dnl Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
dnl University Research and Technology
dnl Corporation. All rights reserved.
dnl Copyright (c) 2004-2005 The University of Tennessee and The University
dnl of Tennessee Research Foundation. All rights
dnl reserved.
dnl Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
dnl University of Stuttgart. All rights reserved.
dnl Copyright (c) 2004-2005 The Regents of the University of California.
dnl All rights reserved.
dnl Copyright (c) 2008 Cisco Systems, Inc. All rights reserved.
dnl $COPYRIGHT$
dnl
dnl Additional copyrights may follow
dnl
dnl $HEADER$
dnl
AC_DEFUN([OMPI_CXX_HAVE_EXCEPTIONS],[
#
# Arguments: None
#
# Dependencies: None
#
# Check to see if the C++ compiler can handle exceptions
#
# Sets OMPI_CXX_EXCEPTIONS to 1 if compiler has exceptions, 0 if not
#
AC_MSG_CHECKING([for throw/catch])
AC_LANG_SAVE
AC_LANG_CPLUSPLUS
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[]], [[int i=1; throw(i);]])],
OMPI_CXX_EXCEPTIONS=1, OMPI_CXX_EXCEPTIONS=0)
if test "$OMPI_CXX_EXCEPTIONS" = "1"; then
AC_MSG_RESULT([yes])
else
AC_MSG_RESULT([no])
fi
# Clean up
AC_LANG_RESTORE])dnl

View file

@@ -1,6 +1,7 @@
# -*- shell-script -*-
#
# Copyright (c) 2020 Intel, Inc. All rights reserved.
# Copyright (c) 2020 Cisco Systems, Inc. All rights reserved
# $COPYRIGHT$
#
# Additional copyrights may follow
@@ -34,5 +35,27 @@ AC_DEFUN([OMPI_CHECK_DELETED_OPTIONS],[
AC_MSG_ERROR([Build cannot continue.])
fi
# Open MPI C++ bindings were removed in v5.0
cxx=0
AC_ARG_ENABLE([mpi-cxx],
[AC_HELP_STRING([--enable-mpi-cxx],
[*DELETED* Build the MPI C++ bindings])],
[cxx=1])
AC_ARG_ENABLE([mpi-cxx-seek],
[AC_HELP_STRING([--enable-mpi-cxx-seek],
[*DELETED* Build support for MPI::SEEK])],
[cxx=1])
AC_ARG_ENABLE([cxx-exceptions],
[AC_HELP_STRING([--enable-cxx-exceptions],
[*DELETED* Build support for C++ exceptions in the MPI C++ bindings])],
[cxx=1])
AS_IF([test $cxx -eq 1],
[AC_MSG_WARN([The MPI C++ bindings have been removed from Open MPI.])
AC_MSG_WARN([If you need support for the MPI C++ bindings, you])
AC_MSG_WARN([will need to use an older version of Open MPI.])
AC_MSG_ERROR([Build cannot continue.])
])
OPAL_VAR_SCOPE_POP
])
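For reference, with the checks above in place, a configure invocation that requests any of the removed options now aborts roughly as follows (a sketch based on the AC_MSG_WARN/AC_MSG_ERROR strings above; exact formatting depends on the Autoconf version):

shell$ ./configure --enable-mpi-cxx
...
configure: WARNING: The MPI C++ bindings have been removed from Open MPI.
configure: WARNING: If you need support for the MPI C++ bindings, you
configure: WARNING: will need to use an older version of Open MPI.
configure: error: Build cannot continue.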

View file

@@ -13,7 +13,7 @@ dnl All rights reserved.
dnl Copyright (c) 2006 Los Alamos National Security, LLC. All rights
dnl reserved.
dnl Copyright (c) 2007-2009 Sun Microsystems, Inc. All rights reserved.
dnl Copyright (c) 2008-2013 Cisco Systems, Inc. All rights reserved.
dnl Copyright (c) 2008-2020 Cisco Systems, Inc. All rights reserved.
dnl Copyright (c) 2015-2016 Research Organization for Information Science
dnl and Technology (RIST). All rights reserved.
dnl $COPYRIGHT$
@@ -23,13 +23,11 @@ dnl
dnl $HEADER$
dnl
# This macro is necessary to get the title to be displayed first. :-)
dnl This macro is necessary to get the title to be displayed first. :-)
AC_DEFUN([OMPI_SETUP_CXX_BANNER],[
opal_show_subtitle "C++ compiler and preprocessor"
])
# This macro is necessary because PROG_CXX* is REQUIREd by multiple
# places in SETUP_CXX.
AC_DEFUN([OMPI_PROG_CXX],[
OPAL_VAR_SCOPE_PUSH([ompi_cxxflags_save])
ompi_cxxflags_save="$CXXFLAGS"
@@ -39,53 +37,23 @@ AC_DEFUN([OMPI_PROG_CXX],[
OPAL_VAR_SCOPE_POP
])
# OMPI_SETUP_CXX()
# ----------------
# Do everything required to setup the C++ compiler. Safe to AC_REQUIRE
# this macro.
dnl OMPI_SETUP_CXX()
dnl ----------------
dnl Do everything required to setup the C++ compiler for the mpic++
dnl wrapper compiler (there is no C++ code in Open MPI, so we do not
dnl need to setup for internal C++ compilations). Safe to AC_REQUIRE
dnl this macro.
AC_DEFUN([OMPI_SETUP_CXX],[
OPAL_VAR_SCOPE_PUSH([ompi_cxx_argv0])
# Do a little tomfoolery to get the subsection title printed first
AC_REQUIRE([OMPI_SETUP_CXX_BANNER])
_OMPI_SETUP_CXX_COMPILER
_OMPI_CXX_CHECK_EXCEPTIONS
AS_IF([test "$WANT_MPI_CXX_SUPPORT" = "1"],
[OMPI_CXX_FIND_TEMPLATE_REPOSITORY
OMPI_CXX_FIND_TEMPLATE_PARAMETERS
OPAL_CHECK_IDENT([CXX], [CXXFLAGS], [cc], [C++])])
_OMPI_CXX_CHECK_BUILTIN
_OMPI_CXX_CHECK_2D_CONST_CAST
AM_CONDITIONAL(OMPI_BUILD_MPI_CXX_BINDINGS, [test "$WANT_MPI_CXX_SUPPORT" = 1])
AC_DEFINE_UNQUOTED(OMPI_BUILD_CXX_BINDINGS, $WANT_MPI_CXX_SUPPORT,
[Whether we want MPI C++ support or not])
])
# _OMPI_SETUP_CXX_COMPILER()
# --------------------------
# Setup the CXX compiler
AC_DEFUN([_OMPI_SETUP_CXX_COMPILER],[
OPAL_VAR_SCOPE_PUSH(ompi_cxx_compiler_works)
# There's a few cases here:
#
# 1. --enable-mpi-cxx was supplied: error if we don't find a C++
# compiler
# 2. --disable-mpi-cxx was supplied: check for a C++ compiler anyway
# (so we can have a functional mpic++ wrapper compiler), but
# don't error if we don't find one.
# 3. neither was specified: same as #2
#
# Then only proceed to do all the rest of the C++ checks if we
# both found a c++ compiler and want the C++ bindings (i.e., either
# case #1 or #3)
# Must REQUIRE the PROG_CXX macro and not call it directly here for
# reasons well-described in the AC2.64 (and beyond) docs.
# Must REQUIRE the PROG_CXX macro and not call it directly here
# for reasons well-described in the AC2.64 (and beyond) docs --
# see the docs for AC PROG_CC for details.
AC_REQUIRE([OMPI_PROG_CXX])
BASECXX="`basename $CXX`"
AS_IF([test "x$CXX" = "x"], [CXX=none])
@@ -97,145 +65,7 @@ AC_DEFUN([_OMPI_SETUP_CXX_COMPILER],[
AC_DEFINE_UNQUOTED(OMPI_CXX, "$CXX", [OMPI underlying C++ compiler])
AC_SUBST(OMPI_CXX_ABSOLUTE)
# Make sure that the C++ compiler both works and is actually a C++
# compiler (if not cross-compiling). Don't just use the AC macro
# so that we can have a pretty message. Do something here that
# should force the linking of C++-specific things (e.g., STL
# strings) so that we can force a hard check of compiling,
# linking, and running a C++ application. Note that some C
# compilers, such as at least some versions of the GNU and Intel
# compilers, will detect that the file extension is ".cc" and
# therefore switch into a pseudo-C++ personality which works for
# *compiling*, but does not work for *linking*. So in this test,
# we want to cover the entire spectrum (compiling, linking,
# running). Note that it is not a fatal error if the C++ compiler
# does not work unless the user specifically requested the C++
# bindings.
AS_IF([test "$CXX" = "none"],
[ompi_cxx_compiler_works=no],
[AS_IF([test "$ompi_cv_cxx_compiler_vendor" = "microsoft" ],
[ompi_cxx_compiler_works=yes],
[OPAL_CHECK_COMPILER_WORKS([C++], [#include <string>
],
[std::string foo = "Hello, world"],
[ompi_cxx_compiler_works=yes],
[ompi_cxx_compiler_works=no])])])
AS_IF([test "$ompi_cxx_compiler_works" = "yes"],
[_OMPI_SETUP_CXX_COMPILER_BACKEND],
[AS_IF([test "$enable_mpi_cxx" = "yes"],
[AC_MSG_WARN([Could not find functional C++ compiler, but])
AC_MSG_WARN([support for the C++ MPI bindings was requested.])
AC_MSG_ERROR([Cannot continue])],
[WANT_MPI_CXX_SUPPORT=0])])
AC_MSG_CHECKING([if able to build the MPI C++ bindings])
AS_IF([test "$WANT_MPI_CXX_SUPPORT" = "1"],
[AC_MSG_RESULT([yes])],
[AC_MSG_RESULT([no])
AS_IF([test "$enable_mpi_cxx" = "yes"],
[AC_MSG_WARN([MPI C++ binding support requested but not delivered])
AC_MSG_ERROR([Cannot continue])])])
AS_IF([test "$WANT_MPI_CXX_SUPPORT" = "1"],
[OPAL_CXX_COMPILER_VENDOR([ompi_cxx_vendor])])
OPAL_VAR_SCOPE_POP
])
# _OMPI_SETUP_CXX_COMPILER_BACKEND()
# ----------------------------------
# Back end of _OMPI_SETUP_CXX_COMPILER()
AC_DEFUN([_OMPI_SETUP_CXX_COMPILER_BACKEND],[
# Do we want code coverage
if test "$WANT_COVERAGE" = "1" && test "$WANT_MPI_CXX_SUPPORT" = "1"; then
if test "$ompi_cxx_vendor" = "gnu" ; then
AC_MSG_WARN([$OMPI_COVERAGE_FLAGS has been added to CFLAGS (--enable-coverage)])
WANT_DEBUG=1
CXXFLAGS="${CXXFLAGS} $OMPI_COVERAGE_FLAGS"
OPAL_WRAPPER_FLAGS_ADD([CXXFLAGS], [$OMPI_COVERAGE_FLAGS])
else
AC_MSG_WARN([Code coverage functionality is currently available only with GCC suite])
AC_MSG_ERROR([Configure: cannot continue])
fi
fi
# Do we want debugging?
if test "$WANT_DEBUG" = "1" && test "$enable_debug_symbols" != "no" ; then
CXXFLAGS="$CXXFLAGS -g"
OPAL_FLAGS_UNIQ(CXXFLAGS)
AC_MSG_WARN([-g has been added to CXXFLAGS (--enable-debug)])
fi
# These flags are generally g++-specific; even the g++-impersonating
# compilers won't accept them.
OMPI_CXXFLAGS_BEFORE_PICKY="$CXXFLAGS"
if test "$WANT_PICKY_COMPILER" = 1 && test "$ompi_cxx_vendor" = "gnu"; then
add="-Wall -Wundef -Wno-long-long"
# see if -Wno-long-double works...
AC_LANG_PUSH(C++)
CXXFLAGS_orig="$CXXFLAGS"
CXXFLAGS="$CXXFLAGS $add -Wno-long-double -fstrict-prototype"
AC_CACHE_CHECK([if $CXX supports -Wno-long-double],
[ompi_cv_cxx_wno_long_double],
[AC_TRY_COMPILE([], [],
[dnl Alright, the -Wno-long-double did not produce any errors...
dnl Well well, try to extract a warning regarding unrecognized or ignored options
AC_TRY_COMPILE([], [long double test;],
[
ompi_cv_cxx_wno_long_double="yes"
if test -s conftest.err ; then
dnl Yes, it should be "ignor", in order to catch ignoring and ignore
for i in invalid ignor unrecognized ; do
$GREP -iq $i conftest.err
if test "$?" = "0" ; then
ompi_cv_cxx_wno_long_double="no",
break;
fi
done
fi
],
[ompi_cv_cxx_wno_long_double="no"])],
[ompi_cv_cxx_wno_long_double="no"])])
CXXFLAGS="$CXXFLAGS_orig"
AC_LANG_POP(C++)
if test "$ompi_cv_cxx_wno_long_double" = "yes" ; then
add="$add -Wno-long-double"
fi
CXXFLAGS="$CXXFLAGS $add"
OPAL_FLAGS_UNIQ(CXXFLAGS)
if test "$add" != "" ; then
AC_MSG_WARN([$add has been added to CXXFLAGS (--enable-picky)])
fi
unset add
fi
# See if this version of g++ allows -finline-functions
if test "$GXX" = "yes"; then
CXXFLAGS_orig="$CXXFLAGS"
CXXFLAGS="$CXXFLAGS -finline-functions"
add=
AC_LANG_PUSH(C++)
AC_CACHE_CHECK([if $CXX supports -finline-functions],
[ompi_cv_cxx_finline_functions],
[AC_TRY_COMPILE([], [],
[ompi_cv_cxx_finline_functions="yes"],
[ompi_cv_cxx_finline_functions="no"])])
AC_LANG_POP(C++)
if test "$ompi_cv_cxx_finline_functions" = "yes" ; then
add=" -finline-functions"
fi
CXXFLAGS="$CXXFLAGS_orig$add"
OPAL_FLAGS_UNIQ(CXXFLAGS)
if test "$add" != "" ; then
AC_MSG_WARN([$add has been added to CXXFLAGS])
fi
unset add
fi
# Make sure we can link with the C compiler
if test "$ompi_cv_cxx_compiler_vendor" != "microsoft"; then
OPAL_LANG_LINK_WITH_C([C++], [],
[cat <<EOF >&2
**********************************************************************
@@ -249,206 +79,12 @@ AC_DEFUN([_OMPI_SETUP_CXX_COMPILER_BACKEND],[
**********************************************************************
EOF
AC_MSG_ERROR([C and C++ compilers are not link compatible. Can not continue.])])
fi
# If we are on HP-UX, ensure that we're using aCC
case "$host" in
*hpux*)
if test "$BASECXX" = "CC"; then
AC_MSG_WARN([*** You will probably have problems compiling the MPI 2])
AC_MSG_WARN([*** C++ bindings with the HP-UX CC compiler. You should])
AC_MSG_WARN([*** probably be using the aCC compiler. Re-run configure])
AC_MSG_WARN([*** with the environment variable "CXX=aCC".])
fi
;;
esac
# Note: gcc-impersonating compilers accept -O3
if test "$WANT_DEBUG" = "1"; then
OPTFLAGS=
else
if test "$GXX" = yes; then
OPTFLAGS="-O3"
else
OPTFLAGS="-O"
fi
fi
# config/ompi_ensure_contains_optflags.m4
OPAL_ENSURE_CONTAINS_OPTFLAGS(["$CXXFLAGS"])
AC_MSG_CHECKING([for C++ optimization flags])
AC_MSG_RESULT([$co_result])
CXXFLAGS="$co_result"
# bool type size and alignment
AC_LANG_PUSH(C++)
AC_CHECK_SIZEOF(bool)
OPAL_C_GET_ALIGNMENT(bool, OPAL_ALIGNMENT_CXX_BOOL)
AC_LANG_POP(C++)
])
# _OMPI_CXX_CHECK_EXCEPTIONS()
# ----------------------------
# Check for exceptions, skipping the test if we don't want the C++
# bindings
AC_DEFUN([_OMPI_CXX_CHECK_EXCEPTIONS],[
# Check for special things due to C++ exceptions
ENABLE_CXX_EXCEPTIONS=no
HAVE_CXX_EXCEPTIONS=0
AC_ARG_ENABLE([cxx-exceptions],
[AC_HELP_STRING([--enable-cxx-exceptions],
[enable support for C++ exceptions (default: disabled)])],
[ENABLE_CXX_EXCEPTIONS="$enableval"])
AC_MSG_CHECKING([if want C++ exception handling])
AS_IF([test "$WANT_MPI_CXX_SUPPORT" = "0"],
[AS_IF([test "$$enable_cxx_exceptions" = "yes"],
[AC_MSG_RESULT([error])
AC_MSG_WARN([--enable-cxx-exceptions was specified, but the MPI C++ bindings were disabled])
AC_MSG_ERROR([Cannot continue])],
[AC_MSG_RESULT([skipped])])],
[_OMPI_CXX_CHECK_EXCEPTIONS_BACKEND])
AC_DEFINE_UNQUOTED(OMPI_HAVE_CXX_EXCEPTION_SUPPORT, $HAVE_CXX_EXCEPTIONS,
[Whether or not we have compiled with C++ exceptions support])
])
# _OMPI_CXX_CHECK_EXCEPTIONS_BACKEND()
# ------------------------------------
# Back end of _OMPI_CXX_CHECK_EXCEPTIONS
AC_DEFUN([_OMPI_CXX_CHECK_EXCEPTIONS_BACKEND],[
AC_MSG_RESULT([$ENABLE_CXX_EXCEPTIONS])
if test "$ENABLE_CXX_EXCEPTIONS" = "yes"; then
# config/cxx_have_exceptions.m4
OMPI_CXX_HAVE_EXCEPTIONS
# config/cxx_find_exception_flags.m4
OMPI_CXX_FIND_EXCEPTION_FLAGS
if test "$OMPI_CXX_EXCEPTIONS" = "1"; then
HAVE_CXX_EXCEPTIONS=1
# Test to see if the C compiler likes these flags
AC_MSG_CHECKING([to see if C compiler likes the exception flags])
CFLAGS="$CFLAGS $OMPI_CXX_EXCEPTIONS_CXXFLAGS"
AC_LANG_SAVE
AC_LANG_C
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[]], [[int i = 0;]])],
[AC_MSG_RESULT([yes])],
[AC_MSG_RESULT([no])
AC_MSG_WARN([C++ exception flags are different between the C and C++ compilers; this configure script cannot currently handle this scenario. Either disable C++ exception support or send mail to the Open MPI users list.])
AC_MSG_ERROR([*** Cannot continue])])
AC_LANG_RESTORE
# We can't test the F77 and F90 compilers now because we
# haven't found/set them up yet. So just save the flags
# and test them later (in ompi_setup_f77.m4 and
# ompi_setup_f90.m4).
CXXFLAGS="$CXXFLAGS $OMPI_CXX_EXCEPTIONS_CXXFLAGS"
LDFLAGS="$LDFLAGS $OMPI_CXX_EXCEPTIONS_LDFLAGS"
OPAL_WRAPPER_FLAGS_ADD([CFLAGS], [$OMPI_CXX_EXCEPTIONS_CXXFLAGS])
OPAL_WRAPPER_FLAGS_ADD([CXXFLAGS], [$OMPI_CXX_EXCEPTIONS_CXXFLAGS])
OPAL_WRAPPER_FLAGS_ADD([FCFLAGS], [$OMPI_CXX_EXCEPTIONS_CXXFLAGS])
fi
fi
])
# _OMPI_CXX_CHECK_BUILTIN
# -----------------------
# Check for __builtin_* stuff
AC_DEFUN([_OMPI_CXX_CHECK_BUILTIN],[
OPAL_VAR_SCOPE_PUSH([have_cxx_builtin_expect have_cxx_builtin_prefetch])
have_cxx_builtin_expect=0
have_cxx_builtin_prefetch=0
AS_IF([test "$WANT_MPI_CXX_SUPPORT" = "1"],
[_OMPI_CXX_CHECK_BUILTIN_BACKEND])
AC_DEFINE_UNQUOTED([OMPI_CXX_HAVE_BUILTIN_EXPECT],
[$have_cxx_builtin_expect],
[Whether C++ compiler supports __builtin_expect])
AC_DEFINE_UNQUOTED([OMPI_CXX_HAVE_BUILTIN_PREFETCH],
[$have_cxx_builtin_prefetch],
[Whether C++ compiler supports __builtin_prefetch])
OPAL_VAR_SCOPE_POP
])
# _OMPI_CXX_CHECK_BUILTIN_BACKEND
# -------------------------------
# Back end of _OMPI_CXX_CHECK_BUILTIN
AC_DEFUN([_OMPI_CXX_CHECK_BUILTIN_BACKEND],[
# see if the C++ compiler supports __builtin_expect
AC_LANG_PUSH(C++)
AC_CACHE_CHECK([if $CXX supports __builtin_expect],
[ompi_cv_cxx_supports___builtin_expect],
[AC_TRY_LINK([],
[void *ptr = (void*) 0;
if (__builtin_expect (ptr != (void*) 0, 1)) return 0;],
[ompi_cv_cxx_supports___builtin_expect="yes"],
[ompi_cv_cxx_supports___builtin_expect="no"])])
if test "$ompi_cv_cxx_supports___builtin_expect" = "yes" ; then
have_cxx_builtin_expect=1
else
have_cxx_builtin_expect=0
fi
AC_LANG_POP(C++)
# see if the C++ compiler supports __builtin_prefetch
AC_LANG_PUSH(C++)
AC_CACHE_CHECK([if $CXX supports __builtin_prefetch],
[ompi_cv_cxx_supports___builtin_prefetch],
[AC_TRY_LINK([],
[int ptr;
__builtin_prefetch(&ptr,0,0);],
[ompi_cv_cxx_supports___builtin_prefetch="yes"],
[ompi_cv_cxx_supports___builtin_prefetch="no"])])
if test "$ompi_cv_cxx_supports___builtin_prefetch" = "yes" ; then
have_cxx_builtin_prefetch=1
else
have_cxx_builtin_prefetch=0
fi
AC_LANG_POP(C++)
])
# _OMPI_CXX_CHECK_2D_CONST_CAST
# -----------------------------
# Check for compiler support of 2D const casts
AC_DEFUN([_OMPI_CXX_CHECK_2D_CONST_CAST],[
OPAL_VAR_SCOPE_PUSH([use_2d_const_cast])
use_2d_const_cast=0
AS_IF([test "$WANT_MPI_CXX_SUPPORT" = "1"],
[_OMPI_CXX_CHECK_2D_CONST_CAST_BACKEND])
AC_DEFINE_UNQUOTED([OMPI_CXX_SUPPORTS_2D_CONST_CAST],
[$use_2d_const_cast],
[Whether a const_cast on a 2-d array will work with the C++ compiler])
OPAL_VAR_SCOPE_POP
])
# _OMPI_CXX_CHECK_2D_CONST_CAST_BACKEND
# ---------------------------------
# Back end of _OMPI_CXX_CHECK_2D_CONST_CAST
AC_DEFUN([_OMPI_CXX_CHECK_2D_CONST_CAST_BACKEND],[
# see if the compiler supports const_cast of 2-dimensional arrays
AC_LANG_PUSH(C++)
AC_CACHE_CHECK([if $CXX supports const_cast<> properly],
[ompi_cv_cxx_supports_2d_const_cast],
[AC_TRY_COMPILE([int non_const_func(int ranges[][3]);
int cast_test(const int ranges[][3]) {
return non_const_func(const_cast<int(*)[3]>(ranges));
}],
[],
[ompi_cv_cxx_supports_2d_const_cast="yes"],
[ompi_cv_cxx_supports_2d_const_cast="no"])])
if test "$ompi_cv_cxx_supports_2d_const_cast" = "yes" ; then
use_2d_const_cast=1
fi
AC_LANG_POP(C++)
])

View file

@@ -1,226 +0,0 @@
dnl -*- shell-script -*-
dnl
dnl Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
dnl University Research and Technology
dnl Corporation. All rights reserved.
dnl Copyright (c) 2004-2006 The University of Tennessee and The University
dnl of Tennessee Research Foundation. All rights
dnl reserved.
dnl Copyright (c) 2004-2008 High Performance Computing Center Stuttgart,
dnl University of Stuttgart. All rights reserved.
dnl Copyright (c) 2004-2006 The Regents of the University of California.
dnl All rights reserved.
dnl Copyright (c) 2006 Los Alamos National Security, LLC. All rights
dnl reserved.
dnl Copyright (c) 2007-2009 Sun Microsystems, Inc. All rights reserved.
dnl Copyright (c) 2008-2013 Cisco Systems, Inc. All rights reserved.
dnl Copyright (c) 2015-2016 Research Organization for Information Science
dnl and Technology (RIST). All rights reserved.
dnl $COPYRIGHT$
dnl
dnl Additional copyrights may follow
dnl
dnl $HEADER$
dnl
# This macro is necessary to get the title to be displayed first. :-)
AC_DEFUN([OPAL_SETUP_CXX_BANNER],[
opal_show_subtitle "C++ compiler and preprocessor"
])
# This macro is necessary because PROG_CXX* is REQUIREd by multiple
# places in SETUP_CXX.
AC_DEFUN([OPAL_PROG_CXX],[
OPAL_VAR_SCOPE_PUSH([opal_cxxflags_save])
opal_cxxflags_save="$CXXFLAGS"
AC_PROG_CXX
AC_PROG_CXXCPP
CXXFLAGS="$opal_cxxflags_save"
OPAL_VAR_SCOPE_POP
])
# OPAL_SETUP_CXX()
# ----------------
# Do everything required to setup the C++ compiler. Safe to AC_REQUIRE
# this macro.
AC_DEFUN([OPAL_SETUP_CXX],[
AC_REQUIRE([OPAL_SETUP_CXX_BANNER])
_OPAL_SETUP_CXX_COMPILER
OPAL_CXX_COMPILER_VENDOR([opal_cxx_vendor])
_OPAL_SETUP_CXX_COMPILER_BACKEND
])
# _OPAL_SETUP_CXX_COMPILER()
# --------------------------
# Setup the CXX compiler
AC_DEFUN([_OPAL_SETUP_CXX_COMPILER],[
OPAL_VAR_SCOPE_PUSH(opal_cxx_compiler_works)
# Must REQUIRE the PROG_CXX macro and not call it directly here for
# reasons well-described in the AC2.64 (and beyond) docs.
AC_REQUIRE([OPAL_PROG_CXX])
BASECXX="`basename $CXX`"
AS_IF([test "x$CXX" = "x"], [CXX=none])
set dummy $CXX
opal_cxx_argv0=[$]2
OPAL_WHICH([$opal_cxx_argv0], [OPAL_CXX_ABSOLUTE])
AS_IF([test "x$OPAL_CXX_ABSOLUTE" = "x"], [OPAL_CXX_ABSOLUTE=none])
AC_DEFINE_UNQUOTED(OPAL_CXX, "$CXX", [OPAL underlying C++ compiler])
AC_SUBST(OPAL_CXX_ABSOLUTE)
OPAL_VAR_SCOPE_POP
])
# _OPAL_SETUP_CXX_COMPILER_BACKEND()
# ----------------------------------
# Back end of _OPAL_SETUP_CXX_COMPILER()
AC_DEFUN([_OPAL_SETUP_CXX_COMPILER_BACKEND],[
# Do we want code coverage
if test "$WANT_COVERAGE" = "1"; then
if test "$opal_cxx_vendor" = "gnu" ; then
AC_MSG_WARN([$OPAL_COVERAGE_FLAGS has been added to CFLAGS (--enable-coverage)])
WANT_DEBUG=1
CXXFLAGS="${CXXFLAGS} $OPAL_COVERAGE_FLAGS"
OPAL_WRAPPER_FLAGS_ADD([CXXFLAGS], [$OPAL_COVERAGE_FLAGS])
else
AC_MSG_WARN([Code coverage functionality is currently available only with GCC suite])
AC_MSG_ERROR([Configure: cannot continue])
fi
fi
# Do we want debugging?
if test "$WANT_DEBUG" = "1" && test "$enable_debug_symbols" != "no" ; then
CXXFLAGS="$CXXFLAGS -g"
OPAL_FLAGS_UNIQ(CXXFLAGS)
AC_MSG_WARN([-g has been added to CXXFLAGS (--enable-debug)])
fi
# These flags are generally g++-specific; even the g++-impersonating
# compilers won't accept them.
OPAL_CXXFLAGS_BEFORE_PICKY="$CXXFLAGS"
if test "$WANT_PICKY_COMPILER" = 1 && test "$opal_cxx_vendor" = "gnu"; then
add="-Wall -Wundef -Wno-long-long"
# see if -Wno-long-double works...
AC_LANG_PUSH(C++)
CXXFLAGS_orig="$CXXFLAGS"
CXXFLAGS="$CXXFLAGS $add -Wno-long-double -fstrict-prototype"
AC_CACHE_CHECK([if $CXX supports -Wno-long-double],
[opal_cv_cxx_wno_long_double],
[AC_TRY_COMPILE([], [],
[
dnl So -Wno-long-double did not produce any errors...
dnl We will try to extract a warning regarding
dnl unrecognized or ignored options
AC_TRY_COMPILE([], [long double test;],
[
opal_cv_cxx_wno_long_double="yes"
if test -s conftest.err ; then
dnl Yes, it should be "ignor", in order to catch ignoring and ignore
for i in unknown invalid ignor unrecognized ; do
$GREP -iq $i conftest.err
if test "$?" = "0" ; then
opal_cv_cxx_wno_long_double="no"
break;
fi
done
fi
],
[opal_cv_cxx_wno_long_double="no"])],
[opal_cv_cxx_wno_long_double="no"])
])
CXXFLAGS="$CXXFLAGS_orig"
AC_LANG_POP(C++)
if test "$opal_cv_cxx_wno_long_double" = "yes" ; then
add="$add -Wno-long-double"
fi
CXXFLAGS="$CXXFLAGS $add"
OPAL_FLAGS_UNIQ(CXXFLAGS)
if test "$add" != "" ; then
AC_MSG_WARN([$add has been added to CXXFLAGS (--enable-picky)])
fi
unset add
fi
# See if this version of g++ allows -finline-functions
if test "$GXX" = "yes"; then
CXXFLAGS_orig="$CXXFLAGS"
CXXFLAGS="$CXXFLAGS -finline-functions"
add=
AC_LANG_PUSH(C++)
AC_CACHE_CHECK([if $CXX supports -finline-functions],
[opal_cv_cxx_finline_functions],
[AC_TRY_COMPILE([], [],
[opal_cv_cxx_finline_functions="yes"],
[opal_cv_cxx_finline_functions="no"])])
AC_LANG_POP(C++)
if test "$opal_cv_cxx_finline_functions" = "yes" ; then
add=" -finline-functions"
fi
CXXFLAGS="$CXXFLAGS_orig$add"
OPAL_FLAGS_UNIQ(CXXFLAGS)
if test "$add" != "" ; then
AC_MSG_WARN([$add has been added to CXXFLAGS])
fi
unset add
fi
# Make sure we can link with the C compiler
if test "$opal_cv_cxx_compiler_vendor" != "microsoft"; then
OPAL_LANG_LINK_WITH_C([C++], [],
[cat <<EOF >&2
**********************************************************************
* It appears that your C++ compiler is unable to link against object
* files created by your C compiler. This generally indicates either
* a conflict between the options specified in CFLAGS and CXXFLAGS
* or a problem with the local compiler installation. More
* information (including exactly what command was given to the
* compilers and what error resulted when the commands were executed) is
* available in the config.log file in this directory.
**********************************************************************
EOF
AC_MSG_ERROR([C and C++ compilers are not link compatible. Can not continue.])])
fi
# If we are on HP-UX, ensure that we're using aCC
case "$host" in
*hpux*)
if test "$BASECXX" = "CC"; then
AC_MSG_WARN([*** You will probably have problems compiling the MPI 2])
AC_MSG_WARN([*** C++ bindings with the HP-UX CC compiler. You should])
AC_MSG_WARN([*** probably be using the aCC compiler. Re-run configure])
AC_MSG_WARN([*** with the environment variable "CXX=aCC".])
fi
;;
esac
# Note: gcc-impersonating compilers accept -O3
if test "$WANT_DEBUG" = "1"; then
OPTFLAGS=
else
if test "$GXX" = yes; then
OPTFLAGS="-O3"
else
OPTFLAGS="-O"
fi
fi
# config/opal_ensure_contains_optflags.m4
OPAL_ENSURE_CONTAINS_OPTFLAGS(["$CXXFLAGS"])
AC_MSG_CHECKING([for C++ optimization flags])
AC_MSG_RESULT([$co_result])
CXXFLAGS="$co_result"
# bool type size and alignment
AC_LANG_PUSH(C++)
AC_CHECK_SIZEOF(bool)
OPAL_C_GET_ALIGNMENT(bool, OPAL_ALIGNMENT_CXX_BOOL)
AC_LANG_POP(C++)
])

View file

@@ -51,12 +51,6 @@ EOF
dnl Print out the bindings if we are building OMPI
if test "$project_ompi_amc" = "true" ; then
if test x$enable_mpi_cxx = xyes ; then
echo "Build MPI C++ bindings (deprecated): yes"
else
echo "Build MPI C++ bindings (deprecated): no"
fi
if test $OMPI_BUILD_FORTRAN_BINDINGS = $OMPI_FORTRAN_MPIFH_BINDINGS ; then
echo "Build MPI Fortran bindings: mpif.h"
elif test $OMPI_BUILD_FORTRAN_BINDINGS = $OMPI_FORTRAN_USEMPI_BINDINGS ; then

View file

@@ -10,7 +10,7 @@
# University of Stuttgart. All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
# All rights reserved.
# Copyright (c) 2006-2019 Cisco Systems, Inc. All rights reserved
# Copyright (c) 2006-2020 Cisco Systems, Inc. All rights reserved
# Copyright (c) 2006-2008 Sun Microsystems, Inc. All rights reserved.
# Copyright (c) 2006-2017 Los Alamos National Security, LLC. All rights
# reserved.
@@ -133,7 +133,6 @@ OPAL_SAVE_VERSION([OPAL], [Open Portable Access Layer], [$srcdir/VERSION],
. $srcdir/VERSION
m4_ifdef([project_ompi],
[AC_SUBST(libmpi_so_version)
AC_SUBST(libmpi_cxx_so_version)
AC_SUBST(libmpi_mpifh_so_version)
AC_SUBST(libmpi_usempi_tkr_so_version)
AC_SUBST(libmpi_usempi_ignore_tkr_so_version)
@@ -537,18 +536,11 @@ OPAL_CHECK_OFFSETOF
# C++ compiler characteristics
##################################
# We don't need C++ unless we're building Open MPI; OPAL does
# not use C++ at all. The OPAL macro name appears to be a bit of a
# misnomer; I'm not sure why it was split into a second macro and put
# into OPAL...? All it does is setup the C++ compiler (the OMPI macro
# sets up the C++ MPI bindings, etc.). Perhaps it was moved to OPAL
# just on the rationale that all compiler setup should be done in
# OPAL...? Shrug.
m4_ifdef([project_ompi], [OPAL_SETUP_CXX
OMPI_SETUP_CXX])
# Used in Makefile.ompi-rules
AM_CONDITIONAL(MAN_PAGE_BUILD_MPI_CXX_BINDINGS,
[test "$WANT_MPI_CXX_SUPPORT" = 1])
# We don't need C++ unless we're building Open MPI, because Open MPI
# supports an "mpicxx" wrapper compiler (there is no C++ code in Open
# MPI -- the MPI C++ bindings were removed in Open MPI v5.0 -- so we
# don't need a C++ compiler for compiling Open MPI itself).
m4_ifdef([project_ompi], [OMPI_SETUP_CXX])
##################################
# Only after setting up both
@@ -960,8 +952,6 @@ OPAL_CONFIG_THREADS
CFLAGS="$CFLAGS $THREAD_CFLAGS"
CPPFLAGS="$CPPFLAGS $THREAD_CPPFLAGS"
CXXFLAGS="$CXXFLAGS $THREAD_CXXFLAGS"
CXXCPPFLAGS="$CXXCPPFLAGS $THREAD_CXXCPPFLAGS"
LDFLAGS="$LDFLAGS $THREAD_LDFLAGS"
LIBS="$LIBS $THREAD_LIBS"
@@ -1212,9 +1202,8 @@ fi
# compilers to "no" that we don't want. Libtool's m4 configry will
# interpret this as "I won't be using this language; don't bother
# setting it up." Note that we do this only for Fortran; we *don't*
# do this for C++, because even if we're not building the MPI C++
# bindings, we *do* still want to setup the mpicxx wrapper if we have
# a C++ compiler.
# do this for C++, because we *do* still want to setup the mpicxx wrapper
# if we have a C++ compiler.
AS_IF([test "$OMPI_TRY_FORTRAN_BINDINGS" = "$OMPI_FORTRAN_NO_BINDINGS"],[F77=no FC=no])
LT_INIT([dlopen win32-dll])
@@ -1278,23 +1267,17 @@ if test "$OMPI_TOP_BUILDDIR" != "$OMPI_TOP_SRCDIR"; then
# variables, lest the $(foo) names try to get evaluated here.
# Yuck!
CPPFLAGS='-I$(top_srcdir) -I$(top_builddir) -I$(top_srcdir)/opal/include m4_ifdef([project_ompi], [-I$(top_srcdir)/ompi/include]) m4_ifdef([project_oshmem], [-I$(top_srcdir)/oshmem/include])'" $CPPFLAGS"
# C++ is only relevant if we're building OMPI
m4_ifdef([project_ompi], [CXXCPPFLAGS='-I$(top_srcdir) -I$(top_builddir) -I$(top_srcdir)/opal/include -I$(top_srcdir)/ompi/include'" $CXXCPPFLAGS"])
else
CPPFLAGS='-I$(top_srcdir)'" $CPPFLAGS"
# C++ is only relevant if we're building OMPI
m4_ifdef([project_ompi], [CXXCPPFLAGS='-I$(top_srcdir)'" $CXXCPPFLAGS"])
fi
#
# Delayed the substitution of CFLAGS and CXXFLAGS until now because
# Delayed the substitution of CFLAGS and friends until now because
# they may have been modified throughout the course of this script.
#
AC_SUBST(CFLAGS)
AC_SUBST(CPPFLAGS)
AC_SUBST(CXXFLAGS)
AC_SUBST(CXXCPPFLAGS)
m4_ifdef([project_ompi], [AC_SUBST(FFLAGS)
AC_SUBST(FCFLAGS)

View file

@@ -10,7 +10,7 @@
# Copyright (c) 2004-2005 The Regents of the University of California.
# All rights reserved.
# Copyright (c) 2006-2007 Sun Microsystems, Inc. All rights reserved.
# Copyright (c) 2011-2016 Cisco Systems, Inc. All rights reserved.
# Copyright (c) 2011-2020 Cisco Systems, Inc. All rights reserved
# Copyright (c) 2012 Los Alamos National Security, Inc. All rights reserved.
# Copyright (c) 2013 Mellanox Technologies, Inc. All rights reserved.
# Copyright (c) 2017-2018 Research Organization for Information Science
@@ -25,7 +25,6 @@
# Use the Open MPI-provided wrapper compilers.
MPICC = mpicc
MPICXX = mpic++
MPIFC = mpifort
MPIJAVAC = mpijavac
SHMEMCC = shmemcc
@@ -46,7 +45,6 @@ FCFLAGS += -g
EXAMPLES = \
hello_c \
hello_cxx \
hello_mpifh \
hello_usempi \
hello_usempif08 \
@@ -55,7 +53,6 @@ EXAMPLES = \
hello_oshmemfh \
Hello.class \
ring_c \
ring_cxx \
ring_mpifh \
ring_usempi \
ring_usempif08 \
@@ -86,9 +83,6 @@ all: hello_c ring_c connectivity_c spc_example
# MPI examples
mpi:
@ if ompi_info --parsable | grep -q bindings:cxx:yes >/dev/null; then \
$(MAKE) hello_cxx ring_cxx; \
fi
@ if ompi_info --parsable | grep -q bindings:mpif.h:yes >/dev/null; then \
$(MAKE) hello_mpifh ring_mpifh; \
fi
@@ -136,11 +130,6 @@ connectivity_c: connectivity_c.c
spc_example: spc_example.c
$(MPICC) $(CFLAGS) $(LDFLAGS) $? $(LDLIBS) -o $@
hello_cxx: hello_cxx.cc
$(MPICXX) $(CXXFLAGS) $(LDFLAGS) $? $(LDLIBS) -o $@
ring_cxx: ring_cxx.cc
$(MPICXX) $(CXXFLAGS) $(LDFLAGS) $? $(LDLIBS) -o $@
hello_mpifh: hello_mpifh.f
$(MPIFC) $(FCFLAGS) $(LDFLAGS) $? $(LDLIBS) -o $@
ring_mpifh: ring_mpifh.f

View file

@@ -10,7 +10,7 @@
# University of Stuttgart. All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
# All rights reserved.
# Copyright (c) 2006-2012 Cisco Systems, Inc. All rights reserved.
# Copyright (c) 2006-2020 Cisco Systems, Inc. All rights reserved
# Copyright (c) 2007 Sun Microsystems, Inc. All rights reserved.
# Copyright (c) 2012 Los Alamos National Security, Inc. All rights reserved.
# Copyright (c) 2013 Mellanox Technologies, Inc. All rights reserved.
@@ -36,7 +36,6 @@ EXTRA_DIST += \
examples/README \
examples/Makefile \
examples/hello_c.c \
examples/hello_cxx.cc \
examples/hello_mpifh.f \
examples/hello_usempi.f90 \
examples/hello_usempif08.f90 \
@@ -44,7 +43,6 @@ EXTRA_DIST += \
examples/hello_oshmem_cxx.cc \
examples/hello_oshmemfh.f90 \
examples/ring_c.c \
examples/ring_cxx.cc \
examples/ring_mpifh.f \
examples/ring_usempi.f90 \
examples/ring_usempif08.f90 \

View file

@@ -1,34 +0,0 @@
//
// Copyright (c) 2004-2006 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2006 Cisco Systems, Inc. All rights reserved.
//
// Sample MPI "hello world" application in C++
//
// NOTE: The MPI C++ bindings were deprecated in MPI-2.2 and removed
// from the standard in MPI-3. Open MPI still provides C++ MPI
// bindings, but they are no longer built by default (and may be
// removed in a future version of Open MPI). You must
// --enable-mpi-cxx when configuring Open MPI to enable the MPI C++
// bindings.
//
#include "mpi.h"
#include <iostream>
int main(int argc, char **argv)
{
int rank, size, len;
char version[MPI_MAX_LIBRARY_VERSION_STRING];
MPI::Init();
rank = MPI::COMM_WORLD.Get_rank();
size = MPI::COMM_WORLD.Get_size();
MPI_Get_library_version(version, &len);
std::cout << "Hello, world! I am " << rank << " of " << size
<< "(" << version << ", " << len << ")" << std::endl;
MPI::Finalize();
return 0;
}
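For users migrating off the removed bindings, here is a minimal sketch of the C-bindings equivalent of the deleted hello world program (it parallels the examples/hello_c.c program that remains in the examples directory; every call is standard MPI C API):

#include "mpi.h"
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char version[MPI_MAX_LIBRARY_VERSION_STRING];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_library_version(version, &len);
    printf("Hello, world! I am %d of %d (%s, %d)\n",
           rank, size, version, len);
    MPI_Finalize();
    return 0;
}

Note that the mpic++ wrapper compiler is still installed, so existing C++ applications can keep their build recipes and simply call the C API directly.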

View file

@@ -1,85 +0,0 @@
//
// Copyright (c) 2004-2006 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2006 Cisco Systems, Inc. All rights reserved.
//
// Simple ring test program in C++.
//
// NOTE: The MPI C++ bindings were deprecated in MPI-2.2 and removed
// from the standard in MPI-3. Open MPI still provides C++ MPI
// bindings, but they are no longer built by default (and may be
// removed in a future version of Open MPI). You must
// --enable-mpi-cxx when configuring Open MPI to enable the MPI C++
// bindings.
//
#include "mpi.h"
#include <iostream>
int main(int argc, char *argv[])
{
int rank, size, next, prev, message, tag = 201;
// Start up MPI
MPI::Init();
rank = MPI::COMM_WORLD.Get_rank();
size = MPI::COMM_WORLD.Get_size();
// Calculate the rank of the next process in the ring. Use the
// modulus operator so that the last process "wraps around" to
// rank zero.
next = (rank + 1) % size;
prev = (rank + size - 1) % size;
// If we are the "master" process (i.e., MPI_COMM_WORLD rank 0),
// put the number of times to go around the ring in the message.
if (0 == rank) {
message = 10;
std::cout << "Process 0 sending " << message << " to " << next
<< ", tag " << tag << " (" << size << " processes in ring)"
<< std::endl;
MPI::COMM_WORLD.Send(&message, 1, MPI::INT, next, tag);
std::cout << "Process 0 sent to " << next << std::endl;
}
// Pass the message around the ring. The exit mechanism works as
// follows: the message (a positive integer) is passed around the
// ring. Each time it passes rank 0, it is decremented. When
// each process receives a message containing a 0 value, it
// passes the message on to the next process and then quits. By
// passing the 0 message first, every process gets the 0 message
// and can quit normally.
while (1) {
MPI::COMM_WORLD.Recv(&message, 1, MPI::INT, prev, tag);
if (0 == rank) {
--message;
std::cout << "Process 0 decremented value: " << message
<< std::endl;
}
MPI::COMM_WORLD.Send(&message, 1, MPI::INT, next, tag);
if (0 == message) {
std::cout << "Process " << rank << " exiting" << std::endl;
break;
}
}
// The last process does one extra send to process 0, which needs
// to be received before the program can exit
if (0 == rank) {
MPI::COMM_WORLD.Recv(&message, 1, MPI::INT, prev, tag);
}
// All done
MPI::Finalize();
return 0;
}
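The ring program translates just as mechanically. Below is a minimal C sketch of the same algorithm (modeled on the examples/ring_c.c program that remains in the tree), implementing the decrement-and-forward exit scheme described in the comments above:

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, next, prev, message, tag = 201;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Ranks of the neighbors in the ring */
    next = (rank + 1) % size;
    prev = (rank + size - 1) % size;

    /* Rank 0 injects the lap counter into the ring */
    if (0 == rank) {
        message = 10;
        MPI_Send(&message, 1, MPI_INT, next, tag, MPI_COMM_WORLD);
    }

    /* Forward the message; rank 0 decrements it once per lap.
       Forwarding the 0 before exiting lets every rank see it. */
    while (1) {
        MPI_Recv(&message, 1, MPI_INT, prev, tag, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        if (0 == rank) {
            --message;
            printf("Process 0 decremented value: %d\n", message);
        }
        MPI_Send(&message, 1, MPI_INT, next, tag, MPI_COMM_WORLD);
        if (0 == message) {
            break;
        }
    }

    /* Absorb the final send so rank 0 can exit cleanly */
    if (0 == rank) {
        MPI_Recv(&message, 1, MPI_INT, prev, tag, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}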

View file

@@ -43,10 +43,10 @@ else
mpi_fortran_base_lib =
endif
# Note that the ordering of "." in SUBDIRS is important: the C++,
# Fortran mpif.h, and use mpi/use mpi_f08 bindings are all in
# standalone .la files that depend on libmpi.la. So we must fully
# build libmpi.la first.
# Note that the ordering of "." in SUBDIRS is important: the Fortran
# mpif.h, and use mpi/use mpi_f08 bindings are all in standalone .la
# files that depend on libmpi.la. So we must fully build libmpi.la
# first.
# NOTE: A handful of files in mpi/fortran/base must be included in
# libmpi.la. But we wanted to keep all the Fortran sources together
@@ -66,8 +66,8 @@ endif
# unfortunately).
# The end of the result is that libmpi.la -- including a few sources
# from mpi/fortran/base -- is fully built before the C++, mpif.h, and
# use mpi/use mpi_f08 bindings are built. Therefore, the C++, mpif.h
# from mpi/fortran/base -- is fully built before the mpif.h, and
# use mpi/use mpi_f08 bindings are built. Therefore, the mpif.h
# and use mpi/use mpi_f08 bindings libraries can all link against
# libmpi.la.
@@ -86,7 +86,6 @@ SUBDIRS = \
$(MCA_ompi_FRAMEWORKS_SUBDIRS) \
$(MCA_ompi_FRAMEWORK_COMPONENT_STATIC_SUBDIRS) \
. \
mpi/cxx \
$(OMPI_MPIEXT_MPIFH_DIRS) \
mpi/fortran/mpif-h \
$(OMPI_MPIEXT_USEMPI_DIR) \
@@ -118,7 +117,6 @@ DIST_SUBDIRS = \
etc \
mpi/c \
mpi/tool \
mpi/cxx \
mpi/fortran/base \
mpi/fortran/mpif-h \
mpi/fortran/use-mpi-tkr \

View file

@@ -167,15 +167,6 @@
/* type to use for ptrdiff_t, if it does not exist, set to ptrdiff_t if it does exist */
#undef ptrdiff_t
/* Whether we want MPI cxx support or not */
#undef OMPI_BUILD_CXX_BINDINGS
/* do we want to try to work around C++ bindings SEEK_* issue? */
#undef OMPI_WANT_MPI_CXX_SEEK
/* Whether a const_cast on a 2-d array will work with the C++ compiler */
#undef OMPI_CXX_SUPPORTS_2D_CONST_CAST
/* Whether OMPI was built with parameter checking or not */
#undef OMPI_PARAM_CHECK
@@ -184,9 +175,6 @@
#undef OMPI_WANT_MPI_INTERFACE_WARNING
#endif
/* Whether or not we have compiled with C++ exceptions support */
#undef OMPI_HAVE_CXX_EXCEPTION_SUPPORT
/* Major, minor, and release version of Open MPI */
#undef OMPI_MAJOR_VERSION
#undef OMPI_MINOR_VERSION
@@ -254,9 +242,7 @@
* only relevant if we're not building Open MPI (i.e., we're compiling an
* MPI application).
*/
#if !(OMPI_BUILDING || \
(defined(OMPI_BUILDING_CXX_BINDINGS_LIBRARY) && \
OMPI_BUILDING_CXX_BINDINGS_LIBRARY))
#if !OMPI_BUILDING
/*
* Figure out which compiler is being invoked (in order to compare if
@@ -2832,18 +2818,4 @@ OMPI_DECLSPEC int PMPI_Type_ub(MPI_Datatype mtype, MPI_Aint *ub)
}
#endif
/*
* Conditional MPI 2 C++ bindings support. Include if:
* - The user does not explicitly request us to skip it (when a C++ compiler
* is used to compile C code).
* - We want C++ bindings support
* - We are not building OMPI itself
* - We are using a C++ compiler
*/
#if !defined(OMPI_SKIP_MPICXX) && OMPI_BUILD_CXX_BINDINGS && !OMPI_BUILDING
#if defined(c_plusplus) || defined(__cplusplus)
#include "openmpi/ompi/mpi/cxx/mpicxx.h"
#endif
#endif
#endif /* OMPI_MPI_H */

View file

@@ -1,85 +0,0 @@
# -*- makefile -*-
#
# Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
# University Research and Technology
# Corporation. All rights reserved.
# Copyright (c) 2004-2005 The University of Tennessee and The University
# of Tennessee Research Foundation. All rights
# reserved.
# Copyright (c) 2004-2009 High Performance Computing Center Stuttgart,
# University of Stuttgart. All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
# All rights reserved.
# Copyright (c) 2007-2012 Cisco Systems, Inc. All rights reserved.
# Copyright (c) 2016 IBM Corporation. All rights reserved.
# Copyright (c) 2017 Research Organization for Information Science
# and Technology (RIST). All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow
#
# $HEADER$
#
# Need the first so that we can get the path names correct inside the
# MPI C++ library. The second is necessary so that mpi.h doesn't
# include mpicxx.h through the incorrect pathname in any of the C++
# bindings .c files. Just use the define for this purpose from user
# code.
AM_CPPFLAGS = -DOMPI_BUILDING_CXX_BINDINGS_LIBRARY=1 -DOMPI_SKIP_MPICXX=1
if OMPI_BUILD_MPI_CXX_BINDINGS
mpi_lib = lib@OMPI_LIBMPI_NAME@_cxx.la
lib_LTLIBRARIES = lib@OMPI_LIBMPI_NAME@_cxx.la
lib@OMPI_LIBMPI_NAME@_cxx_la_SOURCES = \
mpicxx.cc \
intercepts.cc \
comm.cc \
datatype.cc \
file.cc \
win.cc \
cxx_glue.c
lib@OMPI_LIBMPI_NAME@_cxx_la_LIBADD = $(top_builddir)/ompi/lib@OMPI_LIBMPI_NAME@.la
lib@OMPI_LIBMPI_NAME@_cxx_la_LDFLAGS = -version-info $(libmpi_cxx_so_version)
headers = \
mpicxx.h \
constants.h \
file.h \
functions.h \
datatype.h \
exception.h \
op.h \
status.h \
request.h \
group.h \
comm.h \
errhandler.h \
intracomm.h \
info.h \
win.h \
topology.h \
intercomm.h \
datatype_inln.h \
file_inln.h \
functions_inln.h \
request_inln.h \
comm_inln.h \
intracomm_inln.h \
info_inln.h \
win_inln.h \
topology_inln.h \
intercomm_inln.h \
group_inln.h \
op_inln.h \
errhandler_inln.h \
status_inln.h \
cxx_glue.h
ompidir = $(ompiincludedir)/ompi/mpi/cxx
ompi_HEADERS = \
$(headers)
endif

View file

@@ -1,135 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2007-2008 Cisco Systems, Inc. All rights reserved.
// Copyright (c) 2016 Los Alamos National Security, LLC. All rights
// reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
// do not include ompi_config.h because it kills the free/malloc defines
#include "mpi.h"
#include "ompi/constants.h"
#include "ompi/mpi/cxx/mpicxx.h"
#include "cxx_glue.h"
//
// These functions are all not inlined because they need to use locks to
// protect the handle maps and it would be bad to have those in headers
// because that would require that we always install the lock headers.
// Instead we take the function call hit (we're locking - who cares about
// a function call. And these aren't exactly the performance critical
// functions) and make everyone's life easier.
//
// construction
MPI::Comm::Comm()
{
}
// copy
MPI::Comm::Comm(const Comm_Null& data) : Comm_Null(data)
{
}
// This function needs some internal OMPI types, so it's not inlined
MPI::Errhandler
MPI::Comm::Create_errhandler(MPI::Comm::_MPI2CPP_ERRHANDLERFN_* function)
{
return ompi_cxx_errhandler_create_comm ((ompi_cxx_dummy_fn_t *) function);
}
//JGS I took the const out because it causes problems when trying to
//call this function with the predefined NULL_COPY_FN etc.
int
MPI::Comm::do_create_keyval(MPI_Comm_copy_attr_function* c_copy_fn,
MPI_Comm_delete_attr_function* c_delete_fn,
Copy_attr_function* cxx_copy_fn,
Delete_attr_function* cxx_delete_fn,
void* extra_state, int &keyval)
{
int ret, count = 0;
keyval_intercept_data_t *cxx_extra_state;
// If both the callbacks are C, then do the simple thing -- no
// need for all the C++ machinery.
if (NULL != c_copy_fn && NULL != c_delete_fn) {
ret = ompi_cxx_attr_create_keyval_comm (c_copy_fn, c_delete_fn, &keyval,
extra_state, 0, NULL);
if (MPI_SUCCESS != ret) {
return ompi_cxx_errhandler_invoke_comm(MPI_COMM_WORLD, ret,
"MPI::Comm::Create_keyval");
}
}
// If either callback is C++, then we have to use the C++
// callbacks for both, because we have to generate a new
// extra_state. And since we only get one extra_state (i.e., we
// don't get one extra_state for the copy callback and another
// extra_state for the delete callback), we have to use the C++
// callbacks for both (and therefore translate the C++-special
// extra_state into the user's original extra_state). Ensure to
// malloc() the struct here (vs new) so that it can be free()'ed
// by the C attribute base.
cxx_extra_state = (keyval_intercept_data_t*)
malloc(sizeof(keyval_intercept_data_t));
if (NULL == cxx_extra_state) {
return ompi_cxx_errhandler_invoke_comm (MPI_COMM_WORLD, MPI_ERR_NO_MEM,
"MPI::Comm::Create_keyval");
}
cxx_extra_state->c_copy_fn = c_copy_fn;
cxx_extra_state->cxx_copy_fn = cxx_copy_fn;
cxx_extra_state->c_delete_fn = c_delete_fn;
cxx_extra_state->cxx_delete_fn = cxx_delete_fn;
cxx_extra_state->extra_state = extra_state;
// Error check. Must have exactly 2 non-NULL function pointers.
if (NULL != c_copy_fn) {
++count;
}
if (NULL != c_delete_fn) {
++count;
}
if (NULL != cxx_copy_fn) {
++count;
}
if (NULL != cxx_delete_fn) {
++count;
}
if (2 != count) {
free(cxx_extra_state);
return ompi_cxx_errhandler_invoke_comm (MPI_COMM_WORLD, MPI_ERR_ARG,
"MPI::Comm::Create_keyval");
}
// We do not call MPI_Comm_create_keyval() here because we need to
// pass in the cxx_extra_state to the backend keyval creation so
// that when the keyval is destroyed (i.e., when its refcount goes
// to 0), the cxx_extra_state is free()'ed.
ret = ompi_cxx_attr_create_keyval_comm ((MPI_Comm_copy_attr_function *) ompi_mpi_cxx_comm_copy_attr_intercept,
ompi_mpi_cxx_comm_delete_attr_intercept,
&keyval, cxx_extra_state, 0, cxx_extra_state);
if (OMPI_SUCCESS != ret) {
return ompi_cxx_errhandler_invoke_comm (MPI_COMM_WORLD, ret,
"MPI::Comm::Create_keyval");
}
return MPI_SUCCESS;
}
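For orientation, a hedged sketch of how an intercept consumes the keyval_intercept_data_t registered above. The real dispatcher is ompi_mpi_cxx_comm_copy_attr_intercept, which lives elsewhere in the bindings; example_copy_intercept below is a hypothetical simplification (it assumes an intracommunicator, whereas the real code selects the proper Comm subclass):

// Hypothetical illustration only -- not part of the code removed here.
static int example_copy_intercept(MPI_Comm comm, int keyval, void *extra_state,
                                  void *attr_in, void *attr_out, int *flag)
{
    MPI::Comm::keyval_intercept_data_t *d =
        (MPI::Comm::keyval_intercept_data_t *) extra_state;
    if (NULL != d->cxx_copy_fn) {
        // C++ flavor: wrap the C handle, convert int <-> bool, and pass
        // the user's original extra_state (not the glue struct).
        MPI::Intracomm c(comm);
        bool b = false;
        int ret = d->cxx_copy_fn(c, keyval, d->extra_state,
                                 attr_in, attr_out, b);
        *flag = b ? 1 : 0;
        return ret;
    }
    // C flavor: forward untouched.
    return d->c_copy_fn(comm, keyval, d->extra_state, attr_in, attr_out, flag);
}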

View file

@ -1,465 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006-2009 Cisco Systems, Inc. All rights reserved.
// Copyright (c) 2011 FUJITSU LIMITED. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
class Comm_Null {
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// friend class PMPI::Comm_Null;
#endif
public:
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// construction
inline Comm_Null() { }
// copy
inline Comm_Null(const Comm_Null& data) : pmpi_comm(data.pmpi_comm) { }
// inter-language operability
inline Comm_Null(MPI_Comm data) : pmpi_comm(data) { }
inline Comm_Null(const PMPI::Comm_Null& data) : pmpi_comm(data) { }
// destruction
virtual inline ~Comm_Null() { }
inline Comm_Null& operator=(const Comm_Null& data) {
pmpi_comm = data.pmpi_comm;
return *this;
}
// comparison
inline bool operator==(const Comm_Null& data) const {
return (bool) (pmpi_comm == data.pmpi_comm); }
inline bool operator!=(const Comm_Null& data) const {
return (bool) (pmpi_comm != data.pmpi_comm);}
// inter-language operability (conversion operators)
inline operator MPI_Comm() const { return pmpi_comm; }
// inline operator MPI_Comm*() /*const JGS*/ { return pmpi_comm; }
inline operator const PMPI::Comm_Null&() const { return pmpi_comm; }
#else
// construction
inline Comm_Null() : mpi_comm(MPI_COMM_NULL) { }
// copy
inline Comm_Null(const Comm_Null& data) : mpi_comm(data.mpi_comm) { }
// inter-language operability
inline Comm_Null(MPI_Comm data) : mpi_comm(data) { }
// destruction
virtual inline ~Comm_Null() { }
// comparison
// JGS make sure this is right (in other classes too)
inline bool operator==(const Comm_Null& data) const {
return (bool) (mpi_comm == data.mpi_comm); }
inline bool operator!=(const Comm_Null& data) const {
return (bool) !(*this == data);}
// inter-language operability (conversion operators)
inline operator MPI_Comm() const { return mpi_comm; }
#endif
protected:
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
PMPI::Comm_Null pmpi_comm;
#else
MPI_Comm mpi_comm;
#endif
};
class Comm : public Comm_Null {
public:
typedef void Errhandler_function(Comm&, int*, ...);
typedef Errhandler_function Errhandler_fn
__mpi_interface_deprecated__("MPI::Comm::Errhandler_fn was deprecated in MPI-2.2; use MPI::Comm::Errhandler_function instead");
typedef int Copy_attr_function(const Comm& oldcomm, int comm_keyval,
void* extra_state, void* attribute_val_in,
void* attribute_val_out,
bool& flag);
typedef int Delete_attr_function(Comm& comm, int comm_keyval,
void* attribute_val,
void* extra_state);
#if !0 /* OMPI_ENABLE_MPI_PROFILING */
#define _MPI2CPP_ERRHANDLERFN_ Errhandler_function
#define _MPI2CPP_COPYATTRFN_ Copy_attr_function
#define _MPI2CPP_DELETEATTRFN_ Delete_attr_function
#endif
// construction
Comm();
// copy
Comm(const Comm_Null& data);
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
Comm(const Comm& data) :
Comm_Null(data),
pmpi_comm((const PMPI::Comm&) data) { }
// inter-language operability
Comm(MPI_Comm data) : Comm_Null(data), pmpi_comm(data) { }
Comm(const PMPI::Comm& data) :
Comm_Null((const PMPI::Comm_Null&)data),
pmpi_comm(data) { }
operator const PMPI::Comm&() const { return pmpi_comm; }
// assignment
Comm& operator=(const Comm& data) {
this->Comm_Null::operator=(data);
pmpi_comm = data.pmpi_comm;
return *this;
}
Comm& operator=(const Comm_Null& data) {
this->Comm_Null::operator=(data);
MPI_Comm tmp = data;
pmpi_comm = tmp;
return *this;
}
// inter-language operability
Comm& operator=(const MPI_Comm& data) {
this->Comm_Null::operator=(data);
pmpi_comm = data;
return *this;
}
#else
Comm(const Comm& data) : Comm_Null(data.mpi_comm) { }
// inter-language operability
Comm(MPI_Comm data) : Comm_Null(data) { }
#endif
//
// Point-to-Point
//
virtual void Send(const void *buf, int count,
const Datatype & datatype, int dest, int tag) const;
virtual void Recv(void *buf, int count, const Datatype & datatype,
int source, int tag, Status & status) const;
virtual void Recv(void *buf, int count, const Datatype & datatype,
int source, int tag) const;
virtual void Bsend(const void *buf, int count,
const Datatype & datatype, int dest, int tag) const;
virtual void Ssend(const void *buf, int count,
const Datatype & datatype, int dest, int tag) const ;
virtual void Rsend(const void *buf, int count,
const Datatype & datatype, int dest, int tag) const;
virtual Request Isend(const void *buf, int count,
const Datatype & datatype, int dest, int tag) const;
virtual Request Ibsend(const void *buf, int count, const
Datatype & datatype, int dest, int tag) const;
virtual Request Issend(const void *buf, int count,
const Datatype & datatype, int dest, int tag) const;
virtual Request Irsend(const void *buf, int count,
const Datatype & datatype, int dest, int tag) const;
virtual Request Irecv(void *buf, int count,
const Datatype & datatype, int source, int tag) const;
virtual bool Iprobe(int source, int tag, Status & status) const;
virtual bool Iprobe(int source, int tag) const;
virtual void Probe(int source, int tag, Status & status) const;
virtual void Probe(int source, int tag) const;
virtual Prequest Send_init(const void *buf, int count,
const Datatype & datatype, int dest,
int tag) const;
virtual Prequest Bsend_init(const void *buf, int count,
const Datatype & datatype, int dest,
int tag) const;
virtual Prequest Ssend_init(const void *buf, int count,
const Datatype & datatype, int dest,
int tag) const;
virtual Prequest Rsend_init(const void *buf, int count,
const Datatype & datatype, int dest,
int tag) const;
virtual Prequest Recv_init(void *buf, int count,
const Datatype & datatype, int source,
int tag) const;
virtual void Sendrecv(const void *sendbuf, int sendcount,
const Datatype & sendtype, int dest, int sendtag,
void *recvbuf, int recvcount,
const Datatype & recvtype, int source,
int recvtag, Status & status) const;
virtual void Sendrecv(const void *sendbuf, int sendcount,
const Datatype & sendtype, int dest, int sendtag,
void *recvbuf, int recvcount,
const Datatype & recvtype, int source,
int recvtag) const;
virtual void Sendrecv_replace(void *buf, int count,
const Datatype & datatype, int dest,
int sendtag, int source,
int recvtag, Status & status) const;
virtual void Sendrecv_replace(void *buf, int count,
const Datatype & datatype, int dest,
int sendtag, int source,
int recvtag) const;
//
// Groups, Contexts, and Communicators
//
virtual Group Get_group() const;
virtual int Get_size() const;
virtual int Get_rank() const;
static int Compare(const Comm & comm1, const Comm & comm2);
virtual Comm& Clone() const = 0;
virtual void Free(void);
virtual bool Is_inter() const;
//
// Collective Communication
//
// Up in Comm because as of MPI-2, they are common to intracomm and
// intercomm -- with the exception of Scan and Exscan, which are not
// defined on intercomms.
//
virtual void
Barrier() const;
virtual void
Bcast(void *buffer, int count,
const Datatype& datatype, int root) const;
virtual void
Gather(const void *sendbuf, int sendcount,
const Datatype & sendtype,
void *recvbuf, int recvcount,
const Datatype & recvtype, int root) const;
virtual void
Gatherv(const void *sendbuf, int sendcount,
const Datatype & sendtype, void *recvbuf,
const int recvcounts[], const int displs[],
const Datatype & recvtype, int root) const;
virtual void
Scatter(const void *sendbuf, int sendcount,
const Datatype & sendtype,
void *recvbuf, int recvcount,
const Datatype & recvtype, int root) const;
virtual void
Scatterv(const void *sendbuf, const int sendcounts[],
const int displs[], const Datatype & sendtype,
void *recvbuf, int recvcount,
const Datatype & recvtype, int root) const;
virtual void
Allgather(const void *sendbuf, int sendcount,
const Datatype & sendtype, void *recvbuf,
int recvcount, const Datatype & recvtype) const;
virtual void
Allgatherv(const void *sendbuf, int sendcount,
const Datatype & sendtype, void *recvbuf,
const int recvcounts[], const int displs[],
const Datatype & recvtype) const;
virtual void
Alltoall(const void *sendbuf, int sendcount,
const Datatype & sendtype, void *recvbuf,
int recvcount, const Datatype & recvtype) const;
virtual void
Alltoallv(const void *sendbuf, const int sendcounts[],
const int sdispls[], const Datatype & sendtype,
void *recvbuf, const int recvcounts[],
const int rdispls[], const Datatype & recvtype) const;
virtual void
Alltoallw(const void *sendbuf, const int sendcounts[],
const int sdispls[], const Datatype sendtypes[],
void *recvbuf, const int recvcounts[],
const int rdispls[], const Datatype recvtypes[]) const;
virtual void
Reduce(const void *sendbuf, void *recvbuf, int count,
const Datatype & datatype, const Op & op,
int root) const;
virtual void
Allreduce(const void *sendbuf, void *recvbuf, int count,
const Datatype & datatype, const Op & op) const;
virtual void
Reduce_scatter(const void *sendbuf, void *recvbuf,
int recvcounts[],
const Datatype & datatype,
const Op & op) const;
//
// Process Creation
//
virtual void Disconnect();
static Intercomm Get_parent();
static Intercomm Join(const int fd);
//
// External Interfaces
//
virtual void Get_name(char * comm_name, int& resultlen) const;
virtual void Set_name(const char* comm_name);
//
// Process Topologies
//
virtual int Get_topology() const;
//
// Environmental Inquiry
//
virtual void Abort(int errorcode);
//
// Errhandler
//
static Errhandler Create_errhandler(Comm::Errhandler_function* function);
virtual void Set_errhandler(const Errhandler& errhandler);
virtual Errhandler Get_errhandler() const;
void Call_errhandler(int errorcode) const;
//
// Keys and Attributes
//
// Need 4 overloaded versions of this function because per the
// MPI-2 spec, you can mix-n-match the C predefined functions with
// C++ functions.
static int Create_keyval(Copy_attr_function* comm_copy_attr_fn,
Delete_attr_function* comm_delete_attr_fn,
void* extra_state);
static int Create_keyval(MPI_Comm_copy_attr_function* comm_copy_attr_fn,
MPI_Comm_delete_attr_function* comm_delete_attr_fn,
void* extra_state);
static int Create_keyval(Copy_attr_function* comm_copy_attr_fn,
MPI_Comm_delete_attr_function* comm_delete_attr_fn,
void* extra_state);
static int Create_keyval(MPI_Comm_copy_attr_function* comm_copy_attr_fn,
Delete_attr_function* comm_delete_attr_fn,
void* extra_state);
protected:
static int do_create_keyval(MPI_Comm_copy_attr_function* c_copy_fn,
MPI_Comm_delete_attr_function* c_delete_fn,
Copy_attr_function* cxx_copy_fn,
Delete_attr_function* cxx_delete_fn,
void* extra_state, int &keyval);
public:
static void Free_keyval(int& comm_keyval);
virtual void Set_attr(int comm_keyval, const void* attribute_val) const;
virtual bool Get_attr(int comm_keyval, void* attribute_val) const;
virtual void Delete_attr(int comm_keyval);
static int NULL_COPY_FN(const Comm& oldcomm, int comm_keyval,
void* extra_state, void* attribute_val_in,
void* attribute_val_out, bool& flag);
static int DUP_FN(const Comm& oldcomm, int comm_keyval,
void* extra_state, void* attribute_val_in,
void* attribute_val_out, bool& flag);
static int NULL_DELETE_FN(Comm& comm, int comm_keyval, void* attribute_val,
void* extra_state);
private:
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
PMPI::Comm pmpi_comm;
#endif
#if ! 0 /* OMPI_ENABLE_MPI_PROFILING */
public:
// Data that is passed through keyval create when C++ callback
// functions are used
struct keyval_intercept_data_t {
MPI_Comm_copy_attr_function *c_copy_fn;
MPI_Comm_delete_attr_function *c_delete_fn;
Copy_attr_function* cxx_copy_fn;
Delete_attr_function* cxx_delete_fn;
void *extra_state;
};
// Protect the global list from multiple thread access
static opal_mutex_t cxx_extra_states_lock;
#endif
};
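For context, a hedged user-level sketch of the mix-n-match these four Create_keyval overloads permit; my_cxx_copy and example are hypothetical user code, and MPI_COMM_NULL_DELETE_FN is the predefined C delete function:

// Hypothetical user code -- the copy and delete callbacks may independently
// be the C or the C++ flavor; each combination resolves to a different
// overload above.
int my_cxx_copy(const MPI::Comm &oldcomm, int keyval, void *extra,
                void *attr_in, void *attr_out, bool &flag)
{
    flag = false;   // do not propagate the attribute when the comm is dup'ed
    return MPI_SUCCESS;
}

void example(void)
{
    // C++ copy function paired with the predefined C delete function.
    int keyval = MPI::Comm::Create_keyval(my_cxx_copy,
                                          MPI_COMM_NULL_DELETE_FN,
                                          NULL);
    MPI::COMM_WORLD.Set_attr(keyval, NULL);
    MPI::Comm::Free_keyval(keyval);
}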

View file

@ -1,689 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2007-2016 Cisco Systems, Inc. All rights reserved.
// Copyright (c) 2011 FUJITSU LIMITED. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
//
// Point-to-Point
//
inline void
MPI::Comm::Send(const void *buf, int count,
const MPI::Datatype & datatype, int dest, int tag) const
{
(void)MPI_Send(const_cast<void *>(buf), count, datatype, dest, tag, mpi_comm);
}
inline void
MPI::Comm::Recv(void *buf, int count, const MPI::Datatype & datatype,
int source, int tag, MPI::Status & status) const
{
(void)MPI_Recv(buf, count, datatype, source, tag, mpi_comm, &status.mpi_status);
}
inline void
MPI::Comm::Recv(void *buf, int count, const MPI::Datatype & datatype,
int source, int tag) const
{
(void)MPI_Recv(buf, count, datatype, source,
tag, mpi_comm, MPI_STATUS_IGNORE);
}
inline void
MPI::Comm::Bsend(const void *buf, int count,
const MPI::Datatype & datatype, int dest, int tag) const
{
(void)MPI_Bsend(const_cast<void *>(buf), count, datatype,
dest, tag, mpi_comm);
}
inline void
MPI::Comm::Ssend(const void *buf, int count,
const MPI::Datatype & datatype, int dest, int tag) const
{
(void)MPI_Ssend(const_cast<void *>(buf), count, datatype, dest,
tag, mpi_comm);
}
inline void
MPI::Comm::Rsend(const void *buf, int count,
const MPI::Datatype & datatype, int dest, int tag) const
{
(void)MPI_Rsend(const_cast<void *>(buf), count, datatype,
dest, tag, mpi_comm);
}
inline MPI::Request
MPI::Comm::Isend(const void *buf, int count,
const MPI::Datatype & datatype, int dest, int tag) const
{
MPI_Request request;
(void)MPI_Isend(const_cast<void *>(buf), count, datatype,
dest, tag, mpi_comm, &request);
return request;
}
inline MPI::Request
MPI::Comm::Ibsend(const void *buf, int count,
const MPI::Datatype & datatype, int dest, int tag) const
{
MPI_Request request;
(void)MPI_Ibsend(const_cast<void *>(buf), count, datatype,
dest, tag, mpi_comm, &request);
return request;
}
inline MPI::Request
MPI::Comm::Issend(const void *buf, int count,
const MPI::Datatype & datatype, int dest, int tag) const
{
MPI_Request request;
(void)MPI_Issend(const_cast<void *>(buf), count, datatype,
dest, tag, mpi_comm, &request);
return request;
}
inline MPI::Request
MPI::Comm::Irsend(const void *buf, int count,
const MPI::Datatype & datatype, int dest, int tag) const
{
MPI_Request request;
(void)MPI_Irsend(const_cast<void *>(buf), count, datatype,
dest, tag, mpi_comm, &request);
return request;
}
inline MPI::Request
MPI::Comm::Irecv(void *buf, int count,
const MPI::Datatype & datatype, int source, int tag) const
{
MPI_Request request;
(void)MPI_Irecv(buf, count, datatype, source,
tag, mpi_comm, &request);
return request;
}
inline bool
MPI::Comm::Iprobe(int source, int tag, MPI::Status & status) const
{
int t;
(void)MPI_Iprobe(source, tag, mpi_comm, &t, &status.mpi_status);
return OPAL_INT_TO_BOOL(t);
}
inline bool
MPI::Comm::Iprobe(int source, int tag) const
{
int t;
(void)MPI_Iprobe(source, tag, mpi_comm, &t, MPI_STATUS_IGNORE);
return OPAL_INT_TO_BOOL(t);
}
inline void
MPI::Comm::Probe(int source, int tag, MPI::Status & status) const
{
(void)MPI_Probe(source, tag, mpi_comm, &status.mpi_status);
}
inline void
MPI::Comm::Probe(int source, int tag) const
{
(void)MPI_Probe(source, tag, mpi_comm, MPI_STATUS_IGNORE);
}
inline MPI::Prequest
MPI::Comm::Send_init(const void *buf, int count,
const MPI::Datatype & datatype, int dest, int tag) const
{
MPI_Request request;
(void)MPI_Send_init(const_cast<void *>(buf), count, datatype,
dest, tag, mpi_comm, &request);
return request;
}
inline MPI::Prequest
MPI::Comm::Bsend_init(const void *buf, int count,
const MPI::Datatype & datatype, int dest, int tag) const
{
MPI_Request request;
(void)MPI_Bsend_init(const_cast<void *>(buf), count, datatype,
dest, tag, mpi_comm, &request);
return request;
}
inline MPI::Prequest
MPI::Comm::Ssend_init(const void *buf, int count,
const MPI::Datatype & datatype, int dest, int tag) const
{
MPI_Request request;
(void)MPI_Ssend_init(const_cast<void *>(buf), count, datatype,
dest, tag, mpi_comm, &request);
return request;
}
inline MPI::Prequest
MPI::Comm::Rsend_init(const void *buf, int count,
const MPI::Datatype & datatype, int dest, int tag) const
{
MPI_Request request;
(void)MPI_Rsend_init(const_cast<void *>(buf), count, datatype,
dest, tag, mpi_comm, &request);
return request;
}
inline MPI::Prequest
MPI::Comm::Recv_init(void *buf, int count,
const MPI::Datatype & datatype, int source, int tag) const
{
MPI_Request request;
(void)MPI_Recv_init(buf, count, datatype, source,
tag, mpi_comm, &request);
return request;
}
inline void
MPI::Comm::Sendrecv(const void *sendbuf, int sendcount,
const MPI::Datatype & sendtype, int dest, int sendtag,
void *recvbuf, int recvcount,
const MPI::Datatype & recvtype, int source,
int recvtag, MPI::Status & status) const
{
(void)MPI_Sendrecv(const_cast<void *>(sendbuf), sendcount,
sendtype,
dest, sendtag, recvbuf, recvcount,
recvtype,
source, recvtag, mpi_comm, &status.mpi_status);
}
inline void
MPI::Comm::Sendrecv(const void *sendbuf, int sendcount,
const MPI::Datatype & sendtype, int dest, int sendtag,
void *recvbuf, int recvcount,
const MPI::Datatype & recvtype, int source,
int recvtag) const
{
(void)MPI_Sendrecv(const_cast<void *>(sendbuf), sendcount,
sendtype,
dest, sendtag, recvbuf, recvcount,
recvtype,
source, recvtag, mpi_comm, MPI_STATUS_IGNORE);
}
inline void
MPI::Comm::Sendrecv_replace(void *buf, int count,
const MPI::Datatype & datatype, int dest,
int sendtag, int source,
int recvtag, MPI::Status & status) const
{
(void)MPI_Sendrecv_replace(buf, count, datatype, dest,
sendtag, source, recvtag, mpi_comm,
&status.mpi_status);
}
inline void
MPI::Comm::Sendrecv_replace(void *buf, int count,
const MPI::Datatype & datatype, int dest,
int sendtag, int source,
int recvtag) const
{
(void)MPI_Sendrecv_replace(buf, count, datatype, dest,
sendtag, source, recvtag, mpi_comm,
MPI_STATUS_IGNORE);
}
//
// Groups, Contexts, and Communicators
//
inline MPI::Group
MPI::Comm::Get_group() const
{
MPI_Group group;
(void)MPI_Comm_group(mpi_comm, &group);
return group;
}
inline int
MPI::Comm::Get_size() const
{
int size;
(void)MPI_Comm_size (mpi_comm, &size);
return size;
}
inline int
MPI::Comm::Get_rank() const
{
int myrank;
(void)MPI_Comm_rank (mpi_comm, &myrank);
return myrank;
}
inline int
MPI::Comm::Compare(const MPI::Comm & comm1,
const MPI::Comm & comm2)
{
int result;
(void)MPI_Comm_compare(comm1, comm2, &result);
return result;
}
inline void
MPI::Comm::Free(void)
{
(void)MPI_Comm_free(&mpi_comm);
}
inline bool
MPI::Comm::Is_inter() const
{
int t;
(void)MPI_Comm_test_inter(mpi_comm, &t);
return OPAL_INT_TO_BOOL(t);
}
//
// Collective Communication
//
inline void
MPI::Comm::Barrier() const
{
(void)MPI_Barrier(mpi_comm);
}
inline void
MPI::Comm::Bcast(void *buffer, int count,
const MPI::Datatype& datatype, int root) const
{
(void)MPI_Bcast(buffer, count, datatype, root, mpi_comm);
}
inline void
MPI::Comm::Gather(const void *sendbuf, int sendcount,
const MPI::Datatype & sendtype,
void *recvbuf, int recvcount,
const MPI::Datatype & recvtype, int root) const
{
(void)MPI_Gather(const_cast<void *>(sendbuf), sendcount, sendtype,
recvbuf, recvcount, recvtype, root, mpi_comm);
}
inline void
MPI::Comm::Gatherv(const void *sendbuf, int sendcount,
const MPI::Datatype & sendtype, void *recvbuf,
const int recvcounts[], const int displs[],
const MPI::Datatype & recvtype, int root) const
{
(void)MPI_Gatherv(const_cast<void *>(sendbuf), sendcount, sendtype,
recvbuf, const_cast<int *>(recvcounts),
const_cast<int *>(displs),
recvtype, root, mpi_comm);
}
inline void
MPI::Comm::Scatter(const void *sendbuf, int sendcount,
const MPI::Datatype & sendtype,
void *recvbuf, int recvcount,
const MPI::Datatype & recvtype, int root) const
{
(void)MPI_Scatter(const_cast<void *>(sendbuf), sendcount, sendtype,
recvbuf, recvcount, recvtype, root, mpi_comm);
}
inline void
MPI::Comm::Scatterv(const void *sendbuf, const int sendcounts[],
const int displs[], const MPI::Datatype & sendtype,
void *recvbuf, int recvcount,
const MPI::Datatype & recvtype, int root) const
{
(void)MPI_Scatterv(const_cast<void *>(sendbuf),
const_cast<int *>(sendcounts),
const_cast<int *>(displs), sendtype,
recvbuf, recvcount, recvtype,
root, mpi_comm);
}
inline void
MPI::Comm::Allgather(const void *sendbuf, int sendcount,
const MPI::Datatype & sendtype, void *recvbuf,
int recvcount, const MPI::Datatype & recvtype) const
{
(void)MPI_Allgather(const_cast<void *>(sendbuf), sendcount,
sendtype, recvbuf, recvcount,
recvtype, mpi_comm);
}
inline void
MPI::Comm::Allgatherv(const void *sendbuf, int sendcount,
const MPI::Datatype & sendtype, void *recvbuf,
const int recvcounts[], const int displs[],
const MPI::Datatype & recvtype) const
{
(void)MPI_Allgatherv(const_cast<void *>(sendbuf), sendcount,
sendtype, recvbuf,
const_cast<int *>(recvcounts),
const_cast<int *>(displs),
recvtype, mpi_comm);
}
inline void
MPI::Comm::Alltoall(const void *sendbuf, int sendcount,
const MPI::Datatype & sendtype, void *recvbuf,
int recvcount, const MPI::Datatype & recvtype) const
{
(void)MPI_Alltoall(const_cast<void *>(sendbuf), sendcount,
sendtype, recvbuf, recvcount,
recvtype, mpi_comm);
}
inline void
MPI::Comm::Alltoallv(const void *sendbuf, const int sendcounts[],
const int sdispls[], const MPI::Datatype & sendtype,
void *recvbuf, const int recvcounts[],
const int rdispls[],
const MPI::Datatype & recvtype) const
{
(void)MPI_Alltoallv(const_cast<void *>(sendbuf),
const_cast<int *>(sendcounts),
const_cast<int *>(sdispls), sendtype, recvbuf,
const_cast<int *>(recvcounts),
const_cast<int *>(rdispls),
recvtype, mpi_comm);
}
inline void
MPI::Comm::Alltoallw(const void *sendbuf, const int sendcounts[],
const int sdispls[], const MPI::Datatype sendtypes[],
void *recvbuf, const int recvcounts[],
const int rdispls[],
const MPI::Datatype recvtypes[]) const
{
const int comm_size = Get_size();
MPI_Datatype *const data_type_tbl = new MPI_Datatype [2*comm_size];
// This must be done because MPI::Datatype arrays cannot be
// converted directly into MPI_Datatype arrays.
for (int i_rank=0; i_rank < comm_size; i_rank++) {
data_type_tbl[i_rank] = sendtypes[i_rank];
data_type_tbl[i_rank + comm_size] = recvtypes[i_rank];
}
(void)MPI_Alltoallw(const_cast<void *>(sendbuf),
const_cast<int *>(sendcounts),
const_cast<int *>(sdispls),
data_type_tbl, recvbuf,
const_cast<int *>(recvcounts),
const_cast<int *>(rdispls),
&data_type_tbl[comm_size], mpi_comm);
delete[] data_type_tbl;
}
inline void
MPI::Comm::Reduce(const void *sendbuf, void *recvbuf, int count,
const MPI::Datatype & datatype, const MPI::Op& op,
int root) const
{
(void)MPI_Reduce(const_cast<void *>(sendbuf), recvbuf, count, datatype, op, root, mpi_comm);
}
inline void
MPI::Comm::Allreduce(const void *sendbuf, void *recvbuf, int count,
const MPI::Datatype & datatype, const MPI::Op& op) const
{
(void)MPI_Allreduce (const_cast<void *>(sendbuf), recvbuf, count, datatype, op, mpi_comm);
}
inline void
MPI::Comm::Reduce_scatter(const void *sendbuf, void *recvbuf,
int recvcounts[],
const MPI::Datatype & datatype,
const MPI::Op& op) const
{
(void)MPI_Reduce_scatter(const_cast<void *>(sendbuf), recvbuf, recvcounts,
datatype, op, mpi_comm);
}
//
// Process Creation and Management
//
inline void
MPI::Comm::Disconnect()
{
(void) MPI_Comm_disconnect(&mpi_comm);
}
inline MPI::Intercomm
MPI::Comm::Get_parent()
{
MPI_Comm parent;
MPI_Comm_get_parent(&parent);
return parent;
}
inline MPI::Intercomm
MPI::Comm::Join(const int fd)
{
MPI_Comm newcomm;
(void) MPI_Comm_join((int) fd, &newcomm);
return newcomm;
}
//
// External Interfaces
//
inline void
MPI::Comm::Get_name(char* comm_name, int& resultlen) const
{
(void) MPI_Comm_get_name(mpi_comm, comm_name, &resultlen);
}
inline void
MPI::Comm::Set_name(const char* comm_name)
{
(void) MPI_Comm_set_name(mpi_comm, const_cast<char *>(comm_name));
}
//
// Process Topologies
//
inline int
MPI::Comm::Get_topology() const
{
int status;
(void)MPI_Topo_test(mpi_comm, &status);
return status;
}
//
// Environmental Inquiry
//
inline void
MPI::Comm::Abort(int errorcode)
{
(void)MPI_Abort(mpi_comm, errorcode);
}
//
// These C++ bindings are for MPI-2.
// The MPI-1.2 functions called below are all
// going to be deprecated and replaced in MPI-2.
//
inline MPI::Errhandler
MPI::Comm::Get_errhandler() const
{
MPI_Errhandler errhandler;
MPI_Comm_get_errhandler(mpi_comm, &errhandler);
return errhandler;
}
inline void
MPI::Comm::Set_errhandler(const MPI::Errhandler& errhandler)
{
(void)MPI_Comm_set_errhandler(mpi_comm, errhandler);
}
inline void
MPI::Comm::Call_errhandler(int errorcode) const
{
(void) MPI_Comm_call_errhandler(mpi_comm, errorcode);
}
// 1) original Create_keyval that takes the first 2 arguments as C++
// functions
inline int
MPI::Comm::Create_keyval(MPI::Comm::Copy_attr_function* comm_copy_attr_fn,
MPI::Comm::Delete_attr_function* comm_delete_attr_fn,
void* extra_state)
{
// Back-end function does the heavy lifting
int ret, keyval;
ret = do_create_keyval(NULL, NULL,
comm_copy_attr_fn, comm_delete_attr_fn,
extra_state, keyval);
return (MPI_SUCCESS == ret) ? keyval : ret;
}
// 2) overload Create_keyval to take the first 2 arguments as C
// functions
inline int
MPI::Comm::Create_keyval(MPI_Comm_copy_attr_function* comm_copy_attr_fn,
MPI_Comm_delete_attr_function* comm_delete_attr_fn,
void* extra_state)
{
// Back-end function does the heavy lifting
int ret, keyval;
ret = do_create_keyval(comm_copy_attr_fn, comm_delete_attr_fn,
NULL, NULL,
extra_state, keyval);
return (MPI_SUCCESS == ret) ? keyval : ret;
}
// 3) overload Create_keyval to take the first 2 arguments as C++ & C
// functions
inline int
MPI::Comm::Create_keyval(MPI::Comm::Copy_attr_function* comm_copy_attr_fn,
MPI_Comm_delete_attr_function* comm_delete_attr_fn,
void* extra_state)
{
// Back-end function does the heavy lifting
int ret, keyval;
ret = do_create_keyval(NULL, comm_delete_attr_fn,
comm_copy_attr_fn, NULL,
extra_state, keyval);
return (MPI_SUCCESS == ret) ? keyval : ret;
}
// 4) overload Create_keyval to take the first 2 arguments as C & C++
// functions
inline int
MPI::Comm::Create_keyval(MPI_Comm_copy_attr_function* comm_copy_attr_fn,
MPI::Comm::Delete_attr_function* comm_delete_attr_fn,
void* extra_state)
{
// Back-end function does the heavy lifting
int ret, keyval;
ret = do_create_keyval(comm_copy_attr_fn, NULL,
NULL, comm_delete_attr_fn,
extra_state, keyval);
return (MPI_SUCCESS == ret) ? keyval : ret;
}
inline void
MPI::Comm::Free_keyval(int& comm_keyval)
{
(void) MPI_Comm_free_keyval(&comm_keyval);
}
inline void
MPI::Comm::Set_attr(int comm_keyval, const void* attribute_val) const
{
(void)MPI_Comm_set_attr(mpi_comm, comm_keyval, const_cast<void*>(attribute_val));
}
inline bool
MPI::Comm::Get_attr(int comm_keyval, void* attribute_val) const
{
int flag;
(void)MPI_Comm_get_attr(mpi_comm, comm_keyval, attribute_val, &flag);
return OPAL_INT_TO_BOOL(flag);
}
inline void
MPI::Comm::Delete_attr(int comm_keyval)
{
(void)MPI_Comm_delete_attr(mpi_comm, comm_keyval);
}
// Comment out the unused parameters so that compilers don't warn
// about them. Use comments instead of just deleting the param names
// outright so that we know/remember what they are.
inline int
MPI::Comm::NULL_COPY_FN(const MPI::Comm& /* oldcomm */,
int /* comm_keyval */,
void* /* extra_state */,
void* /* attribute_val_in */,
void* /* attribute_val_out */,
bool& flag)
{
flag = false;
return MPI_SUCCESS;
}
inline int
MPI::Comm::DUP_FN(const MPI::Comm& oldcomm, int comm_keyval,
void* extra_state, void* attribute_val_in,
void* attribute_val_out, bool& flag)
{
if (sizeof(bool) != sizeof(int)) {
int f = (int)flag;
int ret;
ret = MPI_COMM_DUP_FN(oldcomm, comm_keyval, extra_state,
attribute_val_in, attribute_val_out, &f);
flag = OPAL_INT_TO_BOOL(f);
return ret;
} else {
return MPI_COMM_DUP_FN(oldcomm, comm_keyval, extra_state,
attribute_val_in, attribute_val_out,
(int*)&flag);
}
}
// Comment out the unused parameters so that compilers don't warn
// about them. Use comments instead of just deleting the param names
// outright so that we know/remember what they are.
inline int
MPI::Comm::NULL_DELETE_FN(MPI::Comm& /* comm */,
int /* comm_keyval */,
void* /* attribute_val */,
void* /* extra_state */)
{
return MPI_SUCCESS;
}
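To show how thin these wrappers are, a hedged two-rank usage sketch; example_ping is hypothetical and assumes MPI::Init has already run with at least two processes:

// Hypothetical usage -- each C++ call forwards straight to the C binding
// (e.g., Send() is just MPI_Send() on the wrapped communicator).
void example_ping(void)
{
    int rank = MPI::COMM_WORLD.Get_rank();
    int value = 42;
    if (0 == rank) {
        MPI::COMM_WORLD.Send(&value, 1, MPI::INT, 1 /* dest */, 0 /* tag */);
    } else if (1 == rank) {
        MPI::Status status;
        MPI::COMM_WORLD.Recv(&value, 1, MPI::INT, 0, 0, status);
    }
}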

View file

@ -1,293 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2008-2009 Cisco Systems, Inc. All rights reserved.
// Copyright (c) 2011 FUJITSU LIMITED. All rights reserved.
// Copyright (c) 2017 Research Organization for Information Science
// and Technology (RIST). All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
// return codes
static const int SUCCESS = MPI_SUCCESS;
static const int ERR_BUFFER = MPI_ERR_BUFFER;
static const int ERR_COUNT = MPI_ERR_COUNT;
static const int ERR_TYPE = MPI_ERR_TYPE;
static const int ERR_TAG = MPI_ERR_TAG ;
static const int ERR_COMM = MPI_ERR_COMM;
static const int ERR_RANK = MPI_ERR_RANK;
static const int ERR_REQUEST = MPI_ERR_REQUEST;
static const int ERR_ROOT = MPI_ERR_ROOT;
static const int ERR_GROUP = MPI_ERR_GROUP;
static const int ERR_OP = MPI_ERR_OP;
static const int ERR_TOPOLOGY = MPI_ERR_TOPOLOGY;
static const int ERR_DIMS = MPI_ERR_DIMS;
static const int ERR_ARG = MPI_ERR_ARG;
static const int ERR_UNKNOWN = MPI_ERR_UNKNOWN;
static const int ERR_TRUNCATE = MPI_ERR_TRUNCATE;
static const int ERR_OTHER = MPI_ERR_OTHER;
static const int ERR_INTERN = MPI_ERR_INTERN;
static const int ERR_PENDING = MPI_ERR_PENDING;
static const int ERR_IN_STATUS = MPI_ERR_IN_STATUS;
static const int ERR_ACCESS = MPI_ERR_ACCESS;
static const int ERR_AMODE = MPI_ERR_AMODE;
static const int ERR_ASSERT = MPI_ERR_ASSERT;
static const int ERR_BAD_FILE = MPI_ERR_BAD_FILE;
static const int ERR_BASE = MPI_ERR_BASE;
static const int ERR_CONVERSION = MPI_ERR_CONVERSION;
static const int ERR_DISP = MPI_ERR_DISP;
static const int ERR_DUP_DATAREP = MPI_ERR_DUP_DATAREP;
static const int ERR_FILE_EXISTS = MPI_ERR_FILE_EXISTS;
static const int ERR_FILE_IN_USE = MPI_ERR_FILE_IN_USE;
static const int ERR_FILE = MPI_ERR_FILE;
static const int ERR_INFO_KEY = MPI_ERR_INFO_KEY;
static const int ERR_INFO_NOKEY = MPI_ERR_INFO_NOKEY;
static const int ERR_INFO_VALUE = MPI_ERR_INFO_VALUE;
static const int ERR_INFO = MPI_ERR_INFO;
static const int ERR_IO = MPI_ERR_IO;
static const int ERR_KEYVAL = MPI_ERR_KEYVAL;
static const int ERR_LOCKTYPE = MPI_ERR_LOCKTYPE;
static const int ERR_NAME = MPI_ERR_NAME;
static const int ERR_NO_MEM = MPI_ERR_NO_MEM;
static const int ERR_NOT_SAME = MPI_ERR_NOT_SAME;
static const int ERR_NO_SPACE = MPI_ERR_NO_SPACE;
static const int ERR_NO_SUCH_FILE = MPI_ERR_NO_SUCH_FILE;
static const int ERR_PORT = MPI_ERR_PORT;
static const int ERR_QUOTA = MPI_ERR_QUOTA;
static const int ERR_READ_ONLY = MPI_ERR_READ_ONLY;
static const int ERR_RMA_CONFLICT = MPI_ERR_RMA_CONFLICT;
static const int ERR_RMA_SYNC = MPI_ERR_RMA_SYNC;
static const int ERR_SERVICE = MPI_ERR_SERVICE;
static const int ERR_SIZE = MPI_ERR_SIZE;
static const int ERR_SPAWN = MPI_ERR_SPAWN;
static const int ERR_UNSUPPORTED_DATAREP = MPI_ERR_UNSUPPORTED_DATAREP;
static const int ERR_UNSUPPORTED_OPERATION = MPI_ERR_UNSUPPORTED_OPERATION;
static const int ERR_WIN = MPI_ERR_WIN;
static const int ERR_LASTCODE = MPI_ERR_LASTCODE;
// assorted constants
OMPI_DECLSPEC extern void* const BOTTOM;
OMPI_DECLSPEC extern void* const IN_PLACE;
static const int PROC_NULL = MPI_PROC_NULL;
static const int ANY_SOURCE = MPI_ANY_SOURCE;
static const int ROOT = MPI_ROOT;
static const int ANY_TAG = MPI_ANY_TAG;
static const int UNDEFINED = MPI_UNDEFINED;
static const int BSEND_OVERHEAD = MPI_BSEND_OVERHEAD;
static const int KEYVAL_INVALID = MPI_KEYVAL_INVALID;
static const int ORDER_C = MPI_ORDER_C;
static const int ORDER_FORTRAN = MPI_ORDER_FORTRAN;
static const int DISTRIBUTE_BLOCK = MPI_DISTRIBUTE_BLOCK;
static const int DISTRIBUTE_CYCLIC = MPI_DISTRIBUTE_CYCLIC;
static const int DISTRIBUTE_NONE = MPI_DISTRIBUTE_NONE;
static const int DISTRIBUTE_DFLT_DARG = MPI_DISTRIBUTE_DFLT_DARG;
// error-handling specifiers
OMPI_DECLSPEC extern const Errhandler ERRORS_ARE_FATAL;
OMPI_DECLSPEC extern const Errhandler ERRORS_RETURN;
OMPI_DECLSPEC extern const Errhandler ERRORS_THROW_EXCEPTIONS;
// typeclass definitions for MPI_Type_match_size
static const int TYPECLASS_INTEGER = MPI_TYPECLASS_INTEGER;
static const int TYPECLASS_REAL = MPI_TYPECLASS_REAL;
static const int TYPECLASS_COMPLEX = MPI_TYPECLASS_COMPLEX;
// maximum sizes for strings
static const int MAX_PROCESSOR_NAME = MPI_MAX_PROCESSOR_NAME;
static const int MAX_ERROR_STRING = MPI_MAX_ERROR_STRING;
static const int MAX_INFO_KEY = MPI_MAX_INFO_KEY;
static const int MAX_INFO_VAL = MPI_MAX_INFO_VAL;
static const int MAX_PORT_NAME = MPI_MAX_PORT_NAME;
static const int MAX_OBJECT_NAME = MPI_MAX_OBJECT_NAME;
// elementary datatypes (C / C++)
OMPI_DECLSPEC extern const Datatype CHAR;
OMPI_DECLSPEC extern const Datatype SHORT;
OMPI_DECLSPEC extern const Datatype INT;
OMPI_DECLSPEC extern const Datatype LONG;
OMPI_DECLSPEC extern const Datatype SIGNED_CHAR;
OMPI_DECLSPEC extern const Datatype UNSIGNED_CHAR;
OMPI_DECLSPEC extern const Datatype UNSIGNED_SHORT;
OMPI_DECLSPEC extern const Datatype UNSIGNED;
OMPI_DECLSPEC extern const Datatype UNSIGNED_LONG;
OMPI_DECLSPEC extern const Datatype FLOAT;
OMPI_DECLSPEC extern const Datatype DOUBLE;
OMPI_DECLSPEC extern const Datatype LONG_DOUBLE;
OMPI_DECLSPEC extern const Datatype BYTE;
OMPI_DECLSPEC extern const Datatype PACKED;
OMPI_DECLSPEC extern const Datatype WCHAR;
// datatypes for reduction functions (C / C++)
OMPI_DECLSPEC extern const Datatype FLOAT_INT;
OMPI_DECLSPEC extern const Datatype DOUBLE_INT;
OMPI_DECLSPEC extern const Datatype LONG_INT;
OMPI_DECLSPEC extern const Datatype TWOINT;
OMPI_DECLSPEC extern const Datatype SHORT_INT;
OMPI_DECLSPEC extern const Datatype LONG_DOUBLE_INT;
// elementary datatype (Fortran)
OMPI_DECLSPEC extern const Datatype INTEGER;
OMPI_DECLSPEC extern const Datatype REAL;
OMPI_DECLSPEC extern const Datatype DOUBLE_PRECISION;
OMPI_DECLSPEC extern const Datatype F_COMPLEX;
OMPI_DECLSPEC extern const Datatype LOGICAL;
OMPI_DECLSPEC extern const Datatype CHARACTER;
// datatype for reduction functions (Fortran)
OMPI_DECLSPEC extern const Datatype TWOREAL;
OMPI_DECLSPEC extern const Datatype TWODOUBLE_PRECISION;
OMPI_DECLSPEC extern const Datatype TWOINTEGER;
// optional datatypes (Fortran)
OMPI_DECLSPEC extern const Datatype INTEGER1;
OMPI_DECLSPEC extern const Datatype INTEGER2;
OMPI_DECLSPEC extern const Datatype INTEGER4;
OMPI_DECLSPEC extern const Datatype REAL2;
OMPI_DECLSPEC extern const Datatype REAL4;
OMPI_DECLSPEC extern const Datatype REAL8;
// optional datatype (C / C++)
OMPI_DECLSPEC extern const Datatype LONG_LONG;
OMPI_DECLSPEC extern const Datatype LONG_LONG_INT;
OMPI_DECLSPEC extern const Datatype UNSIGNED_LONG_LONG;
// c++ types
OMPI_DECLSPEC extern const Datatype BOOL;
OMPI_DECLSPEC extern const Datatype COMPLEX;
OMPI_DECLSPEC extern const Datatype DOUBLE_COMPLEX;
OMPI_DECLSPEC extern const Datatype F_DOUBLE_COMPLEX;
OMPI_DECLSPEC extern const Datatype LONG_DOUBLE_COMPLEX;
// special datatypes for construction of derived datatypes
OMPI_DECLSPEC extern const Datatype UB;
OMPI_DECLSPEC extern const Datatype LB;
// datatype decoding constants
static const int COMBINER_NAMED = MPI_COMBINER_NAMED;
static const int COMBINER_DUP = MPI_COMBINER_DUP;
static const int COMBINER_CONTIGUOUS = MPI_COMBINER_CONTIGUOUS;
static const int COMBINER_VECTOR = MPI_COMBINER_VECTOR;
static const int COMBINER_HVECTOR_INTEGER = MPI_COMBINER_HVECTOR_INTEGER;
static const int COMBINER_HVECTOR = MPI_COMBINER_HVECTOR;
static const int COMBINER_INDEXED = MPI_COMBINER_INDEXED;
static const int COMBINER_HINDEXED_INTEGER = MPI_COMBINER_HINDEXED_INTEGER;
static const int COMBINER_HINDEXED = MPI_COMBINER_HINDEXED;
static const int COMBINER_INDEXED_BLOCK = MPI_COMBINER_INDEXED_BLOCK;
static const int COMBINER_STRUCT_INTEGER = MPI_COMBINER_STRUCT_INTEGER;
static const int COMBINER_STRUCT = MPI_COMBINER_STRUCT;
static const int COMBINER_SUBARRAY = MPI_COMBINER_SUBARRAY;
static const int COMBINER_DARRAY = MPI_COMBINER_DARRAY;
static const int COMBINER_F90_REAL = MPI_COMBINER_F90_REAL;
static const int COMBINER_F90_COMPLEX = MPI_COMBINER_F90_COMPLEX;
static const int COMBINER_F90_INTEGER = MPI_COMBINER_F90_INTEGER;
static const int COMBINER_RESIZED = MPI_COMBINER_RESIZED;
// thread constants
static const int THREAD_SINGLE = MPI_THREAD_SINGLE;
static const int THREAD_FUNNELED = MPI_THREAD_FUNNELED;
static const int THREAD_SERIALIZED = MPI_THREAD_SERIALIZED;
static const int THREAD_MULTIPLE = MPI_THREAD_MULTIPLE;
// reserved communicators
// JGS: these cannot be const because Set_errhandler is not const
OMPI_DECLSPEC extern Intracomm COMM_WORLD;
OMPI_DECLSPEC extern Intracomm COMM_SELF;
// results of communicator and group comparisons
static const int IDENT = MPI_IDENT;
static const int CONGRUENT = MPI_CONGRUENT;
static const int SIMILAR = MPI_SIMILAR;
static const int UNEQUAL = MPI_UNEQUAL;
// environmental inquiry keys
static const int TAG_UB = MPI_TAG_UB;
static const int HOST = MPI_HOST;
static const int IO = MPI_IO;
static const int WTIME_IS_GLOBAL = MPI_WTIME_IS_GLOBAL;
static const int APPNUM = MPI_APPNUM;
static const int LASTUSEDCODE = MPI_LASTUSEDCODE;
static const int UNIVERSE_SIZE = MPI_UNIVERSE_SIZE;
static const int WIN_BASE = MPI_WIN_BASE;
static const int WIN_SIZE = MPI_WIN_SIZE;
static const int WIN_DISP_UNIT = MPI_WIN_DISP_UNIT;
// collective operations
OMPI_DECLSPEC extern const Op MAX;
OMPI_DECLSPEC extern const Op MIN;
OMPI_DECLSPEC extern const Op SUM;
OMPI_DECLSPEC extern const Op PROD;
OMPI_DECLSPEC extern const Op MAXLOC;
OMPI_DECLSPEC extern const Op MINLOC;
OMPI_DECLSPEC extern const Op BAND;
OMPI_DECLSPEC extern const Op BOR;
OMPI_DECLSPEC extern const Op BXOR;
OMPI_DECLSPEC extern const Op LAND;
OMPI_DECLSPEC extern const Op LOR;
OMPI_DECLSPEC extern const Op LXOR;
OMPI_DECLSPEC extern const Op REPLACE;
// null handles
OMPI_DECLSPEC extern const Group GROUP_NULL;
OMPI_DECLSPEC extern const Win WIN_NULL;
OMPI_DECLSPEC extern const Info INFO_NULL;
OMPI_DECLSPEC extern Comm_Null COMM_NULL;
OMPI_DECLSPEC extern const Datatype DATATYPE_NULL;
OMPI_DECLSPEC extern Request REQUEST_NULL;
OMPI_DECLSPEC extern const Op OP_NULL;
OMPI_DECLSPEC extern const Errhandler ERRHANDLER_NULL;
OMPI_DECLSPEC extern const File FILE_NULL;
// constants specifying empty or ignored input
OMPI_DECLSPEC extern const char** ARGV_NULL;
OMPI_DECLSPEC extern const char*** ARGVS_NULL;
// empty group
OMPI_DECLSPEC extern const Group GROUP_EMPTY;
// topologies
static const int GRAPH = MPI_GRAPH;
static const int CART = MPI_CART;
// MPI-2 IO
static const int MODE_CREATE = MPI_MODE_CREATE;
static const int MODE_RDONLY = MPI_MODE_RDONLY;
static const int MODE_WRONLY = MPI_MODE_WRONLY;
static const int MODE_RDWR = MPI_MODE_RDWR;
static const int MODE_DELETE_ON_CLOSE = MPI_MODE_DELETE_ON_CLOSE;
static const int MODE_UNIQUE_OPEN = MPI_MODE_UNIQUE_OPEN;
static const int MODE_EXCL = MPI_MODE_EXCL;
static const int MODE_APPEND = MPI_MODE_APPEND;
static const int MODE_SEQUENTIAL = MPI_MODE_SEQUENTIAL;
static const int DISPLACEMENT_CURRENT = MPI_DISPLACEMENT_CURRENT;
#if !defined(OMPI_IGNORE_CXX_SEEK) && OMPI_WANT_MPI_CXX_SEEK
static const int SEEK_SET = ::SEEK_SET;
static const int SEEK_CUR = ::SEEK_CUR;
static const int SEEK_END = ::SEEK_END;
#endif
static const int MAX_DATAREP_STRING = MPI_MAX_DATAREP_STRING;
// one-sided constants
static const int MODE_NOCHECK = MPI_MODE_NOCHECK;
static const int MODE_NOPRECEDE = MPI_MODE_NOPRECEDE;
static const int MODE_NOPUT = MPI_MODE_NOPUT;
static const int MODE_NOSTORE = MPI_MODE_NOSTORE;
static const int MODE_NOSUCCEED = MPI_MODE_NOSUCCEED;
static const int LOCK_EXCLUSIVE = MPI_LOCK_EXCLUSIVE;
static const int LOCK_SHARED = MPI_LOCK_SHARED;
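These declarations are one-to-one renames of the C constants into the MPI namespace; a hedged sketch of the correspondence (example_allreduce is hypothetical and assumes MPI::Init has already run):

// Hypothetical usage -- each MPI::X above is the corresponding C MPI_X
// value, so the C++ spellings interoperate directly with the C API.
void example_allreduce(void)
{
    int in = MPI::COMM_WORLD.Get_rank(), out = 0;
    // MPI::INT wraps MPI_INT; MPI::SUM wraps MPI_SUM (declared above).
    MPI::COMM_WORLD.Allreduce(&in, &out, 1, MPI::INT, MPI::SUM);
    // out now holds the sum of all ranks in COMM_WORLD.
}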

View file

@ -1,152 +0,0 @@
/* -*- Mode: C; c-basic-offset:4 ; indent-tabs-mode:nil -*- */
/*
* Copyright (c) 2016 Los Alamos National Security, LLC. All rights
* reserved.
* Copyright (c) 2016-2017 Research Organization for Information Science
* and Technology (RIST). All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
*
* $HEADER$
*/
#include "ompi_config.h"
#include "ompi/communicator/communicator.h"
#include "ompi/attribute/attribute.h"
#include "ompi/errhandler/errhandler.h"
#include "ompi/file/file.h"
#include "opal/class/opal_list.h"
#include "cxx_glue.h"
typedef struct ompi_cxx_intercept_file_extra_state_item_t {
opal_list_item_t super;
ompi_cxx_intercept_file_extra_state_t state;
} ompi_cxx_intercept_file_extra_state_item_t;
OBJ_CLASS_DECLARATION(ompi_cxx_intercept_file_extra_state_item_t);
OBJ_CLASS_INSTANCE(ompi_cxx_intercept_file_extra_state_item_t, opal_list_item_t,
NULL, NULL);
ompi_cxx_communicator_type_t ompi_cxx_comm_get_type (MPI_Comm comm)
{
if (OMPI_COMM_IS_GRAPH(comm)) {
return OMPI_CXX_COMM_TYPE_GRAPH;
} else if (OMPI_COMM_IS_CART(comm)) {
return OMPI_CXX_COMM_TYPE_CART;
} else if (OMPI_COMM_IS_INTRA(comm)) {
return OMPI_CXX_COMM_TYPE_INTRACOMM;
} else if (OMPI_COMM_IS_INTER(comm)) {
return OMPI_CXX_COMM_TYPE_INTERCOMM;
}
return OMPI_CXX_COMM_TYPE_UNKNOWN;
}
int ompi_cxx_errhandler_invoke_comm (MPI_Comm comm, int ret, const char *message)
{
return OMPI_ERRHANDLER_INVOKE (comm, ret, message);
}
int ompi_cxx_errhandler_invoke_file (MPI_File file, int ret, const char *message)
{
return OMPI_ERRHANDLER_INVOKE (file, ret, message);
}
int ompi_cxx_attr_create_keyval_comm (MPI_Comm_copy_attr_function *copy_fn,
MPI_Comm_delete_attr_function* delete_fn, int *keyval, void *extra_state,
int flags, void *bindings_extra_state)
{
ompi_attribute_fn_ptr_union_t copy_fn_u = {.attr_communicator_copy_fn =
(MPI_Comm_internal_copy_attr_function *) copy_fn};
ompi_attribute_fn_ptr_union_t delete_fn_u = {.attr_communicator_delete_fn =
(MPI_Comm_delete_attr_function *) delete_fn};
return ompi_attr_create_keyval (COMM_ATTR, copy_fn_u, delete_fn_u, keyval, extra_state, 0, bindings_extra_state);
}
int ompi_cxx_attr_create_keyval_win (MPI_Win_copy_attr_function *copy_fn,
MPI_Win_delete_attr_function* delete_fn, int *keyval, void *extra_state,
int flags, void *bindings_extra_state)
{
ompi_attribute_fn_ptr_union_t copy_fn_u = {.attr_win_copy_fn =
(MPI_Win_internal_copy_attr_function *) copy_fn};
ompi_attribute_fn_ptr_union_t delete_fn_u = {.attr_win_delete_fn =
(MPI_Win_delete_attr_function *) delete_fn};
return ompi_attr_create_keyval (WIN_ATTR, copy_fn_u, delete_fn_u, keyval, extra_state, 0, NULL);
}
int ompi_cxx_attr_create_keyval_type (MPI_Type_copy_attr_function *copy_fn,
MPI_Type_delete_attr_function* delete_fn, int *keyval, void *extra_state,
int flags, void *bindings_extra_state)
{
ompi_attribute_fn_ptr_union_t copy_fn_u = {.attr_datatype_copy_fn =
(MPI_Type_internal_copy_attr_function *) copy_fn};
ompi_attribute_fn_ptr_union_t delete_fn_u = {.attr_datatype_delete_fn =
(MPI_Type_delete_attr_function *) delete_fn};
return ompi_attr_create_keyval (TYPE_ATTR, copy_fn_u, delete_fn_u, keyval, extra_state, 0, NULL);
}
MPI_Errhandler ompi_cxx_errhandler_create_comm (ompi_cxx_dummy_fn_t *fn)
{
ompi_errhandler_t *errhandler;
errhandler = ompi_errhandler_create(OMPI_ERRHANDLER_TYPE_COMM,
(ompi_errhandler_generic_handler_fn_t *) fn,
OMPI_ERRHANDLER_LANG_CXX);
errhandler->eh_cxx_dispatch_fn =
(ompi_errhandler_cxx_dispatch_fn_t *) ompi_mpi_cxx_comm_errhandler_invoke;
return errhandler;
}
MPI_Errhandler ompi_cxx_errhandler_create_win (ompi_cxx_dummy_fn_t *fn)
{
ompi_errhandler_t *errhandler;
errhandler = ompi_errhandler_create(OMPI_ERRHANDLER_TYPE_WIN,
(ompi_errhandler_generic_handler_fn_t *) fn,
OMPI_ERRHANDLER_LANG_CXX);
errhandler->eh_cxx_dispatch_fn =
(ompi_errhandler_cxx_dispatch_fn_t *) ompi_mpi_cxx_win_errhandler_invoke;
return errhandler;
}
MPI_Errhandler ompi_cxx_errhandler_create_file (ompi_cxx_dummy_fn_t *fn)
{
ompi_errhandler_t *errhandler;
errhandler = ompi_errhandler_create(OMPI_ERRHANDLER_TYPE_FILE,
(ompi_errhandler_generic_handler_fn_t *) fn,
OMPI_ERRHANDLER_LANG_CXX);
errhandler->eh_cxx_dispatch_fn =
(ompi_errhandler_cxx_dispatch_fn_t *) ompi_mpi_cxx_file_errhandler_invoke;
return errhandler;
}
ompi_cxx_intercept_file_extra_state_t
*ompi_cxx_new_intercept_state (void *read_fn_cxx, void *write_fn_cxx, void *extent_fn_cxx,
void *extra_state_cxx)
{
ompi_cxx_intercept_file_extra_state_item_t *intercept;
intercept = OBJ_NEW(ompi_cxx_intercept_file_extra_state_item_t);
if (NULL == intercept) {
return NULL;
}
opal_list_append(&ompi_registered_datareps, &intercept->super);
intercept->state.read_fn_cxx = read_fn_cxx;
intercept->state.write_fn_cxx = write_fn_cxx;
intercept->state.extent_fn_cxx = extent_fn_cxx;
intercept->state.extra_state_cxx = extra_state_cxx;
return &intercept->state;
}
void ompi_cxx_errhandler_set_callbacks (struct ompi_errhandler_t *errhandler, MPI_Comm_errhandler_function *eh_comm_fn,
ompi_file_errhandler_function *eh_file_fn, MPI_Win_errhandler_function *eh_win_fn)
{
errhandler->eh_comm_fn = eh_comm_fn;
errhandler->eh_file_fn = eh_file_fn;
errhandler->eh_win_fn = eh_win_fn;
}

View file

@ -1,88 +0,0 @@
/* -*- Mode: C; c-basic-offset:4 ; indent-tabs-mode:nil -*- */
/*
* Copyright (c) 2016-2017 Los Alamos National Security, LLC. All rights
* reserved.
* Copyright (c) 2016-2017 Research Organization for Information Science
* and Technology (RIST). All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
*
* $HEADER$
*/
#if !defined(OMPI_CXX_COMM_GLUE_H)
#define OMPI_CXX_COMM_GLUE_H
#include "ompi_config.h"
#include <stdlib.h>
#include "mpi.h"
#if defined(c_plusplus) || defined(__cplusplus)
extern "C" {
#endif
typedef struct ompi_cxx_intercept_file_extra_state_t {
void *read_fn_cxx;
void *write_fn_cxx;
void *extent_fn_cxx;
void *extra_state_cxx;
} ompi_cxx_intercept_file_extra_state_t;
enum ompi_cxx_communicator_type_t {
OMPI_CXX_COMM_TYPE_UNKNOWN,
OMPI_CXX_COMM_TYPE_INTRACOMM,
OMPI_CXX_COMM_TYPE_INTERCOMM,
OMPI_CXX_COMM_TYPE_CART,
OMPI_CXX_COMM_TYPE_GRAPH,
};
typedef enum ompi_cxx_communicator_type_t ompi_cxx_communicator_type_t;
/* need to declare this error handler here */
struct ompi_predefined_errhandler_t;
extern struct ompi_predefined_errhandler_t ompi_mpi_errors_throw_exceptions;
/**
* C++ invocation function signature
*/
typedef void (ompi_cxx_dummy_fn_t) (void);
ompi_cxx_communicator_type_t ompi_cxx_comm_get_type (MPI_Comm comm);
int ompi_cxx_errhandler_invoke_comm (MPI_Comm comm, int ret, const char *message);
int ompi_cxx_attr_create_keyval_comm (MPI_Comm_copy_attr_function *copy_fn,
MPI_Comm_delete_attr_function* delete_fn, int *keyval, void *extra_state,
int flags, void *bindings_extra_state);
int ompi_cxx_attr_create_keyval_win (MPI_Win_copy_attr_function *copy_fn,
MPI_Win_delete_attr_function* delete_fn, int *keyval, void *extra_state,
int flags, void *bindings_extra_state);
int ompi_cxx_attr_create_keyval_type (MPI_Type_copy_attr_function *copy_fn,
MPI_Type_delete_attr_function* delete_fn, int *keyval, void *extra_state,
int flags, void *bindings_extra_state);
void ompi_mpi_cxx_comm_errhandler_invoke (MPI_Comm *mpi_comm, int *err,
const char *message, void *comm_fn);
void ompi_mpi_cxx_win_errhandler_invoke (MPI_Win *mpi_win, int *err,
const char *message, void *win_fn);
int ompi_cxx_errhandler_invoke_file (MPI_File file, int ret, const char *message);
void ompi_mpi_cxx_file_errhandler_invoke (MPI_File *mpi_file, int *err,
const char *message, void *file_fn);
MPI_Errhandler ompi_cxx_errhandler_create_comm (ompi_cxx_dummy_fn_t *fn);
MPI_Errhandler ompi_cxx_errhandler_create_win (ompi_cxx_dummy_fn_t *fn);
MPI_Errhandler ompi_cxx_errhandler_create_file (ompi_cxx_dummy_fn_t *fn);
ompi_cxx_intercept_file_extra_state_t
*ompi_cxx_new_intercept_state (void *read_fn_cxx, void *write_fn_cxx, void *extent_fn_cxx,
void *extra_state_cxx);
void ompi_cxx_errhandler_set_callbacks (struct ompi_errhandler_t *errhandler, MPI_Comm_errhandler_function *eh_comm_fn,
ompi_file_errhandler_function *eh_file_fn, MPI_Win_errhandler_function *eh_win_fn);
#if defined(c_plusplus) || defined(__cplusplus)
}
#endif
#endif /* OMPI_CXX_COMM_GLUE_H */

View file

@ -1,103 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2006-2016 Los Alamos National Security, LLC. All rights
// reserved.
// Copyright (c) 2007-2008 Sun Microsystems, Inc. All rights reserved.
// Copyright (c) 2007-2008 Cisco Systems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
// do not include ompi_config.h because it kills the free/malloc defines
#include "mpi.h"
#include "ompi/mpi/cxx/mpicxx.h"
#include "ompi/constants.h"
#include "cxx_glue.h"
void
MPI::Datatype::Free()
{
(void)MPI_Type_free(&mpi_datatype);
}
int
MPI::Datatype::do_create_keyval(MPI_Type_copy_attr_function* c_copy_fn,
MPI_Type_delete_attr_function* c_delete_fn,
Copy_attr_function* cxx_copy_fn,
Delete_attr_function* cxx_delete_fn,
void* extra_state, int &keyval)
{
int ret, count = 0;
keyval_intercept_data_t *cxx_extra_state;
// If both the callbacks are C, then do the simple thing -- no
// need for all the C++ machinery.
if (NULL != c_copy_fn && NULL != c_delete_fn) {
ret = ompi_cxx_attr_create_keyval_type (c_copy_fn, c_delete_fn, &keyval,
extra_state, 0, NULL);
if (MPI_SUCCESS != ret) {
return ompi_cxx_errhandler_invoke_comm (MPI_COMM_WORLD, ret,
"MPI::Datatype::Create_keyval");
        }
        // The keyval was fully created with pure-C callbacks; return now
        // rather than falling through to the C++ intercept path below.
        return MPI_SUCCESS;
    }
// If either callback is C++, then we have to use the C++
// callbacks for both, because we have to generate a new
// extra_state. And since we only get one extra_state (i.e., we
// don't get one extra_state for the copy callback and another
// extra_state for the delete callback), we have to use the C++
// callbacks for both (and therefore translate the C++-special
// extra_state into the user's original extra_state).
cxx_extra_state = (keyval_intercept_data_t *) malloc(sizeof(*cxx_extra_state));
if (NULL == cxx_extra_state) {
return ompi_cxx_errhandler_invoke_comm (MPI_COMM_WORLD, MPI_ERR_NO_MEM,
"MPI::Datatype::Create_keyval");
}
cxx_extra_state->c_copy_fn = c_copy_fn;
cxx_extra_state->cxx_copy_fn = cxx_copy_fn;
cxx_extra_state->c_delete_fn = c_delete_fn;
cxx_extra_state->cxx_delete_fn = cxx_delete_fn;
cxx_extra_state->extra_state = extra_state;
// Error check. Must have exactly 2 non-NULL function pointers.
if (NULL != c_copy_fn) {
++count;
}
if (NULL != c_delete_fn) {
++count;
}
if (NULL != cxx_copy_fn) {
++count;
}
if (NULL != cxx_delete_fn) {
++count;
}
if (2 != count) {
free(cxx_extra_state);
return ompi_cxx_errhandler_invoke_comm (MPI_COMM_WORLD, MPI_ERR_ARG,
"MPI::Datatype::Create_keyval");
}
// We do not call MPI_Type_create_keyval() here because we need to
// pass in a special destructor to the backend keyval creation
// that gets invoked when the keyval's reference count goes to 0
// and is finally destroyed (i.e., clean up some caching/lookup
// data here in the C++ bindings layer). This destructor is
// *only* used in the C++ bindings, so it's not set by the C
// MPI_Comm_create_keyval(). Hence, we do all the work here (and
// set the destructor atomically when the keyval is
// created).
ret = ompi_cxx_attr_create_keyval_type ((MPI_Type_copy_attr_function *) ompi_mpi_cxx_type_copy_attr_intercept,
ompi_mpi_cxx_type_delete_attr_intercept, &keyval,
cxx_extra_state, 0, NULL);
if (OMPI_SUCCESS != ret) {
return ompi_cxx_errhandler_invoke_comm (MPI_COMM_WORLD, ret,
"MPI::Datatype::Create_keyval");
}
return MPI_SUCCESS;
}

View file

@ -1,258 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006-2008 Sun Microsystems, Inc. All rights reserved.
// Copyright (c) 2006-2007 Cisco Systems, Inc. All rights reserved.
// Copyright (c) 2011 FUJITSU LIMITED. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
class Datatype {
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// friend class PMPI::Datatype;
#endif
public:
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// construction
inline Datatype() { }
// inter-language operability
inline Datatype(MPI_Datatype i) : pmpi_datatype(i) { }
// copy / assignment
inline Datatype(const Datatype& dt) : pmpi_datatype(dt.pmpi_datatype) { }
inline Datatype(const PMPI::Datatype& dt) : pmpi_datatype(dt) { }
inline virtual ~Datatype() {}
inline Datatype& operator=(const Datatype& dt) {
pmpi_datatype = dt.pmpi_datatype; return *this; }
// comparison
inline bool operator== (const Datatype &a) const
{ return (bool) (pmpi_datatype == a.pmpi_datatype); }
inline bool operator!= (const Datatype &a) const
{ return (bool) !(*this == a); }
// inter-language operability
inline Datatype& operator= (const MPI_Datatype &i)
{ pmpi_datatype = i; return *this; }
inline operator MPI_Datatype() const { return (MPI_Datatype)pmpi_datatype; }
// inline operator MPI_Datatype* ()/* JGS const */ { return pmpi_datatype; }
inline operator const PMPI::Datatype&() const { return pmpi_datatype; }
inline const PMPI::Datatype& pmpi() const { return pmpi_datatype; }
#else
// construction / destruction
inline Datatype() : mpi_datatype(MPI_DATATYPE_NULL) { }
inline virtual ~Datatype() {}
// inter-language operability
inline Datatype(MPI_Datatype i) : mpi_datatype(i) { }
// copy / assignment
inline Datatype(const Datatype& dt) : mpi_datatype(dt.mpi_datatype) { }
inline Datatype& operator=(const Datatype& dt) {
mpi_datatype = dt.mpi_datatype; return *this; }
// comparison
inline bool operator== (const Datatype &a) const
{ return (bool) (mpi_datatype == a.mpi_datatype); }
inline bool operator!= (const Datatype &a) const
{ return (bool) !(*this == a); }
// inter-language operability
inline Datatype& operator= (const MPI_Datatype &i)
{ mpi_datatype = i; return *this; }
inline operator MPI_Datatype () const { return mpi_datatype; }
// inline operator MPI_Datatype* ()/* JGS const */ { return &mpi_datatype; }
#endif
//
// User Defined Functions
//
typedef int Copy_attr_function(const Datatype& oldtype,
int type_keyval,
void* extra_state,
const void* attribute_val_in,
void* attribute_val_out,
bool& flag);
typedef int Delete_attr_function(Datatype& type, int type_keyval,
void* attribute_val, void* extra_state);
//
// Point-to-Point Communication
//
virtual Datatype Create_contiguous(int count) const;
virtual Datatype Create_vector(int count, int blocklength,
int stride) const;
virtual Datatype Create_indexed(int count,
const int array_of_blocklengths[],
const int array_of_displacements[]) const;
static Datatype Create_struct(int count, const int array_of_blocklengths[],
const Aint array_of_displacements[],
const Datatype array_of_types[]);
virtual Datatype Create_hindexed(int count, const int array_of_blocklengths[],
const Aint array_of_displacements[]) const;
virtual Datatype Create_hvector(int count, int blocklength, Aint stride) const;
virtual Datatype Create_indexed_block(int count, int blocklength,
const int array_of_displacements[]) const;
virtual Datatype Create_resized(const Aint lb, const Aint extent) const;
virtual int Get_size() const;
virtual void Get_extent(Aint& lb, Aint& extent) const;
virtual void Get_true_extent(Aint&, Aint&) const;
virtual void Commit();
virtual void Free();
virtual void Pack(const void* inbuf, int incount, void *outbuf,
int outsize, int& position, const Comm &comm) const;
virtual void Unpack(const void* inbuf, int insize, void *outbuf, int outcount,
int& position, const Comm& comm) const;
virtual int Pack_size(int incount, const Comm& comm) const;
virtual void Pack_external(const char* datarep, const void* inbuf, int incount,
void* outbuf, Aint outsize, Aint& position) const;
virtual Aint Pack_external_size(const char* datarep, int incount) const;
virtual void Unpack_external(const char* datarep, const void* inbuf,
Aint insize, Aint& position, void* outbuf, int outcount) const;
//
// Miscellany
//
virtual Datatype Create_subarray(int ndims, const int array_of_sizes[],
const int array_of_subsizes[],
const int array_of_starts[], int order)
const;
virtual Datatype Create_darray(int size, int rank, int ndims,
const int array_of_gsizes[], const int array_of_distribs[],
const int array_of_dargs[], const int array_of_psizes[],
int order) const;
// Language Binding
static Datatype Create_f90_complex(int p, int r);
static Datatype Create_f90_integer(int r);
static Datatype Create_f90_real(int p, int r);
static Datatype Match_size(int typeclass, int size);
//
// External Interfaces
//
virtual Datatype Dup() const;
// Need 4 overloaded versions of this function because per the
// MPI-2 spec, you can mix and match the C predefined functions with
// C++ functions.
static int Create_keyval(Copy_attr_function* type_copy_attr_fn,
Delete_attr_function* type_delete_attr_fn,
void* extra_state);
static int Create_keyval(MPI_Type_copy_attr_function* type_copy_attr_fn,
MPI_Type_delete_attr_function* type_delete_attr_fn,
void* extra_state);
static int Create_keyval(Copy_attr_function* type_copy_attr_fn,
MPI_Type_delete_attr_function* type_delete_attr_fn,
void* extra_state);
static int Create_keyval(MPI_Type_copy_attr_function* type_copy_attr_fn,
Delete_attr_function* type_delete_attr_fn,
void* extra_state);
protected:
// Back-end function to do the heavy lifting for creating the
// keyval
static int do_create_keyval(MPI_Type_copy_attr_function* c_copy_fn,
MPI_Type_delete_attr_function* c_delete_fn,
Copy_attr_function* cxx_copy_fn,
Delete_attr_function* cxx_delete_fn,
void* extra_state, int &keyval);
public:
virtual void Delete_attr(int type_keyval);
static void Free_keyval(int& type_keyval);
virtual bool Get_attr(int type_keyval, void* attribute_val) const;
virtual void Get_contents(int max_integers, int max_addresses,
int max_datatypes, int array_of_integers[],
Aint array_of_addresses[],
Datatype array_of_datatypes[]) const;
virtual void Get_envelope(int& num_integers, int& num_addresses,
int& num_datatypes, int& combiner) const;
virtual void Get_name(char* type_name, int& resultlen) const;
virtual void Set_attr(int type_keyval, const void* attribute_val);
virtual void Set_name(const char* type_name);
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
private:
PMPI::Datatype pmpi_datatype;
#else
protected:
MPI_Datatype mpi_datatype;
#endif
public:
// Data that is passed through keyval create when C++ callback
// functions are used
struct keyval_intercept_data_t {
MPI_Type_copy_attr_function *c_copy_fn;
MPI_Type_delete_attr_function *c_delete_fn;
Copy_attr_function* cxx_copy_fn;
Delete_attr_function* cxx_delete_fn;
void *extra_state;
};
// Protect the global list from multiple thread access
static opal_mutex_t cxx_extra_states_lock;
};
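For context, a minimal sketch of how a consumer typically drove the class above through these (now removed) bindings; the count/blocklength/stride values and the tag are illustrative only:

// Usage sketch (not part of this commit): build, commit, and use a
// strided datatype via the removed C++ bindings.
#include <mpi.h>

int main(int argc, char* argv[])
{
    MPI::Init(argc, argv);

    // Derived type describing 4 doubles spaced 4 elements apart.
    MPI::Datatype strided = MPI::DOUBLE.Create_vector(4, 1, 4);
    strided.Commit();

    double buf[16] = {0};
    if (MPI::COMM_WORLD.Get_size() > 1) {
        if (0 == MPI::COMM_WORLD.Get_rank()) {
            MPI::COMM_WORLD.Send(buf, 1, strided, 1, 0);
        } else if (1 == MPI::COMM_WORLD.Get_rank()) {
            MPI::COMM_WORLD.Recv(buf, 1, strided, 0, 0);
        }
    }

    strided.Free();
    MPI::Finalize();
    return 0;
}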

View file

@@ -1,418 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006-2008 Sun Microsystems, Inc. All rights reserved.
// Copyright (c) 2011 FUJITSU LIMITED. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
//
// Point-to-Point Communication
//
inline MPI::Datatype
MPI::Datatype::Create_contiguous(int count) const
{
MPI_Datatype newtype;
(void)MPI_Type_contiguous(count, mpi_datatype, &newtype);
return newtype;
}
inline MPI::Datatype
MPI::Datatype::Create_vector(int count, int blocklength,
int stride) const
{
MPI_Datatype newtype;
(void)MPI_Type_vector(count, blocklength, stride, mpi_datatype, &newtype);
return newtype;
}
inline MPI::Datatype
MPI::Datatype::Create_indexed(int count,
const int array_of_blocklengths[],
const int array_of_displacements[]) const
{
MPI_Datatype newtype;
(void)MPI_Type_indexed(count, const_cast<int *>(array_of_blocklengths),
const_cast<int *>(array_of_displacements), mpi_datatype, &newtype);
return newtype;
}
inline MPI::Datatype
MPI::Datatype::Create_struct(int count, const int array_of_blocklengths[],
const MPI::Aint array_of_displacements[],
const MPI::Datatype array_of_types[])
{
MPI_Datatype newtype;
int i;
MPI_Datatype* type_array = new MPI_Datatype[count];
for (i=0; i < count; i++)
type_array[i] = array_of_types[i];
(void)MPI_Type_create_struct(count, const_cast<int *>(array_of_blocklengths),
const_cast<MPI_Aint*>(array_of_displacements),
type_array, &newtype);
delete[] type_array;
return newtype;
}
inline MPI::Datatype
MPI::Datatype::Create_hindexed(int count, const int array_of_blocklengths[],
const MPI::Aint array_of_displacements[]) const
{
MPI_Datatype newtype;
(void)MPI_Type_create_hindexed(count, const_cast<int *>(array_of_blocklengths),
const_cast<MPI_Aint*>(array_of_displacements),
mpi_datatype, &newtype) ;
return newtype;
}
inline MPI::Datatype
MPI::Datatype::Create_hvector(int count, int blocklength,
MPI::Aint stride) const
{
MPI_Datatype newtype;
(void)MPI_Type_create_hvector(count, blocklength, (MPI_Aint)stride,
mpi_datatype, &newtype);
return newtype;
}
inline MPI::Datatype
MPI::Datatype::Create_indexed_block(int count, int blocklength,
const int array_of_displacements[]) const
{
MPI_Datatype newtype;
(void)MPI_Type_create_indexed_block(count, blocklength, const_cast<int *>(array_of_displacements),
mpi_datatype, &newtype);
return newtype;
}
inline MPI::Datatype
MPI::Datatype::Create_resized(const MPI::Aint lb, const MPI::Aint extent) const
{
MPI_Datatype newtype;
(void) MPI_Type_create_resized(mpi_datatype, lb, extent, &newtype);
return(newtype);
}
inline int
MPI::Datatype::Get_size() const
{
int size;
(void)MPI_Type_size(mpi_datatype, &size);
return size;
}
inline void
MPI::Datatype::Get_extent(MPI::Aint& lb, MPI::Aint& extent) const
{
(void)MPI_Type_get_extent(mpi_datatype, &lb, &extent);
}
inline void
MPI::Datatype::Get_true_extent(MPI::Aint& lb, MPI::Aint& extent) const
{
(void) MPI_Type_get_true_extent(mpi_datatype, &lb, &extent);
}
inline void
MPI::Datatype::Commit()
{
(void)MPI_Type_commit(&mpi_datatype);
}
inline void
MPI::Datatype::Pack(const void* inbuf, int incount,
void *outbuf, int outsize,
int& position, const MPI::Comm &comm) const
{
(void)MPI_Pack(const_cast<void *>(inbuf), incount, mpi_datatype, outbuf,
outsize, &position, comm);
}
inline void
MPI::Datatype::Unpack(const void* inbuf, int insize,
void *outbuf, int outcount, int& position,
const MPI::Comm& comm) const
{
(void)MPI_Unpack(const_cast<void *>(inbuf), insize, &position,
outbuf, outcount, mpi_datatype, comm);
}
inline int
MPI::Datatype::Pack_size(int incount, const MPI::Comm& comm) const
{
int size;
(void)MPI_Pack_size(incount, mpi_datatype, comm, &size);
return size;
}
inline void
MPI::Datatype::Pack_external(const char* datarep, const void* inbuf, int incount,
void* outbuf, MPI::Aint outsize, MPI::Aint& position) const
{
(void)MPI_Pack_external(const_cast<char *>(datarep), const_cast<void *>(inbuf),
incount, mpi_datatype, outbuf, outsize, &position);
}
inline MPI::Aint
MPI::Datatype::Pack_external_size(const char* datarep, int incount) const
{
MPI_Aint addr;
(void)MPI_Pack_external_size(const_cast<char *>(datarep), incount, mpi_datatype, &addr);
return addr;
}
inline void
MPI::Datatype::Unpack_external(const char* datarep, const void* inbuf,
MPI::Aint insize, MPI::Aint& position, void* outbuf, int outcount) const
{
(void)MPI_Unpack_external(const_cast<char *>(datarep), const_cast<void *>(inbuf),
insize, &position, outbuf, outcount, mpi_datatype);
}
//
// Miscellany
//
inline MPI::Datatype
MPI::Datatype::Create_subarray(int ndims, const int array_of_sizes[],
const int array_of_subsizes[],
const int array_of_starts[], int order)
const
{
MPI_Datatype type;
(void) MPI_Type_create_subarray(ndims, const_cast<int *>(array_of_sizes),
const_cast<int *>(array_of_subsizes),
const_cast<int *>(array_of_starts),
order, mpi_datatype, &type);
return type;
}
inline MPI::Datatype
MPI::Datatype::Create_darray(int size, int rank, int ndims,
const int array_of_gsizes[], const int array_of_distribs[],
const int array_of_dargs[], const int array_of_psizes[],
int order) const
{
MPI_Datatype type;
(void) MPI_Type_create_darray(size, rank, ndims,
const_cast<int *>(array_of_gsizes),
const_cast<int *>(array_of_distribs),
const_cast<int *>(array_of_dargs),
const_cast<int *>(array_of_psizes),
order, mpi_datatype, &type);
return type;
}
inline MPI::Datatype
MPI::Datatype::Create_f90_complex(int p, int r)
{
MPI_Datatype type;
(void) MPI_Type_create_f90_complex(p, r, &type);
return type;
}
inline MPI::Datatype
MPI::Datatype::Create_f90_integer(int r)
{
MPI_Datatype type;
(void) MPI_Type_create_f90_integer(r, &type);
return type;
}
inline MPI::Datatype
MPI::Datatype::Create_f90_real(int p, int r)
{
MPI_Datatype type;
(void) MPI_Type_create_f90_real(p, r, &type);
return type;
}
inline MPI::Datatype
MPI::Datatype::Match_size(int typeclass, int size)
{
MPI_Datatype type;
(void) MPI_Type_match_size(typeclass, size, &type);
return type;
}
//
// External Interfaces
//
inline MPI::Datatype
MPI::Datatype::Dup() const
{
MPI_Datatype type;
(void) MPI_Type_dup(mpi_datatype, &type);
return type;
}
// 1) original Create_keyval that takes the first 2 arguments as C++
// functions
inline int
MPI::Datatype::Create_keyval(MPI::Datatype::Copy_attr_function* type_copy_attr_fn,
MPI::Datatype::Delete_attr_function* type_delete_attr_fn,
void* extra_state)
{
// Back-end function does the heavy lifting
int ret, keyval;
ret = do_create_keyval(NULL, NULL,
type_copy_attr_fn, type_delete_attr_fn,
extra_state, keyval);
return (MPI_SUCCESS == ret) ? keyval : ret;
}
// 2) overload Create_keyval to take the first 2 arguments as C
// functions
inline int
MPI::Datatype::Create_keyval(MPI_Type_copy_attr_function* type_copy_attr_fn,
MPI_Type_delete_attr_function* type_delete_attr_fn,
void* extra_state)
{
// Back-end function does the heavy lifting
int ret, keyval;
ret = do_create_keyval(type_copy_attr_fn, type_delete_attr_fn,
NULL, NULL,
extra_state, keyval);
return (MPI_SUCCESS == ret) ? keyval : ret;
}
// 3) overload Create_keyval to take the first 2 arguments as C++ & C
// functions
inline int
MPI::Datatype::Create_keyval(MPI::Datatype::Copy_attr_function* type_copy_attr_fn,
MPI_Type_delete_attr_function* type_delete_attr_fn,
void* extra_state)
{
// Back-end function does the heavy lifting
int ret, keyval;
ret = do_create_keyval(NULL, type_delete_attr_fn,
type_copy_attr_fn, NULL,
extra_state, keyval);
return (MPI_SUCCESS == ret) ? keyval : ret;
}
// 4) overload Create_keyval to take the first 2 arguments as C & C++
// functions
inline int
MPI::Datatype::Create_keyval(MPI_Type_copy_attr_function* type_copy_attr_fn,
MPI::Datatype::Delete_attr_function* type_delete_attr_fn,
void* extra_state)
{
// Back-end function does the heavy lifting
int ret, keyval;
ret = do_create_keyval(type_copy_attr_fn, NULL,
NULL, type_delete_attr_fn,
extra_state, keyval);
return (MPI_SUCCESS == ret) ? keyval : ret;
}
inline void
MPI::Datatype::Delete_attr(int type_keyval)
{
(void) MPI_Type_delete_attr(mpi_datatype, type_keyval);
}
inline void
MPI::Datatype::Free_keyval(int& type_keyval)
{
(void) MPI_Type_free_keyval(&type_keyval);
}
inline bool
MPI::Datatype::Get_attr(int type_keyval,
void* attribute_val) const
{
int ret;
(void) MPI_Type_get_attr(mpi_datatype, type_keyval, attribute_val, &ret);
return OPAL_INT_TO_BOOL(ret);
}
inline void
MPI::Datatype::Get_contents(int max_integers, int max_addresses,
int max_datatypes, int array_of_integers[],
MPI::Aint array_of_addresses[],
MPI::Datatype array_of_datatypes[]) const
{
int i;
MPI_Datatype *c_datatypes = new MPI_Datatype[max_datatypes];
(void) MPI_Type_get_contents(mpi_datatype, max_integers, max_addresses,
max_datatypes,
const_cast<int *>(array_of_integers),
const_cast<MPI_Aint*>(array_of_addresses),
c_datatypes);
// Convert the C MPI_Datatypes to the user's OUT MPI::Datatype
// array parameter
for (i = 0; i < max_datatypes; ++i) {
array_of_datatypes[i] = c_datatypes[i];
}
delete[] c_datatypes;
}
inline void
MPI::Datatype::Get_envelope(int& num_integers, int& num_addresses,
int& num_datatypes, int& combiner) const
{
(void) MPI_Type_get_envelope(mpi_datatype, &num_integers, &num_addresses,
&num_datatypes, &combiner);
}
inline void
MPI::Datatype::Get_name(char* type_name, int& resultlen) const
{
(void) MPI_Type_get_name(mpi_datatype, type_name, &resultlen);
}
inline void
MPI::Datatype::Set_attr(int type_keyval, const void* attribute_val)
{
(void) MPI_Type_set_attr(mpi_datatype, type_keyval, const_cast<void *>(attribute_val));
}
inline void
MPI::Datatype::Set_name(const char* type_name)
{
(void) MPI_Type_set_name(mpi_datatype, const_cast<char *>(type_name));
}
#if 0
//
// User Defined Functions
//
typedef int MPI::Datatype::Copy_attr_function(const Datatype& oldtype,
int type_keyval,
void* extra_state,
void* attribute_val_in,
void* attribute_val_out,
bool& flag);
typedef int MPI::Datatype::Delete_attr_function(Datatype& type,
int type_keyval,
void* attribute_val,
void* extra_state);
#endif
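As an illustration of why the four Create_keyval overloads above existed, here is a sketch (callback name hypothetical) pairing the predefined C copy function with a C++ delete callback, i.e. overload (4):

// Usage sketch (not part of this commit).
#include <mpi.h>

// Hypothetical C++ callback matching MPI::Datatype::Delete_attr_function.
static int my_delete_fn(MPI::Datatype& type, int type_keyval,
                        void* attribute_val, void* extra_state)
{
    // per-attribute cleanup would go here
    return MPI_SUCCESS;
}

void keyval_demo(MPI::Datatype& dt)
{
    // Overload (4): C predefined copy function, C++ delete function.
    int keyval = MPI::Datatype::Create_keyval(MPI_TYPE_DUP_FN, my_delete_fn,
                                              NULL);
    static int value = 42;
    dt.Set_attr(keyval, &value);
    dt.Delete_attr(keyval);             // invokes my_delete_fn
    MPI::Datatype::Free_keyval(keyval);
}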

View file

@@ -1,63 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006-2008 Cisco Systems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
class Errhandler {
public:
// construction / destruction
inline Errhandler()
: mpi_errhandler(MPI_ERRHANDLER_NULL) {}
inline virtual ~Errhandler() { }
inline Errhandler(MPI_Errhandler i)
: mpi_errhandler(i) {}
// copy / assignment
inline Errhandler(const Errhandler& e) : mpi_errhandler(e.mpi_errhandler) { }
inline Errhandler& operator=(const Errhandler& e) {
mpi_errhandler = e.mpi_errhandler;
return *this;
}
// comparison
inline bool operator==(const Errhandler &a) {
return (bool)(mpi_errhandler == a.mpi_errhandler); }
inline bool operator!=(const Errhandler &a) {
return (bool)!(*this == a); }
// inter-language operability
inline Errhandler& operator= (const MPI_Errhandler &i) {
mpi_errhandler = i; return *this; }
inline operator MPI_Errhandler() const { return mpi_errhandler; }
// inline operator MPI_Errhandler*() { return &mpi_errhandler; }
//
// Errhandler access functions
//
virtual void Free();
private:
MPI_Errhandler mpi_errhandler;
};

View file

@@ -1,49 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
inline PMPI::Errhandler::Errhandler(const PMPI::Errhandler& e)
: handler_fn(e.handler_fn), mpi_errhandler(e.mpi_errhandler) { }
inline PMPI::Errhandler&
PMPI::Errhandler::operator=(const PMPI::Errhandler& e)
{
handler_fn = e.handler_fn;
mpi_errhandler = e.mpi_errhandler;
return *this;
}
inline bool
PMPI::Errhandler::operator==(const PMPI::Errhandler &a)
{
return (MPI2CPP_BOOL_T)(mpi_errhandler == a.mpi_errhandler);
}
#endif
inline void
MPI::Errhandler::Free()
{
(void)MPI_Errhandler_free(&mpi_errhandler);
}

View file

@@ -1,74 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
class Exception {
public:
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
inline Exception(int ec) : pmpi_exception(ec) { }
int Get_error_code() const;
int Get_error_class() const;
const char* Get_error_string() const;
#else
inline Exception(int ec) : error_code(ec), error_string(0), error_class(-1) {
(void)MPI_Error_class(error_code, &error_class);
int resultlen;
error_string = new char[MAX_ERROR_STRING];
(void)MPI_Error_string(error_code, error_string, &resultlen);
}
inline ~Exception() {
delete[] error_string;
}
// Better put in a copy constructor here since we have a string;
// copy by value (from the default copy constructor) would be
// disastrous.
inline Exception(const Exception& a)
: error_code(a.error_code), error_class(a.error_class)
{
error_string = new char[MAX_ERROR_STRING];
// Rather than force an include of <string.h>, especially this
// late in the game (recall that this file is included deep in
// other .h files), we'll just do the copy ourselves.
for (int i = 0; i < MAX_ERROR_STRING; i++)
error_string[i] = a.error_string[i];
}
inline int Get_error_code() const { return error_code; }
inline int Get_error_class() const { return error_class; }
inline const char* Get_error_string() const { return error_string; }
#endif
protected:
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
PMPI::Exception pmpi_exception;
#else
int error_code;
char* error_string;
int error_class;
#endif
};
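A short sketch of how this class surfaced errors, assuming the predefined MPI::ERRORS_THROW_EXCEPTIONS errhandler has been installed on the communicator (the deliberately invalid rank is illustrative only):

// Usage sketch (not part of this commit).
#include <mpi.h>
#include <iostream>

void exception_demo()
{
    MPI::COMM_WORLD.Set_errhandler(MPI::ERRORS_THROW_EXCEPTIONS);
    try {
        char c = 0;
        MPI::COMM_WORLD.Send(&c, 1, MPI::CHAR, -2 /* invalid rank */, 0);
    } catch (MPI::Exception& e) {
        std::cerr << "MPI error " << e.Get_error_code() << ": "
                  << e.Get_error_string() << std::endl;
    }
}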

View file

@@ -1,185 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2006-2016 Los Alamos National Security, LLC. All rights
// reserved.
// Copyright (c) 2007-2009 Cisco Systems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
// Do not include ompi_config.h before mpi.h because it causes
// malloc/free problems due to setting OMPI_BUILDING to 1
#include "mpi.h"
#include "ompi/constants.h"
#include "ompi/mpi/cxx/mpicxx.h"
#include "cxx_glue.h"
void
MPI::File::Close()
{
(void) MPI_File_close(&mpi_file);
}
MPI::Errhandler
MPI::File::Create_errhandler(MPI::File::Errhandler_function* function)
{
return ompi_cxx_errhandler_create_file ((ompi_cxx_dummy_fn_t *) function);
}
//
// Infrastructure for MPI_REGISTER_DATAREP
//
// Similar to what we have to do in the F77 bindings: call the C
// MPI_Register_datarep function with "intercept" callback functions
// that conform to the C bindings. In these intercepts, convert the
// arguments to C++ calling conventions, and then invoke the actual
// C++ callbacks.
// Data structure passed to the intercepts (see below). It is an OPAL
// list_item_t so that we can clean this memory up during
// MPI_FINALIZE.
// Intercept function for read conversions
static int read_intercept_fn(void *userbuf, MPI_Datatype type_c, int count_c,
void *filebuf, MPI_Offset position_c,
void *extra_state)
{
MPI::Datatype type_cxx(type_c);
MPI::Offset position_cxx(position_c);
ompi_cxx_intercept_file_extra_state_t *intercept_data =
(ompi_cxx_intercept_file_extra_state_t*) extra_state;
MPI::Datarep_conversion_function *read_fn_cxx =
(MPI::Datarep_conversion_function *) intercept_data->read_fn_cxx;
read_fn_cxx (userbuf, type_cxx, count_c, filebuf, position_cxx,
intercept_data->extra_state_cxx);
return MPI_SUCCESS;
}
// Intercept function for write conversions
static int write_intercept_fn(void *userbuf, MPI_Datatype type_c, int count_c,
void *filebuf, MPI_Offset position_c,
void *extra_state)
{
MPI::Datatype type_cxx(type_c);
MPI::Offset position_cxx(position_c);
ompi_cxx_intercept_file_extra_state_t *intercept_data =
(ompi_cxx_intercept_file_extra_state_t*) extra_state;
MPI::Datarep_conversion_function *write_fn_cxx =
(MPI::Datarep_conversion_function *) intercept_data->write_fn_cxx;
write_fn_cxx (userbuf, type_cxx, count_c, filebuf, position_cxx,
intercept_data->extra_state_cxx);
return MPI_SUCCESS;
}
// Intercept function for extent calculations
static int extent_intercept_fn(MPI_Datatype type_c, MPI_Aint *file_extent_c,
void *extra_state)
{
MPI::Datatype type_cxx(type_c);
MPI::Aint file_extent_cxx(*file_extent_c);
ompi_cxx_intercept_file_extra_state_t *intercept_data =
(ompi_cxx_intercept_file_extra_state_t*) extra_state;
MPI::Datarep_extent_function *extent_fn_cxx =
(MPI::Datarep_extent_function *) intercept_data->extent_fn_cxx;
extent_fn_cxx (type_cxx, file_extent_cxx, intercept_data->extra_state_cxx);
*file_extent_c = file_extent_cxx;
return MPI_SUCCESS;
}
// C++ bindings for MPI::Register_datarep
void
MPI::Register_datarep(const char* datarep,
Datarep_conversion_function* read_fn_cxx,
Datarep_conversion_function* write_fn_cxx,
Datarep_extent_function* extent_fn_cxx,
void* extra_state_cxx)
{
ompi_cxx_intercept_file_extra_state_t *intercept;
intercept = ompi_cxx_new_intercept_state ((void *) read_fn_cxx, (void *) write_fn_cxx,
(void *) extent_fn_cxx, extra_state_cxx);
if (NULL == intercept) {
ompi_cxx_errhandler_invoke_file (MPI_FILE_NULL, OMPI_ERR_OUT_OF_RESOURCE,
"MPI::Register_datarep");
return;
}
(void)MPI_Register_datarep (const_cast<char*>(datarep), read_intercept_fn,
write_intercept_fn, extent_intercept_fn, intercept);
}
void
MPI::Register_datarep(const char* datarep,
MPI_Datarep_conversion_function* read_fn_c,
Datarep_conversion_function* write_fn_cxx,
Datarep_extent_function* extent_fn_cxx,
void* extra_state_cxx)
{
ompi_cxx_intercept_file_extra_state_t *intercept;
intercept = ompi_cxx_new_intercept_state (NULL, (void *) write_fn_cxx, (void *) extent_fn_cxx,
extra_state_cxx);
if (NULL == intercept) {
ompi_cxx_errhandler_invoke_file (MPI_FILE_NULL, OMPI_ERR_OUT_OF_RESOURCE,
"MPI::Register_datarep");
return;
}
(void)MPI_Register_datarep (const_cast<char*>(datarep), read_fn_c, write_intercept_fn,
extent_intercept_fn, intercept);
}
void
MPI::Register_datarep(const char* datarep,
Datarep_conversion_function* read_fn_cxx,
MPI_Datarep_conversion_function* write_fn_c,
Datarep_extent_function* extent_fn_cxx,
void* extra_state_cxx)
{
ompi_cxx_intercept_file_extra_state_t *intercept;
intercept = ompi_cxx_new_intercept_state ((void *) read_fn_cxx, NULL, (void *) extent_fn_cxx,
extra_state_cxx);
if (NULL == intercept) {
ompi_cxx_errhandler_invoke_file (MPI_FILE_NULL, OMPI_ERR_OUT_OF_RESOURCE,
"MPI::Register_datarep");
return;
}
(void)MPI_Register_datarep (const_cast<char*>(datarep), read_intercept_fn, write_fn_c,
extent_intercept_fn, intercept);
}
void
MPI::Register_datarep(const char* datarep,
MPI_Datarep_conversion_function* read_fn_c,
MPI_Datarep_conversion_function* write_fn_c,
Datarep_extent_function* extent_fn_cxx,
void* extra_state_cxx)
{
ompi_cxx_intercept_file_extra_state_t *intercept;
intercept = ompi_cxx_new_intercept_state (NULL, NULL, (void *) extent_fn_cxx, extra_state_cxx);
if (NULL == intercept) {
ompi_cxx_errhandler_invoke_file (MPI_FILE_NULL, OMPI_ERR_OUT_OF_RESOURCE,
"MPI::Register_datarep");
return;
}
(void)MPI_Register_datarep (const_cast<char*>(datarep), read_fn_c, write_fn_c,
extent_intercept_fn, intercept);
}
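To make the intercept plumbing above concrete, a sketch (function names hypothetical) registering a datarep whose read, write, and extent callbacks are all C++ functions, i.e. the first overload:

// Usage sketch (not part of this commit).
#include <mpi.h>

// Hypothetical callbacks matching the MPI::Datarep_* typedefs.
static void my_read_fn(void* userbuf, MPI::Datatype& datatype, int count,
                       void* filebuf, MPI::Offset position, void* extra_state)
{
    // convert count items from filebuf at position into userbuf
}

static void my_write_fn(void* userbuf, MPI::Datatype& datatype, int count,
                        void* filebuf, MPI::Offset position, void* extra_state)
{
    // convert count items from userbuf into filebuf at position
}

static void my_extent_fn(const MPI::Datatype& datatype,
                         MPI::Aint& file_extent, void* extra_state)
{
    MPI::Aint lb, extent;
    datatype.Get_extent(lb, extent);
    file_extent = extent;            // stored extent == native extent
}

void datarep_demo()
{
    MPI::Register_datarep("demo-rep", my_read_fn, my_write_fn,
                          my_extent_fn, NULL);
}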

View file

@@ -1,317 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006-2009 Cisco Systems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
// Typedefs for C++ callbacks registered via MPI::Register_datarep
typedef void Datarep_extent_function(const Datatype& datatype,
Aint& file_extent, void* extra_state);
typedef void Datarep_conversion_function(void* userbuf, Datatype& datatype,
int count, void* filebuf,
Offset position, void* extra_state);
// Both callback functions in C++
void Register_datarep(const char* datarep,
Datarep_conversion_function* read_conversion_fn,
Datarep_conversion_function* write_conversion_fn,
Datarep_extent_function* dtype_file_extent_fn,
void* extra_state);
// Overload for C read callback function (MPI_CONVERSION_FN_NULL)
void Register_datarep(const char* datarep,
MPI_Datarep_conversion_function* read_conversion_fn,
Datarep_conversion_function* write_conversion_fn,
Datarep_extent_function* dtype_file_extent_fn,
void* extra_state);
// Overload for C write callback function (MPI_CONVERSION_FN_NULL)
void Register_datarep(const char* datarep,
Datarep_conversion_function* read_conversion_fn,
MPI_Datarep_conversion_function* write_conversion_fn,
Datarep_extent_function* dtype_file_extent_fn,
void* extra_state);
// Overload for C read and write callback functions (MPI_CONVERSION_FN_NULL)
void Register_datarep(const char* datarep,
MPI_Datarep_conversion_function* read_conversion_fn,
MPI_Datarep_conversion_function* write_conversion_fn,
Datarep_extent_function* dtype_file_extent_fn,
void* extra_state);
class File {
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// friend class P;
#endif
friend class MPI::Comm; // so it can access the pmpi_file data member in comm.cc
friend class MPI::Request; // and also from request.cc
public:
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// construction / destruction
File() { }
virtual ~File() { }
// copy / assignment
File(const File& data) : pmpi_file(data.pmpi_file) { }
File(MPI_File i) : pmpi_file(i) { }
File& operator=(const File& data) {
pmpi_file = data.pmpi_file; return *this; }
// comparison, don't need for file
// inter-language operability
File& operator= (const MPI_File &i) {
pmpi_file = i; return *this; }
operator MPI_File () const { return pmpi_file; }
// operator MPI_File* () const { return pmpi_file; }
operator const PMPI::File&() const { return pmpi_file; }
#else
File() : mpi_file(MPI_FILE_NULL) { }
// copy
File(const File& data) : mpi_file(data.mpi_file) { }
File(MPI_File i) : mpi_file(i) { }
virtual ~File() { }
File& operator=(const File& data) {
mpi_file = data.mpi_file; return *this; }
// comparison, don't need for file
// inter-language operability
File& operator= (const MPI_File &i) {
mpi_file = i; return *this; }
operator MPI_File () const { return mpi_file; }
// operator MPI_File* () const { return (MPI_File*)&mpi_file; }
#endif
// from the I/O chapter of MPI-2
void Close();
static void Delete(const char* filename, const MPI::Info& info);
int Get_amode() const;
bool Get_atomicity() const;
MPI::Offset Get_byte_offset(const MPI::Offset disp) const;
MPI::Group Get_group() const;
MPI::Info Get_info() const;
MPI::Offset Get_position() const;
MPI::Offset Get_position_shared() const;
MPI::Offset Get_size() const;
MPI::Aint Get_type_extent(const MPI::Datatype& datatype) const;
void Get_view(MPI::Offset& disp, MPI::Datatype& etype,
MPI::Datatype& filetype, char* datarep) const;
MPI::Request Iread(void* buf, int count,
const MPI::Datatype& datatype);
MPI::Request Iread_at(MPI::Offset offset, void* buf, int count,
const MPI::Datatype& datatype);
MPI::Request Iread_shared(void* buf, int count,
const MPI::Datatype& datatype);
MPI::Request Iwrite(const void* buf, int count,
const MPI::Datatype& datatype);
MPI::Request Iwrite_at(MPI::Offset offset, const void* buf,
int count, const MPI::Datatype& datatype);
MPI::Request Iwrite_shared(const void* buf, int count,
const MPI::Datatype& datatype);
static MPI::File Open(const MPI::Intracomm& comm,
const char* filename, int amode,
const MPI::Info& info);
void Preallocate(MPI::Offset size);
void Read(void* buf, int count, const MPI::Datatype& datatype);
void Read(void* buf, int count, const MPI::Datatype& datatype,
MPI::Status& status);
void Read_all(void* buf, int count, const MPI::Datatype& datatype);
void Read_all(void* buf, int count, const MPI::Datatype& datatype,
MPI::Status& status);
void Read_all_begin(void* buf, int count,
const MPI::Datatype& datatype);
void Read_all_end(void* buf);
void Read_all_end(void* buf, MPI::Status& status);
void Read_at(MPI::Offset offset,
void* buf, int count, const MPI::Datatype& datatype);
void Read_at(MPI::Offset offset, void* buf, int count,
const MPI::Datatype& datatype, MPI::Status& status);
void Read_at_all(MPI::Offset offset, void* buf, int count,
const MPI::Datatype& datatype);
void Read_at_all(MPI::Offset offset, void* buf, int count,
const MPI::Datatype& datatype, MPI::Status& status);
void Read_at_all_begin(MPI::Offset offset, void* buf, int count,
const MPI::Datatype& datatype);
void Read_at_all_end(void* buf);
void Read_at_all_end(void* buf, MPI::Status& status);
void Read_ordered(void* buf, int count,
const MPI::Datatype& datatype);
void Read_ordered(void* buf, int count,
const MPI::Datatype& datatype,
MPI::Status& status);
void Read_ordered_begin(void* buf, int count,
const MPI::Datatype& datatype);
void Read_ordered_end(void* buf);
void Read_ordered_end(void* buf, MPI::Status& status);
void Read_shared(void* buf, int count,
const MPI::Datatype& datatype);
void Read_shared(void* buf, int count,
const MPI::Datatype& datatype, MPI::Status& status);
void Seek(MPI::Offset offset, int whence);
void Seek_shared(MPI::Offset offset, int whence);
void Set_atomicity(bool flag);
void Set_info(const MPI::Info& info);
void Set_size(MPI::Offset size);
void Set_view(MPI::Offset disp, const MPI::Datatype& etype,
const MPI::Datatype& filetype, const char* datarep,
const MPI::Info& info);
void Sync();
void Write(const void* buf, int count,
const MPI::Datatype& datatype);
void Write(const void* buf, int count,
const MPI::Datatype& datatype, MPI::Status& status);
void Write_all(const void* buf, int count,
const MPI::Datatype& datatype);
void Write_all(const void* buf, int count,
const MPI::Datatype& datatype, MPI::Status& status);
void Write_all_begin(const void* buf, int count,
const MPI::Datatype& datatype);
void Write_all_end(const void* buf);
void Write_all_end(const void* buf, MPI::Status& status);
void Write_at(MPI::Offset offset, const void* buf, int count,
const MPI::Datatype& datatype);
void Write_at(MPI::Offset offset, const void* buf, int count,
const MPI::Datatype& datatype, MPI::Status& status);
void Write_at_all(MPI::Offset offset, const void* buf, int count,
const MPI::Datatype& datatype);
void Write_at_all(MPI::Offset offset, const void* buf, int count,
const MPI::Datatype& datatype,
MPI::Status& status);
void Write_at_all_begin(MPI::Offset offset, const void* buf,
int count, const MPI::Datatype& datatype);
void Write_at_all_end(const void* buf);
void Write_at_all_end(const void* buf, MPI::Status& status);
void Write_ordered(const void* buf, int count,
const MPI::Datatype& datatype);
void Write_ordered(const void* buf, int count,
const MPI::Datatype& datatype, MPI::Status& status);
void Write_ordered_begin(const void* buf, int count,
const MPI::Datatype& datatype);
void Write_ordered_end(const void* buf);
void Write_ordered_end(const void* buf, MPI::Status& status);
void Write_shared(const void* buf, int count,
const MPI::Datatype& datatype);
void Write_shared(const void* buf, int count,
const MPI::Datatype& datatype, MPI::Status& status);
//
// Errhandler
//
typedef void Errhandler_function(MPI::File &, int *, ... );
typedef Errhandler_function Errhandler_fn
__mpi_interface_deprecated__("MPI::File::Errhandler_fn was deprecated in MPI-2.2; use MPI::File::Errhandler_function instead");
static MPI::Errhandler Create_errhandler(Errhandler_function* function);
MPI::Errhandler Get_errhandler() const;
void Set_errhandler(const MPI::Errhandler& errhandler) const;
void Call_errhandler(int errorcode) const;
protected:
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
PMPI::File pmpi_file;
#else
MPI_File mpi_file;
#endif
};

View file

@@ -1,655 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2008 Cisco Systems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
inline void
MPI::File::Delete(const char* filename, const MPI::Info& info)
{
(void) MPI_File_delete(const_cast<char *>(filename), info);
}
inline int
MPI::File::Get_amode() const
{
int amode;
(void) MPI_File_get_amode(mpi_file, &amode);
return amode;
}
inline bool
MPI::File::Get_atomicity() const
{
int flag;
(void) MPI_File_get_atomicity(mpi_file, &flag);
return OPAL_INT_TO_BOOL(flag);
}
inline MPI::Offset
MPI::File::Get_byte_offset(const MPI::Offset disp) const
{
MPI_Offset offset, ldisp;
ldisp = disp;
(void) MPI_File_get_byte_offset(mpi_file, ldisp, &offset);
return offset;
}
inline MPI::Group
MPI::File::Get_group() const
{
MPI_Group group;
(void) MPI_File_get_group(mpi_file, &group);
return group;
}
inline MPI::Info
MPI::File::Get_info() const
{
MPI_Info info_used;
(void) MPI_File_get_info(mpi_file, &info_used);
return info_used;
}
inline MPI::Offset
MPI::File::Get_position() const
{
MPI_Offset offset;
(void) MPI_File_get_position(mpi_file, &offset);
return offset;
}
inline MPI::Offset
MPI::File::Get_position_shared() const
{
MPI_Offset offset;
(void) MPI_File_get_position_shared(mpi_file, &offset);
return offset;
}
inline MPI::Offset
MPI::File::Get_size() const
{
MPI_Offset offset;
(void) MPI_File_get_size(mpi_file, &offset);
return offset;
}
inline MPI::Aint
MPI::File::Get_type_extent(const MPI::Datatype& datatype) const
{
MPI_Aint extent;
(void) MPI_File_get_type_extent(mpi_file, datatype, &extent);
return extent;
}
inline void
MPI::File::Get_view(MPI::Offset& disp,
MPI::Datatype& etype,
MPI::Datatype& filetype,
char* datarep) const
{
MPI_Datatype type, ftype;
type = etype;
ftype = filetype;
MPI::Offset odisp = disp;
(void) MPI_File_get_view(mpi_file, &odisp, &type, &ftype,
datarep);
}
inline MPI::Request
MPI::File::Iread(void* buf, int count,
const MPI::Datatype& datatype)
{
MPI_Request req;
(void) MPI_File_iread(mpi_file, buf, count, datatype, &req);
return req;
}
inline MPI::Request
MPI::File::Iread_at(MPI::Offset offset, void* buf, int count,
const MPI::Datatype& datatype)
{
MPI_Request req;
(void) MPI_File_iread_at(mpi_file, offset, buf, count, datatype, &req);
return req;
}
inline MPI::Request
MPI::File::Iread_shared(void* buf, int count,
const MPI::Datatype& datatype)
{
MPI_Request req;
(void) MPI_File_iread_shared(mpi_file, buf, count, datatype, &req);
return req;
}
inline MPI::Request
MPI::File::Iwrite(const void* buf, int count,
const MPI::Datatype& datatype)
{
MPI_Request req;
(void) MPI_File_iwrite(mpi_file, const_cast<void *>(buf), count, datatype, &req);
return req;
}
inline MPI::Request
MPI::File::Iwrite_at(MPI::Offset offset, const void* buf,
int count, const MPI::Datatype& datatype)
{
MPI_Request req;
(void) MPI_File_iwrite_at(mpi_file, offset, const_cast<void *>(buf), count, datatype,
&req);
return req;
}
inline MPI::Request
MPI::File::Iwrite_shared(const void* buf, int count,
const MPI::Datatype& datatype)
{
MPI_Request req;
(void) MPI_File_iwrite_shared(mpi_file, const_cast<void *>(buf), count, datatype, &req);
return req;
}
inline MPI::File
MPI::File::Open(const MPI::Intracomm& comm,
const char* filename, int amode,
const MPI::Info& info)
{
MPI_File fh;
(void) MPI_File_open(comm, const_cast<char *>(filename), amode, info, &fh);
return fh;
}
inline void
MPI::File::Preallocate(MPI::Offset size)
{
(void) MPI_File_preallocate(mpi_file, size);
}
inline void
MPI::File::Read(void* buf, int count,
const MPI::Datatype& datatype)
{
MPI_Status status;
(void) MPI_File_read(mpi_file, buf, count, datatype, &status);
}
inline void
MPI::File::Read(void* buf, int count,
const MPI::Datatype& datatype,
MPI::Status& status)
{
(void) MPI_File_read(mpi_file, buf, count, datatype, &status.mpi_status);
}
inline void
MPI::File::Read_all(void* buf, int count,
const MPI::Datatype& datatype)
{
MPI_Status status;
(void) MPI_File_read_all(mpi_file, buf, count, datatype, &status);
}
inline void
MPI::File::Read_all(void* buf, int count,
const MPI::Datatype& datatype,
MPI::Status& status)
{
(void) MPI_File_read_all(mpi_file, buf, count, datatype, &status.mpi_status);
}
inline void
MPI::File::Read_all_begin(void* buf, int count,
const MPI::Datatype& datatype)
{
(void) MPI_File_read_all_begin(mpi_file, buf, count, datatype);
}
inline void
MPI::File::Read_all_end(void* buf)
{
MPI_Status status;
(void) MPI_File_read_all_end(mpi_file, buf, &status);
}
inline void
MPI::File::Read_all_end(void* buf, MPI::Status& status)
{
(void) MPI_File_read_all_end(mpi_file, buf, &status.mpi_status);
}
inline void
MPI::File::Read_at(MPI::Offset offset,
void* buf, int count,
const MPI::Datatype& datatype)
{
MPI_Status status;
(void) MPI_File_read_at(mpi_file, offset, buf, count, datatype, &status);
}
inline void
MPI::File::Read_at(MPI::Offset offset, void* buf, int count,
const MPI::Datatype& datatype,
MPI::Status& status)
{
(void) MPI_File_read_at(mpi_file, offset, buf, count, datatype,
&status.mpi_status);
}
inline void
MPI::File::Read_at_all(MPI::Offset offset, void* buf, int count,
const MPI::Datatype& datatype)
{
MPI_Status status;
(void) MPI_File_read_at_all(mpi_file, offset, buf, count, datatype, &status);
}
inline void
MPI::File::Read_at_all(MPI::Offset offset, void* buf, int count,
const MPI::Datatype& datatype,
MPI::Status& status)
{
(void) MPI_File_read_at_all(mpi_file, offset, buf, count, datatype,
&status.mpi_status);
}
inline void
MPI::File::Read_at_all_begin(MPI::Offset offset,
void* buf, int count,
const MPI::Datatype& datatype)
{
(void) MPI_File_read_at_all_begin(mpi_file, offset, buf, count, datatype);
}
inline void
MPI::File::Read_at_all_end(void* buf)
{
MPI_Status status;
(void) MPI_File_read_at_all_end(mpi_file, buf, &status);
}
inline void
MPI::File::Read_at_all_end(void* buf, MPI::Status& status)
{
(void) MPI_File_read_at_all_end(mpi_file, buf, &status.mpi_status);
}
inline void
MPI::File::Read_ordered(void* buf, int count,
const MPI::Datatype& datatype)
{
MPI_Status status;
(void) MPI_File_read_ordered(mpi_file, buf, count, datatype, &status);
}
inline void
MPI::File::Read_ordered(void* buf, int count,
const MPI::Datatype& datatype,
MPI::Status& status)
{
(void) MPI_File_read_ordered(mpi_file, buf, count, datatype,
&status.mpi_status);
}
inline void
MPI::File::Read_ordered_begin(void* buf, int count,
const MPI::Datatype& datatype)
{
(void) MPI_File_read_ordered_begin(mpi_file, buf, count, datatype);
}
inline void
MPI::File::Read_ordered_end(void* buf)
{
MPI_Status status;
(void) MPI_File_read_ordered_end(mpi_file, buf, &status);
}
inline void
MPI::File::Read_ordered_end(void* buf, MPI::Status& status)
{
(void) MPI_File_read_ordered_end(mpi_file, buf, &status.mpi_status);
}
inline void
MPI::File::Read_shared(void* buf, int count,
const MPI::Datatype& datatype)
{
MPI_Status status;
(void) MPI_File_read_shared(mpi_file, buf, count, datatype, &status);
}
inline void
MPI::File::Read_shared(void* buf, int count,
const MPI::Datatype& datatype,
MPI::Status& status)
{
(void) MPI_File_read_shared(mpi_file, buf, count, datatype,
&status.mpi_status);
}
inline void
MPI::File::Seek(MPI::Offset offset, int whence)
{
(void) MPI_File_seek(mpi_file, offset, whence);
}
inline void
MPI::File::Seek_shared(MPI::Offset offset, int whence)
{
(void) MPI_File_seek_shared(mpi_file, offset, whence);
}
inline void
MPI::File::Set_atomicity(bool flag)
{
(void) MPI_File_set_atomicity(mpi_file, flag);
}
inline void
MPI::File::Set_info(const MPI::Info& info)
{
(void) MPI_File_set_info(mpi_file, info);
}
inline void
MPI::File::Set_size(MPI::Offset size)
{
(void) MPI_File_set_size(mpi_file, size);
}
inline void
MPI::File::Set_view(MPI::Offset disp,
const MPI::Datatype& etype,
const MPI::Datatype& filetype,
const char* datarep,
const MPI::Info& info)
{
(void) MPI_File_set_view(mpi_file, disp, etype, filetype, const_cast<char *>(datarep),
info);
}
inline void
MPI::File::Sync()
{
(void) MPI_File_sync(mpi_file);
}
inline void
MPI::File::Write(const void* buf, int count,
const MPI::Datatype& datatype)
{
MPI_Status status;
(void) MPI_File_write(mpi_file, const_cast<void *>(buf), count, datatype, &status);
}
inline void
MPI::File::Write(const void* buf, int count,
const MPI::Datatype& datatype,
MPI::Status& status)
{
(void) MPI_File_write(mpi_file, const_cast<void *>(buf), count, datatype,
&status.mpi_status);
}
inline void
MPI::File::Write_all(const void* buf, int count,
const MPI::Datatype& datatype)
{
MPI_Status status;
(void) MPI_File_write_all(mpi_file, const_cast<void *>(buf), count, datatype, &status);
}
inline void
MPI::File::Write_all(const void* buf, int count,
const MPI::Datatype& datatype,
MPI::Status& status)
{
(void) MPI_File_write_all(mpi_file, const_cast<void *>(buf), count, datatype,
&status.mpi_status);
}
inline void
MPI::File::Write_all_begin(const void* buf, int count,
const MPI::Datatype& datatype)
{
(void) MPI_File_write_all_begin(mpi_file, const_cast<void *>(buf), count, datatype);
}
inline void
MPI::File::Write_all_end(const void* buf)
{
MPI_Status status;
(void) MPI_File_write_all_end(mpi_file, const_cast<void *>(buf), &status);
}
inline void
MPI::File::Write_all_end(const void* buf, MPI::Status& status)
{
(void) MPI_File_write_all_end(mpi_file, const_cast<void *>(buf), &status.mpi_status);
}
inline void
MPI::File::Write_at(MPI::Offset offset,
const void* buf, int count,
const MPI::Datatype& datatype)
{
MPI_Status status;
(void) MPI_File_write_at(mpi_file, offset, const_cast<void *>(buf), count,
datatype, &status);
}
inline void
MPI::File::Write_at(MPI::Offset offset,
const void* buf, int count,
const MPI::Datatype& datatype,
MPI::Status& status)
{
(void) MPI_File_write_at(mpi_file, offset, const_cast<void *>(buf), count,
datatype, &status.mpi_status);
}
inline void
MPI::File::Write_at_all(MPI::Offset offset,
const void* buf, int count,
const MPI::Datatype& datatype)
{
MPI_Status status;
(void) MPI_File_write_at_all(mpi_file, offset, const_cast<void *>(buf), count,
datatype, &status);
}
inline void
MPI::File::Write_at_all(MPI::Offset offset,
const void* buf, int count,
const MPI::Datatype& datatype,
MPI::Status& status)
{
(void) MPI_File_write_at_all(mpi_file, offset, const_cast<void *>(buf), count,
datatype, &status.mpi_status);
}
inline void
MPI::File::Write_at_all_begin(MPI::Offset offset,
const void* buf, int count,
const MPI::Datatype& datatype)
{
(void) MPI_File_write_at_all_begin(mpi_file, offset, const_cast<void *>(buf), count,
datatype);
}
inline void
MPI::File::Write_at_all_end(const void* buf)
{
MPI_Status status;
(void) MPI_File_write_at_all_end(mpi_file, const_cast<void *>(buf), &status);
}
inline void
MPI::File::Write_at_all_end(const void* buf, MPI::Status& status)
{
(void) MPI_File_write_at_all_end(mpi_file, const_cast<void *>(buf), &status.mpi_status);
}
inline void
MPI::File::Write_ordered(const void* buf, int count,
const MPI::Datatype& datatype)
{
MPI_Status status;
(void) MPI_File_write_ordered(mpi_file, const_cast<void *>(buf), count, datatype,
&status);
}
inline void
MPI::File::Write_ordered(const void* buf, int count,
const MPI::Datatype& datatype,
MPI::Status& status)
{
(void) MPI_File_write_ordered(mpi_file, const_cast<void *>(buf), count, datatype,
&status.mpi_status);
}
inline void
MPI::File::Write_ordered_begin(const void* buf, int count,
const MPI::Datatype& datatype)
{
(void) MPI_File_write_ordered_begin(mpi_file, const_cast<void *>(buf), count, datatype);
}
inline void
MPI::File::Write_ordered_end(const void* buf)
{
MPI_Status status;
(void) MPI_File_write_ordered_end(mpi_file, const_cast<void *>(buf), &status);
}
inline void
MPI::File::Write_ordered_end(const void* buf,
MPI::Status& status)
{
(void) MPI_File_write_ordered_end(mpi_file, const_cast<void *>(buf), &status.mpi_status);
}
inline void
MPI::File::Write_shared(const void* buf, int count,
const MPI::Datatype& datatype)
{
MPI_Status status;
(void) MPI_File_write_shared(mpi_file, const_cast<void *>(buf), count,
datatype, &status);
}
inline void
MPI::File::Write_shared(const void* buf, int count,
const MPI::Datatype& datatype, MPI::Status& status)
{
(void) MPI_File_write_shared(mpi_file, const_cast<void *>(buf), count,
datatype, &status.mpi_status);
}
inline void
MPI::File::Set_errhandler(const MPI::Errhandler& errhandler) const
{
(void)MPI_File_set_errhandler(mpi_file, errhandler);
}
inline MPI::Errhandler
MPI::File::Get_errhandler() const
{
MPI_Errhandler errhandler;
MPI_File_get_errhandler(mpi_file, &errhandler);
return errhandler;
}
inline void
MPI::File::Call_errhandler(int errorcode) const
{
(void) MPI_File_call_errhandler(mpi_file, errorcode);
}
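A minimal end-to-end sketch of the File interface implemented above: each rank writes its own integer to a shared file and reads it back (file name illustrative):

// Usage sketch (not part of this commit).
#include <mpi.h>

void file_demo()
{
    int rank = MPI::COMM_WORLD.Get_rank();
    MPI::File fh = MPI::File::Open(MPI::COMM_WORLD, "demo.out",
                                   MPI::MODE_CREATE | MPI::MODE_RDWR,
                                   MPI::INFO_NULL);
    int value = rank;
    fh.Write_at(rank * (MPI::Offset) sizeof(int), &value, 1, MPI::INT);
    fh.Sync();
    fh.Read_at(rank * (MPI::Offset) sizeof(int), &value, 1, MPI::INT);
    fh.Close();
}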

View file

@@ -1,156 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2008 Cisco Systems, Inc. All rights reserved.
// Copyright (c) 2011 FUJITSU LIMITED. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
//
// Point-to-Point Communication
//
void
Attach_buffer(void* buffer, int size);
int
Detach_buffer(void*& buffer);
//
// Process Topologies
//
void
Compute_dims(int nnodes, int ndims, int dims[]);
//
// Environmental Inquiry
//
int
Add_error_class();
int
Add_error_code(int errorclass);
void
Add_error_string(int errorcode, const char* string);
void
Get_processor_name(char* name, int& resultlen);
void
Get_error_string(int errorcode, char* string, int& resultlen);
int
Get_error_class(int errorcode);
double
Wtime();
double
Wtick();
void
Init(int& argc, char**& argv);
void
Init();
OMPI_DECLSPEC void
InitializeIntercepts();
void
Real_init();
void
Finalize();
bool
Is_initialized();
bool
Is_finalized();
//
// External Interfaces
//
int
Init_thread(int &argc, char**&argv, int required);
int
Init_thread(int required);
bool
Is_thread_main();
int
Query_thread();
//
// Miscellany
//
void*
Alloc_mem(Aint size, const Info& info);
void
Free_mem(void* base);
//
// Process Creation
//
void
Close_port(const char* port_name);
void
Lookup_name(const char* service_name, const Info& info, char* port_name);
void
Open_port(const Info& info, char* port_name);
void
Publish_name(const char* service_name, const Info& info,
const char* port_name);
void
Unpublish_name(const char* service_name, const Info& info,
const char* port_name);
//
// Profiling
//
void
Pcontrol(const int level, ...);
void
Get_version(int& version, int& subversion);
MPI::Aint
Get_address(void* location);

View file

@@ -1,295 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2008 Cisco Systems, Inc. All rights reserved.
// Copyright (c) 2011 FUJITSU LIMITED. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
#include <string.h>
//
// Point-to-Point Communication
//
inline void
MPI::Attach_buffer(void* buffer, int size)
{
(void)MPI_Buffer_attach(buffer, size);
}
inline int
MPI::Detach_buffer(void*& buffer)
{
int size;
(void)MPI_Buffer_detach(&buffer, &size);
return size;
}
//
// Process Topologies
//
inline void
MPI::Compute_dims(int nnodes, int ndims, int dims[])
{
(void)MPI_Dims_create(nnodes, ndims, dims);
}
//
// Environmental Inquiry
//
inline int
MPI::Add_error_class()
{
int errcls;
(void)MPI_Add_error_class(&errcls);
return errcls;
}
inline int
MPI::Add_error_code(int errorclass)
{
int errcode;
(void)MPI_Add_error_code(errorclass, &errcode);
return errcode;
}
inline void
MPI::Add_error_string(int errorcode, const char* string)
{
(void)MPI_Add_error_string(errorcode, const_cast<char *>(string));
}
inline void
MPI::Get_processor_name(char* name, int& resultlen)
{
(void)MPI_Get_processor_name(name, &resultlen);
}
inline void
MPI::Get_error_string(int errorcode, char* string, int& resultlen)
{
(void)MPI_Error_string(errorcode, string, &resultlen);
}
inline int
MPI::Get_error_class(int errorcode)
{
int errorclass;
(void)MPI_Error_class(errorcode, &errorclass);
return errorclass;
}
inline double
MPI::Wtime()
{
return (MPI_Wtime());
}
inline double
MPI::Wtick()
{
return (MPI_Wtick());
}
inline void
MPI::Real_init()
{
MPI::InitializeIntercepts();
}
inline void
MPI::Init(int& argc, char**& argv)
{
(void)MPI_Init(&argc, &argv);
Real_init();
}
inline void
MPI::Init()
{
(void)MPI_Init(0, 0);
Real_init();
}
inline void
MPI::Finalize()
{
(void)MPI_Finalize();
}
inline bool
MPI::Is_initialized()
{
int t;
(void)MPI_Initialized(&t);
return OPAL_INT_TO_BOOL(t);
}
inline bool
MPI::Is_finalized()
{
int t;
(void)MPI_Finalized(&t);
return OPAL_INT_TO_BOOL(t);
}
//
// External Interfaces
//
inline int
MPI::Init_thread(int required)
{
int provided;
(void) MPI_Init_thread(0, NULL, required, &provided);
Real_init();
return provided;
}
inline int
MPI::Init_thread(int& argc, char**& argv, int required)
{
int provided;
(void) MPI_Init_thread(&argc, &argv, required, &provided);
Real_init();
return provided;
}
inline bool
MPI::Is_thread_main()
{
int flag;
(void) MPI_Is_thread_main(&flag);
return OPAL_INT_TO_BOOL(flag == 1);
}
inline int
MPI::Query_thread()
{
int provided;
(void) MPI_Query_thread(&provided);
return provided;
}
//
// Miscellany
//
inline void*
MPI::Alloc_mem(MPI::Aint size, const MPI::Info& info)
{
void* baseptr;
(void) MPI_Alloc_mem(size, info, &baseptr);
return baseptr;
}
inline void
MPI::Free_mem(void* base)
{
(void) MPI_Free_mem(base);
}
//
// Process Creation
//
inline void
MPI::Close_port(const char* port_name)
{
(void) MPI_Close_port(const_cast<char *>(port_name));
}
inline void
MPI::Lookup_name(const char * service_name,
const MPI::Info& info,
char* port_name)
{
(void) MPI_Lookup_name(const_cast<char *>(service_name), info, port_name);
}
inline void
MPI::Open_port(const MPI::Info& info, char* port_name)
{
(void) MPI_Open_port(info, port_name);
}
inline void
MPI::Publish_name(const char* service_name,
const MPI::Info& info,
const char* port_name)
{
(void) MPI_Publish_name(const_cast<char *>(service_name), info,
const_cast<char *>(port_name));
}
inline void
MPI::Unpublish_name(const char* service_name,
const MPI::Info& info,
const char* port_name)
{
(void)MPI_Unpublish_name(const_cast<char *>(service_name), info,
const_cast<char *>(port_name));
}
//
// Profiling
//
inline void
MPI::Pcontrol(const int level, ...)
{
va_list ap;
va_start(ap, level);
(void)MPI_Pcontrol(level, ap);
va_end(ap);
}
inline void
MPI::Get_version(int& version, int& subversion)
{
(void)MPI_Get_version(&version, &subversion);
}
inline MPI::Aint
MPI::Get_address(void* location)
{
MPI::Aint ret;
MPI_Get_address(location, &ret);
return ret;
}
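The environment-management calls implemented above, driven from a trivial main() for reference:

// Usage sketch (not part of this commit).
#include <mpi.h>
#include <iostream>

int main(int argc, char* argv[])
{
    MPI::Init(argc, argv);

    int version, subversion;
    MPI::Get_version(version, subversion);

    char name[MPI_MAX_PROCESSOR_NAME];
    int resultlen;
    MPI::Get_processor_name(name, resultlen);

    std::cout << name << " implements MPI " << version << "."
              << subversion << std::endl;

    MPI::Finalize();
    return 0;
}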

View file

@@ -1,124 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006 Cisco Systems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
class Group {
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// friend class PMPI::Group;
#endif
public:
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// construction
inline Group() { }
inline Group(MPI_Group i) : pmpi_group(i) { }
// copy
inline Group(const Group& g) : pmpi_group(g.pmpi_group) { }
inline Group(const PMPI::Group& g) : pmpi_group(g) { }
inline virtual ~Group() {}
Group& operator=(const Group& g) {
pmpi_group = g.pmpi_group; return *this;
}
// comparison
inline bool operator== (const Group &a) {
return (bool)(pmpi_group == a.pmpi_group);
}
inline bool operator!= (const Group &a) {
return (bool)!(*this == a);
}
// inter-language operability
Group& operator= (const MPI_Group &i) { pmpi_group = i; return *this; }
inline operator MPI_Group () const { return pmpi_group.mpi(); }
// inline operator MPI_Group* () const { return pmpi_group; }
inline operator const PMPI::Group&() const { return pmpi_group; }
const PMPI::Group& pmpi() { return pmpi_group; }
#else
// construction
inline Group() : mpi_group(MPI_GROUP_NULL) { }
inline Group(MPI_Group i) : mpi_group(i) { }
// copy
inline Group(const Group& g) : mpi_group(g.mpi_group) { }
inline virtual ~Group() {}
inline Group& operator=(const Group& g) { mpi_group = g.mpi_group; return *this; }
// comparison
inline bool operator== (const Group &a) { return (bool)(mpi_group == a.mpi_group); }
inline bool operator!= (const Group &a) { return (bool)!(*this == a); }
// inter-language operability
inline Group& operator= (const MPI_Group &i) { mpi_group = i; return *this; }
inline operator MPI_Group () const { return mpi_group; }
// inline operator MPI_Group* () const { return (MPI_Group*)&mpi_group; }
inline MPI_Group mpi() const { return mpi_group; }
#endif
//
// Groups, Contexts, and Communicators
//
virtual int Get_size() const;
virtual int Get_rank() const;
static void Translate_ranks (const Group& group1, int n, const int ranks1[],
const Group& group2, int ranks2[]);
static int Compare(const Group& group1, const Group& group2);
static Group Union(const Group &group1, const Group &group2);
static Group Intersect(const Group &group1, const Group &group2);
static Group Difference(const Group &group1, const Group &group2);
virtual Group Incl(int n, const int ranks[]) const;
virtual Group Excl(int n, const int ranks[]) const;
virtual Group Range_incl(int n, const int ranges[][3]) const;
virtual Group Range_excl(int n, const int ranges[][3]) const;
virtual void Free();
protected:
#if ! 0 /* OMPI_ENABLE_MPI_PROFILING */
MPI_Group mpi_group;
#endif
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
private:
PMPI::Group pmpi_group;
#endif
};

View file

@@ -1,129 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2016 Cisco Systems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
//
// Groups, Contexts, and Communicators
//
inline int
MPI::Group::Get_size() const
{
int size;
(void)MPI_Group_size(mpi_group, &size);
return size;
}
inline int
MPI::Group::Get_rank() const
{
int myrank;
(void)MPI_Group_rank(mpi_group, &myrank);
return myrank;
}
inline void
MPI::Group::Translate_ranks (const MPI::Group& group1, int n,
const int ranks1[],
const MPI::Group& group2, int ranks2[])
{
(void)MPI_Group_translate_ranks(group1, n, const_cast<int *>(ranks1), group2, const_cast<int *>(ranks2));
}
inline int
MPI::Group::Compare(const MPI::Group& group1, const MPI::Group& group2)
{
int result;
(void)MPI_Group_compare(group1, group2, &result);
return result;
}
inline MPI::Group
MPI::Group::Union(const MPI::Group &group1, const MPI::Group &group2)
{
MPI_Group newgroup;
(void)MPI_Group_union(group1, group2, &newgroup);
return newgroup;
}
inline MPI::Group
MPI::Group::Intersect(const MPI::Group &group1, const MPI::Group &group2)
{
MPI_Group newgroup;
(void)MPI_Group_intersection( group1, group2, &newgroup);
return newgroup;
}
inline MPI::Group
MPI::Group::Difference(const MPI::Group &group1, const MPI::Group &group2)
{
MPI_Group newgroup;
(void)MPI_Group_difference(group1, group2, &newgroup);
return newgroup;
}
inline MPI::Group
MPI::Group::Incl(int n, const int ranks[]) const
{
MPI_Group newgroup;
(void)MPI_Group_incl(mpi_group, n, const_cast<int *>(ranks), &newgroup);
return newgroup;
}
inline MPI::Group
MPI::Group::Excl(int n, const int ranks[]) const
{
MPI_Group newgroup;
(void)MPI_Group_excl(mpi_group, n, const_cast<int *>(ranks), &newgroup);
return newgroup;
}
inline MPI::Group
MPI::Group::Range_incl(int n, const int ranges[][3]) const
{
MPI_Group newgroup;
(void)MPI_Group_range_incl(mpi_group, n,
#if OMPI_CXX_SUPPORTS_2D_CONST_CAST
const_cast<int(*)[3]>(ranges),
#else
(int(*)[3]) ranges,
#endif
&newgroup);
return newgroup;
}
inline MPI::Group
MPI::Group::Range_excl(int n, const int ranges[][3]) const
{
MPI_Group newgroup;
(void)MPI_Group_range_excl(mpi_group, n,
#if OMPI_CXX_SUPPORTS_2D_CONST_CAST
const_cast<int(*)[3]>(ranges),
#else
(int(*)[3]) ranges,
#endif
&newgroup);
return newgroup;
}
inline void
MPI::Group::Free()
{
(void)MPI_Group_free(&mpi_group);
}
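
Every method above maps one-to-one onto an MPI_Group_* call, so a hypothetical user of the removed class migrates line by line; a hedged sketch:

    /* Hedged sketch: MPI::Group usage rewritten against the C API. */
    #include <mpi.h>

    void group_demo(MPI_Comm comm)
    {
        MPI_Group world, pair;
        int rank, size;

        MPI_Comm_group(comm, &world);           /* obtain a group handle   */
        MPI_Group_rank(world, &rank);           /* was: group.Get_rank()   */
        MPI_Group_size(world, &size);           /* was: group.Get_size()   */

        int ranks[2] = { 0, 1 };
        MPI_Group_incl(world, 2, ranks, &pair); /* was: group.Incl(2, ...) */

        MPI_Group_free(&pair);                  /* was: group.Free()       */
        MPI_Group_free(&world);
    }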

View file

@@ -1,103 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006-2008 Cisco Systems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
class Info {
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// friend class PMPI::Info;
#endif
friend class MPI::Comm; //so I can access pmpi_info data member in comm.cc
friend class MPI::Request; //and also from request.cc
public:
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// construction / destruction
Info() { }
virtual ~Info() {}
// copy / assignment
Info(const Info& data) : pmpi_info(data.pmpi_info) { }
Info(MPI_Info i) : pmpi_info(i) { }
Info& operator=(const Info& data) {
pmpi_info = data.pmpi_info; return *this; }
// comparison, don't need for info
// inter-language operability
Info& operator= (const MPI_Info &i) {
pmpi_info = i; return *this; }
operator MPI_Info () const { return pmpi_info; }
// operator MPI_Info* () const { return pmpi_info; }
operator const PMPI::Info&() const { return pmpi_info; }
#else
Info() : mpi_info(MPI_INFO_NULL) { }
// copy
Info(const Info& data) : mpi_info(data.mpi_info) { }
Info(MPI_Info i) : mpi_info(i) { }
virtual ~Info() {}
Info& operator=(const Info& data) {
mpi_info = data.mpi_info; return *this; }
// comparison, don't need for info
// inter-language operability
Info& operator= (const MPI_Info &i) {
mpi_info = i; return *this; }
operator MPI_Info () const { return mpi_info; }
// operator MPI_Info* () const { return (MPI_Info*)&mpi_info; }
#endif
static Info Create();
virtual void Delete(const char* key);
virtual Info Dup() const;
virtual void Free();
virtual bool Get(const char* key, int valuelen, char* value) const;
virtual int Get_nkeys() const;
virtual void Get_nthkey(int n, char* key) const;
virtual bool Get_valuelen(const char* key, int& valuelen) const;
virtual void Set(const char* key, const char* value);
protected:
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
PMPI::Info pmpi_info;
#else
MPI_Info mpi_info;
#endif
};

View file

@@ -1,83 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
inline MPI::Info
MPI::Info::Create()
{
MPI_Info newinfo;
(void) MPI_Info_create(&newinfo);
return newinfo;
}
inline void
MPI::Info::Delete(const char* key)
{
(void)MPI_Info_delete(mpi_info, const_cast<char *>(key));
}
inline MPI::Info
MPI::Info::Dup() const
{
MPI_Info newinfo;
(void)MPI_Info_dup(mpi_info, &newinfo);
return newinfo;
}
inline void
MPI::Info::Free()
{
(void) MPI_Info_free(&mpi_info);
}
inline bool
MPI::Info::Get(const char* key, int valuelen, char* value) const
{
int flag;
(void)MPI_Info_get(mpi_info, const_cast<char *>(key), valuelen, value, &flag);
return OPAL_INT_TO_BOOL(flag);
}
inline int
MPI::Info::Get_nkeys() const
{
int nkeys;
MPI_Info_get_nkeys(mpi_info, &nkeys);
return nkeys;
}
inline void
MPI::Info::Get_nthkey(int n, char* key) const
{
(void) MPI_Info_get_nthkey(mpi_info, n, key);
}
inline bool
MPI::Info::Get_valuelen(const char* key, int& valuelen) const
{
int flag;
(void) MPI_Info_get_valuelen(mpi_info, const_cast<char *>(key), &valuelen, &flag);
return OPAL_INT_TO_BOOL(flag);
}
inline void
MPI::Info::Set(const char* key, const char* value)
{
(void) MPI_Info_set(mpi_info, const_cast<char *>(key), const_cast<char *>(value));
}
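
The only semantic sugar these wrappers add is converting C's int output flag into a bool return value (via OPAL_INT_TO_BOOL). A hedged sketch of what a hypothetical caller of the removed Get() writes against the C API:

    /* Hedged sketch: the bool return becomes an explicit int flag in C. */
    #include <mpi.h>

    int lookup(MPI_Info info)
    {
        char value[MPI_MAX_INFO_VAL + 1];
        int flag;

        /* was: bool found = info.Get("key", MPI_MAX_INFO_VAL, value); */
        MPI_Info_get(info, "key", MPI_MAX_INFO_VAL, value, &flag);
        return flag;    /* non-zero means the key was present */
    }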

View file

@@ -1,511 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006-2009 Cisco Systems, Inc. All rights reserved.
// Copyright (c) 2009 Sun Microsystems, Inc. All rights reserved.
// Copyright (c) 2016 Los Alamos National Security, LLC. All rights
// reserved.
// Copyright (c) 2017 Research Organization for Information Science
// and Technology (RIST). All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
#include "mpicxx.h"
#include <cstdio>
#include "ompi_config.h"
#include "cxx_glue.h"
extern "C"
void ompi_mpi_cxx_throw_exception(int *errcode)
{
#if OMPI_HAVE_CXX_EXCEPTION_SUPPORT
throw(MPI::Exception(*errcode));
#else
// Ick. This is really ugly, but necessary if someone uses a C compiler
// and -lmpi++ (which can legally happen in the LAM MPI implementation,
// and probably in MPICH and others who include -lmpi++ by default in their
// wrapper compilers)
fprintf(stderr, "MPI 2 C++ exception throwing is disabled, MPI::mpi_errno has the error code\n");
MPI::mpi_errno = *errcode;
#endif
}
extern "C"
void ompi_mpi_cxx_comm_throw_excptn_fctn(MPI_Comm *, int *errcode, ...)
{
/* Portland compiler raises a warning if va_start is not used in a
* variable argument function */
va_list ap;
va_start(ap, errcode);
ompi_mpi_cxx_throw_exception(errcode);
va_end(ap);
}
extern "C"
void ompi_mpi_cxx_file_throw_excptn_fctn(MPI_File *, int *errcode, ...)
{
va_list ap;
va_start(ap, errcode);
ompi_mpi_cxx_throw_exception(errcode);
va_end(ap);
}
extern "C"
void ompi_mpi_cxx_win_throw_excptn_fctn(MPI_Win *, int *errcode, ...)
{
va_list ap;
va_start(ap, errcode);
ompi_mpi_cxx_throw_exception(errcode);
va_end(ap);
}
void
MPI::InitializeIntercepts()
{
ompi_cxx_errhandler_set_callbacks ((struct ompi_errhandler_t *) &ompi_mpi_errors_throw_exceptions,
ompi_mpi_cxx_comm_throw_excptn_fctn,
ompi_mpi_cxx_file_throw_excptn_fctn,
ompi_mpi_cxx_win_throw_excptn_fctn);
}
// This function uses OMPI types, and is invoked with C linkage for
// the express purpose of having a C++ entity call back the C++
// function (so that types can be converted, etc.).
extern "C"
void ompi_mpi_cxx_comm_errhandler_invoke(MPI_Comm *c_comm, int *err,
const char *message, void *comm_fn)
{
// MPI::Comm is an abstract base class; can't instantiate one of
// those. So fake it by instantiating an MPI::Intracomm and then
// casting it down to an (MPI::Comm&) when invoking the callback.
MPI::Intracomm cxx_comm(*c_comm);
MPI::Comm::Errhandler_function *cxx_fn =
(MPI::Comm::Errhandler_function*) comm_fn;
cxx_fn((MPI::Comm&) cxx_comm, err, message);
}
// This function uses OMPI types, and is invoked with C linkage for
// the express purpose of having a C++ entity call back the C++
// function (so that types can be converted, etc.).
extern "C"
void ompi_mpi_cxx_file_errhandler_invoke(MPI_File *c_file, int *err,
const char *message, void *file_fn)
{
MPI::File cxx_file(*c_file);
MPI::File::Errhandler_function *cxx_fn =
(MPI::File::Errhandler_function*) file_fn;
cxx_fn(cxx_file, err, message);
}
// This function uses OMPI types, and is invoked with C linkage for
// the express purpose of having a C++ entity call back the C++
// function (so that types can be converted, etc.).
extern "C"
void ompi_mpi_cxx_win_errhandler_invoke(MPI_Win *c_win, int *err,
const char *message, void *win_fn)
{
MPI::Win cxx_win(*c_win);
MPI::Win::Errhandler_function *cxx_fn =
(MPI::Win::Errhandler_function*) win_fn;
cxx_fn(cxx_win, err, message);
}
// This is a bit weird; bear with me. The user-supplied function for
// MPI::Op contains a C++ object reference. So it must be called from
// a C++-compiled function. However, libmpi does not contain any C++
// code because there are portability and bootstrapping issues
// involved if someone tries to make a 100% C application link against
// a libmpi that contains C++ code. At a minimum, the user will have
// to use the C++ compiler to link. LA-MPI has shown that users don't
// want to do this (there are other problems, but this one is easy to
// cite).
//
// Hence, there are two problems when trying to invoke the user's
// callback function from an MPI::Op:
//
// 1. The MPI_Datatype that the C library has must be converted to an
// (MPI::Datatype)
// 2. The C++ callback function must then be called with a
// (MPI::Datatype&)
//
// Some relevant facts for the discussion:
//
// - The main engine for invoking Op callback functions is in libmpi
// (i.e., in C code).
//
// - The C++ bindings are a thin layer on top of the C bindings.
//
// - The C++ bindings are a separate library from the C bindings
// (libmpi_cxx.la).
//
// - As a direct result, the mpiCC wrapper compiler must generate a
// link order thus: "... -lmpi_cxx -lmpi ...", meaning that we cannot
// have a direct function call from the libmpi to libmpi_cxx. We can
// only do it by function pointer.
//
// So the problem remains -- how to invoke a C++ MPI::Op callback
// function (which only occurs for user-defined datatypes, BTW) from
// within the C Op callback engine in libmpi?
//
// It is easy to cache a function pointer to the
// ompi_mpi_cxx_op_intercept() function on the MPI_Op (that is located
// in the libmpi_cxx library, and is therefore compiled with a C++
// compiler). But the normal C callback MPI_User_function type
// signature is (void*, void*, int*, MPI_Datatype*) -- so if
// ompi_mpi_cxx_op_intercept() is invoked with these arguments, it has
// no way to deduce what the user-specified callback function is that
// is associated with the MPI::Op.
//
// One can easily imagine a scenario of caching the callback pointer
// of the current MPI::Op in a global variable somewhere, and when
// ompi_mpi_cxx_op_intercept() is invoked, simply using that global
// variable. This is unfortunately not thread safe.
//
// So what we do is as follows:
//
// 1. The C++ dispatch function ompi_mpi_cxx_op_intercept() is *not*
// of type (MPI_User_function*). More specifically, it takes an
// additional argument: a function pointer. Its signature is (void*,
// void*, int*, MPI_Datatype*, MPI_Op*, MPI::User_function*). This
// last argument is the function pointer of the user callback function
// to be invoked.
//
// The careful reader will notice that it is impossible for the C Op
// dispatch code in libmpi to call this function properly because the
// last argument is of a type that is not defined in libmpi (i.e.,
// it's only in libmpi_cxx). Keep reading -- this is explained below.
//
// 2. When the MPI::Op is created (in MPI::Op::Init()), we call the
// back-end C MPI_Op_create() function as normal (just like the F77
// bindings, in fact), and pass it the ompi_mpi_cxx_op_intercept()
// function (casting it to (MPI_User_function*) -- it's a function
// pointer, so its size is guaranteed to be the same, even if the
// signature of the real function is different).
//
// 3. The function pointer to ompi_mpi_cxx_op_intercept() will be
// cached in the MPI_Op in op->o_func[0].cxx_intercept_fn.
//
// Recall that MPI_Op is implemented to have an array of function
// pointers so that optimized versions of reduction operations can be
// invoked based on the corresponding datatype. But when an MPI_Op
// represents a user-defined function operation, there is only one
// function, so it is always stored in function pointer array index 0.
//
// 4. When MPI_Op_create() returns, the C++ MPI::Op::Init function
// manually sets OMPI_OP_FLAGS_CXX_FUNC flag on the resulting MPI_Op
// (again, very similar to the F77 MPI_OP_CREATE wrapper). It also
// caches the user's C++ callback function in op->o_func[1].c_fn
// (recall that the array of function pointers is actually a union of
// multiple different function pointer types -- it doesn't matter
// which type the user's callback function pointer is stored in; since
// all the types in the union are function pointers, it's guaranteed
// to be large enough to hold what we need).
//
// Note that we don't have a member of the union for the C++ callback
// function because its signature includes a (MPI::Datatype&), which
// we can't put in the C library libmpi.
//
// 5. When the user invokes an function that uses the MPI::Op (or,
// more specifically, when the Op dispatch engine in ompi/op/op.c [in
// libmpi] tries to dispatch off to it), it will see the
// OMPI_OP_FLAGS_CXX_FUNC flag and know to use the
// op->o_func[0].cxx_intercept_fn and also pass as the 4th argument,
// op->o_func[1].c_fn.
//
// 6. ompi_mpi_cxx_op_intercept() is therefore invoked and receives
// both the (MPI_Datatype*) (which is easy to convert to
// (MPI::Datatype&)) and a pointer to the user's C++ callback function
// (albeit cast as the wrong type). So it casts the callback function
// pointer to (MPI::User_function*) and invokes it.
//
// Wasn't that simple?
//
extern "C" void
ompi_mpi_cxx_op_intercept(void *invec, void *outvec, int *len,
MPI_Datatype *datatype, MPI_User_function *c_fn)
{
MPI::Datatype cxx_datatype = *datatype;
MPI::User_function *cxx_callback = (MPI::User_function*) c_fn;
cxx_callback(invec, outvec, *len, cxx_datatype);
}
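// To make the six steps above concrete, here is a hedged, hypothetical
// sketch of the user-side code whose callback ends up dispatched through
// ompi_mpi_cxx_op_intercept() (not part of this file; x and y are ints):
//
//   void my_sum(const void *in, void *inout, int len,
//               const MPI::Datatype &dt)       // an MPI::User_function
//   {
//       const int *a = (const int *) in;
//       int *b = (int *) inout;
//       for (int i = 0; i < len; ++i) b[i] += a[i];
//   }
//
//   MPI::Op op;
//   op.Init(my_sum, true);                           // steps 2-4 happen here
//   MPI::COMM_WORLD.Allreduce(&x, &y, 1, MPI::INT, op);  // steps 5-6 fire
//   op.Free();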
//
// Attribute copy functions -- comm, type, and win
//
extern "C" int
ompi_mpi_cxx_comm_copy_attr_intercept(MPI_Comm comm, int keyval,
void *extra_state,
void *attribute_val_in,
void *attribute_val_out, int *flag,
MPI_Comm newcomm)
{
int ret = 0;
MPI::Comm::keyval_intercept_data_t *kid =
(MPI::Comm::keyval_intercept_data_t*) extra_state;
// The callback may be in C or C++. If it's in C, it's easy - just
// call it with no extra C++ machinery.
if (NULL != kid->c_copy_fn) {
return kid->c_copy_fn(comm, keyval, kid->extra_state, attribute_val_in,
attribute_val_out, flag);
}
// If the callback was C++, we have to do a little more work
MPI::Intracomm intracomm;
MPI::Intercomm intercomm;
MPI::Graphcomm graphcomm;
MPI::Cartcomm cartcomm;
bool bflag = OPAL_INT_TO_BOOL(*flag);
if (NULL != kid->cxx_copy_fn) {
ompi_cxx_communicator_type_t comm_type =
ompi_cxx_comm_get_type (comm);
switch (comm_type) {
case OMPI_CXX_COMM_TYPE_GRAPH:
graphcomm = MPI::Graphcomm(comm);
ret = kid->cxx_copy_fn(graphcomm, keyval, kid->extra_state,
attribute_val_in, attribute_val_out,
bflag);
break;
case OMPI_CXX_COMM_TYPE_CART:
cartcomm = MPI::Cartcomm(comm);
ret = kid->cxx_copy_fn(cartcomm, keyval, kid->extra_state,
attribute_val_in, attribute_val_out,
bflag);
break;
case OMPI_CXX_COMM_TYPE_INTRACOMM:
intracomm = MPI::Intracomm(comm);
ret = kid->cxx_copy_fn(intracomm, keyval, kid->extra_state,
attribute_val_in, attribute_val_out,
bflag);
break;
case OMPI_CXX_COMM_TYPE_INTERCOMM:
intercomm = MPI::Intercomm(comm);
ret = kid->cxx_copy_fn(intercomm, keyval, kid->extra_state,
attribute_val_in, attribute_val_out,
bflag);
break;
default:
ret = MPI::ERR_COMM;
}
} else {
ret = MPI::ERR_OTHER;
}
*flag = (int)bflag;
return ret;
}
extern "C" int
ompi_mpi_cxx_comm_delete_attr_intercept(MPI_Comm comm, int keyval,
void *attribute_val, void *extra_state)
{
int ret = 0;
MPI::Comm::keyval_intercept_data_t *kid =
(MPI::Comm::keyval_intercept_data_t*) extra_state;
// The callback may be in C or C++. If it's in C, it's easy - just
// call it with no extra C++ machinery.
if (NULL != kid->c_delete_fn) {
return kid->c_delete_fn(comm, keyval, attribute_val, kid->extra_state);
}
// If the callback was C++, we have to do a little more work
MPI::Intracomm intracomm;
MPI::Intercomm intercomm;
MPI::Graphcomm graphcomm;
MPI::Cartcomm cartcomm;
if (NULL != kid->cxx_delete_fn) {
ompi_cxx_communicator_type_t comm_type =
ompi_cxx_comm_get_type (comm);
switch (comm_type) {
case OMPI_CXX_COMM_TYPE_GRAPH:
graphcomm = MPI::Graphcomm(comm);
ret = kid->cxx_delete_fn(graphcomm, keyval, attribute_val,
kid->extra_state);
break;
case OMPI_CXX_COMM_TYPE_CART:
cartcomm = MPI::Cartcomm(comm);
ret = kid->cxx_delete_fn(cartcomm, keyval, attribute_val,
kid->extra_state);
break;
case OMPI_CXX_COMM_TYPE_INTRACOMM:
intracomm = MPI::Intracomm(comm);
ret = kid->cxx_delete_fn(intracomm, keyval, attribute_val,
kid->extra_state);
break;
case OMPI_CXX_COMM_TYPE_INTERCOMM:
intercomm = MPI::Intercomm(comm);
ret = kid->cxx_delete_fn(intercomm, keyval, attribute_val,
kid->extra_state);
break;
default:
ret = MPI::ERR_COMM;
}
} else {
ret = MPI::ERR_OTHER;
}
return ret;
}
extern "C" int
ompi_mpi_cxx_type_copy_attr_intercept(MPI_Datatype oldtype, int keyval,
void *extra_state, void *attribute_val_in,
void *attribute_val_out, int *flag)
{
int ret = 0;
MPI::Datatype::keyval_intercept_data_t *kid =
(MPI::Datatype::keyval_intercept_data_t*) extra_state;
if (NULL != kid->c_copy_fn) {
// The callback may be in C or C++. If it's in C, it's easy - just
// call it with no extra C++ machinery.
ret = kid->c_copy_fn(oldtype, keyval, kid->extra_state, attribute_val_in,
attribute_val_out, flag);
} else if (NULL != kid->cxx_copy_fn) {
// If the callback was C++, we have to do a little more work
bool bflag = OPAL_INT_TO_BOOL(*flag);
MPI::Datatype cxx_datatype(oldtype);
ret = kid->cxx_copy_fn(cxx_datatype, keyval, kid->extra_state,
attribute_val_in, attribute_val_out, bflag);
*flag = (int)bflag;
} else {
ret = MPI::ERR_TYPE;
}
return ret;
}
extern "C" int
ompi_mpi_cxx_type_delete_attr_intercept(MPI_Datatype type, int keyval,
void *attribute_val, void *extra_state)
{
int ret = 0;
MPI::Datatype::keyval_intercept_data_t *kid =
(MPI::Datatype::keyval_intercept_data_t*) extra_state;
if (NULL != kid->c_delete_fn) {
return kid->c_delete_fn(type, keyval, attribute_val, kid->extra_state);
} else if (NULL != kid->cxx_delete_fn) {
MPI::Datatype cxx_datatype(type);
return kid->cxx_delete_fn(cxx_datatype, keyval, attribute_val,
kid->extra_state);
} else {
ret = MPI::ERR_TYPE;
}
return ret;
}
extern "C" int
ompi_mpi_cxx_win_copy_attr_intercept(MPI_Win oldwin, int keyval,
void *extra_state, void *attribute_val_in,
void *attribute_val_out, int *flag)
{
int ret = 0;
MPI::Win::keyval_intercept_data_t *kid =
(MPI::Win::keyval_intercept_data_t*) extra_state;
if (NULL != kid->c_copy_fn) {
// The callback may be in C or C++. If it's in C, it's easy - just
// call it with no extra C++ machinery.
ret = kid->c_copy_fn(oldwin, keyval, kid->extra_state, attribute_val_in,
attribute_val_out, flag);
} else if (NULL != kid->cxx_copy_fn) {
// If the callback was C++, we have to do a little more work
bool bflag = OPAL_INT_TO_BOOL(*flag);
MPI::Win cxx_win(oldwin);
ret = kid->cxx_copy_fn(cxx_win, keyval, kid->extra_state,
attribute_val_in, attribute_val_out, bflag);
*flag = (int)bflag;
} else {
ret = MPI::ERR_WIN;
}
return ret;
}
extern "C" int
ompi_mpi_cxx_win_delete_attr_intercept(MPI_Win win, int keyval,
void *attribute_val, void *extra_state)
{
int ret = 0;
MPI::Win::keyval_intercept_data_t *kid =
(MPI::Win::keyval_intercept_data_t*) extra_state;
if (NULL != kid->c_delete_fn) {
return kid->c_delete_fn(win, keyval, attribute_val, kid->extra_state);
} else if (NULL != kid->cxx_delete_fn) {
MPI::Win cxx_win(win);
return kid->cxx_delete_fn(cxx_win, keyval, attribute_val,
kid->extra_state);
} else {
ret = MPI::ERR_WIN;
}
return ret;
}
// For similar reasons as above, we need to intercept calls for the 3
// generalized request callbacks (convert arguments to C++ types and
// invoke the C++ callback signature).
extern "C" int
ompi_mpi_cxx_grequest_query_fn_intercept(void *state, MPI_Status *status)
{
MPI::Grequest::Intercept_data_t *data =
(MPI::Grequest::Intercept_data_t *) state;
MPI::Status s(*status);
int ret = data->id_cxx_query_fn(data->id_extra, s);
*status = s;
return ret;
}
extern "C" int
ompi_mpi_cxx_grequest_free_fn_intercept(void *state)
{
MPI::Grequest::Intercept_data_t *data =
(MPI::Grequest::Intercept_data_t *) state;
int ret = data->id_cxx_free_fn(data->id_extra);
// Delete the struct that was "new"ed in MPI::Grequest::Start()
delete data;
return ret;
}
extern "C" int
ompi_mpi_cxx_grequest_cancel_fn_intercept(void *state, int cancelled)
{
MPI::Grequest::Intercept_data_t *data =
(MPI::Grequest::Intercept_data_t *) state;
return data->id_cxx_cancel_fn(data->id_extra,
(0 != cancelled ? true : false));
}
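
These three intercepts exist solely to re-type the C callback arguments into their C++ signatures (MPI_Status* to Status&, int to bool). A hedged sketch of hypothetical user callbacks matching the signatures invoked above:

    // Hypothetical C++ generalized-request callbacks carried in
    // MPI::Grequest::Intercept_data_t and invoked by the intercepts above.
    int my_query(void *extra, MPI::Status &status)
    {
        status.Set_elements(MPI::BYTE, 0);     // report an empty payload
        return MPI_SUCCESS;
    }
    int my_free(void *extra)              { return MPI_SUCCESS; }
    int my_cancel(void *extra, bool done) { return MPI_SUCCESS; }

    // usage (hypothetical):
    //   MPI::Grequest r = MPI::Grequest::Start(my_query, my_free,
    //                                          my_cancel, state);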

View file

@@ -1,87 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006 Cisco Systems, Inc. All rights reserved.
// Copyright (c) 2011 FUJITSU LIMITED. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
class Intercomm : public Comm {
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// friend class PMPI::Intercomm;
#endif
public:
// construction
Intercomm() : Comm(MPI_COMM_NULL) { }
// copy
Intercomm(const Comm_Null& data) : Comm(data) { }
// inter-language operability
Intercomm(MPI_Comm data) : Comm(data) { }
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// copy
Intercomm(const Intercomm& data) : Comm(data), pmpi_comm(data.pmpi_comm) { }
Intercomm(const PMPI::Intercomm& d) :
Comm((const PMPI::Comm&)d), pmpi_comm(d) { }
// assignment
Intercomm& operator=(const Intercomm& data) {
Comm::operator=(data);
pmpi_comm = data.pmpi_comm; return *this; }
Intercomm& operator=(const Comm_Null& data) {
Comm::operator=(data);
Intercomm& ic = (Intercomm&)data;
pmpi_comm = ic.pmpi_comm; return *this; }
// inter-language operability
Intercomm& operator=(const MPI_Comm& data) {
Comm::operator=(data);
pmpi_comm = PMPI::Intercomm(data); return *this; }
#else
// copy
Intercomm(const Intercomm& data) : Comm(data.mpi_comm) { }
// assignment
Intercomm& operator=(const Intercomm& data) {
mpi_comm = data.mpi_comm; return *this; }
Intercomm& operator=(const Comm_Null& data) {
mpi_comm = data; return *this; }
// inter-language operability
Intercomm& operator=(const MPI_Comm& data) {
mpi_comm = data; return *this; }
#endif
//
// Groups, Contexts, and Communicators
//
Intercomm Dup() const;
virtual Intercomm& Clone() const;
virtual int Get_remote_size() const;
virtual Group Get_remote_group() const;
virtual Intracomm Merge(bool high) const;
virtual Intercomm Create(const Group& group) const;
virtual Intercomm Split(int color, int key) const;
};

View file

@@ -1,81 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2011 FUJITSU LIMITED. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
inline MPI::Intercomm
MPI::Intercomm::Dup() const
{
MPI_Comm newcomm;
(void)MPI_Comm_dup(mpi_comm, &newcomm);
return newcomm;
}
inline MPI::Intercomm&
MPI::Intercomm::Clone() const
{
MPI_Comm newcomm;
(void)MPI_Comm_dup(mpi_comm, &newcomm);
MPI::Intercomm* dup = new MPI::Intercomm(newcomm);
return *dup;
}
inline int
MPI::Intercomm::Get_remote_size() const
{
int size;
(void)MPI_Comm_remote_size(mpi_comm, &size);
return size;
}
inline MPI::Group
MPI::Intercomm::Get_remote_group() const
{
MPI_Group group;
(void)MPI_Comm_remote_group(mpi_comm, &group);
return group;
}
inline MPI::Intracomm
MPI::Intercomm::Merge(bool high) const
{
MPI_Comm newcomm;
(void)MPI_Intercomm_merge(mpi_comm, (int)high, &newcomm);
return newcomm;
}
//
// Extended Collective Operations
//
inline MPI::Intercomm
MPI::Intercomm::Create(const Group& group) const
{
MPI_Comm newcomm;
(void) MPI_Comm_create(mpi_comm, (MPI_Group) group, &newcomm);
return newcomm;
}
inline MPI::Intercomm
MPI::Intercomm::Split(int color, int key) const
{
MPI_Comm newcomm;
(void) MPI_Comm_split(mpi_comm, color, key, &newcomm);
return newcomm;
}
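
Note the asymmetry above: Dup() returns by value, while the virtual, covariant Clone() returns a reference to an object obtained with new; per MPI-2, the caller is responsible for deleting it. A hedged usage sketch (parent is a hypothetical connected MPI::Intercomm):

    // Hedged sketch of the ownership difference between Dup() and Clone().
    MPI::Intercomm dup = parent.Dup();      // value copy; release via dup.Free()
    MPI::Intercomm &clone = parent.Clone(); // heap object from "new" above
    // ... use clone ...
    clone.Free();
    delete &clone;                          // caller owns the Clone() result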

View file

@@ -1,166 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006 Cisco Systems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
class Intracomm : public Comm {
public:
// construction
Intracomm() { }
// copy
Intracomm(const Comm_Null& data) : Comm(data) { }
// inter-language operability
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
//NOTE: it is extremely important that Comm(data) happens below
// because there is not only a pmpi_comm in this Intracomm but
// there is also a pmpi_comm in the inherited Comm part. Both
// of these pmpi_comm's need to be initialized with the same
// MPI_Comm object. Also the assignment operators must take this
// into account.
Intracomm(const Intracomm& data) : Comm(data), pmpi_comm(data) { }
Intracomm(MPI_Comm data) : Comm(data), pmpi_comm(data) { }
Intracomm(const PMPI::Intracomm& data)
: Comm((const PMPI::Comm&)data), pmpi_comm(data) { }
// assignment
Intracomm& operator=(const Intracomm& data) {
Comm::operator=(data);
pmpi_comm = data.pmpi_comm;
return *this;
}
Intracomm& operator=(const Comm_Null& data) {
Comm::operator=(data);
pmpi_comm = (PMPI::Intracomm)data; return *this;
}
// inter-language operability
Intracomm& operator=(const MPI_Comm& data) {
Comm::operator=(data);
pmpi_comm = data;
return *this;
}
#else
Intracomm(const Intracomm& data) : Comm(data.mpi_comm) { }
inline Intracomm(MPI_Comm data);
// assignment
Intracomm& operator=(const Intracomm& data) {
mpi_comm = data.mpi_comm; return *this;
}
Intracomm& operator=(const Comm_Null& data) {
mpi_comm = data; return *this;
}
// inter-language operability
Intracomm& operator=(const MPI_Comm& data) {
mpi_comm = data; return *this; }
#endif
//
// Collective Communication
//
// All the rest are up in comm.h -- Scan and Exscan are not defined
// in intercomms, so they're down here in Intracomm.
//
virtual void
Scan(const void *sendbuf, void *recvbuf, int count,
const Datatype & datatype, const Op & op) const;
virtual void
Exscan(const void *sendbuf, void *recvbuf, int count,
const Datatype & datatype, const Op & op) const;
//
// Communicator maintenance
//
Intracomm Dup() const;
virtual Intracomm& Clone() const;
virtual Intracomm
Create(const Group& group) const;
virtual Intracomm
Split(int color, int key) const;
virtual Intercomm
Create_intercomm(int local_leader, const Comm& peer_comm,
int remote_leader, int tag) const;
virtual Cartcomm
Create_cart(int ndims, const int dims[],
const bool periods[], bool reorder) const;
virtual Graphcomm
Create_graph(int nnodes, const int index[],
const int edges[], bool reorder) const;
//
// Process Creation and Management
//
virtual Intercomm Accept(const char* port_name, const Info& info, int root)
const;
virtual Intercomm Connect(const char* port_name, const Info& info, int root)
const;
virtual Intercomm Spawn(const char* command, const char* argv[],
int maxprocs, const Info& info, int root) const;
virtual Intercomm Spawn(const char* command, const char* argv[],
int maxprocs, const Info& info,
int root, int array_of_errcodes[]) const;
virtual Intercomm Spawn_multiple(int count, const char* array_of_commands[],
const char** array_of_argv[],
const int array_of_maxprocs[],
const Info array_of_info[], int root);
virtual Intercomm Spawn_multiple(int count, const char* array_of_commands[],
const char** array_of_argv[],
const int array_of_maxprocs[],
const Info array_of_info[], int root,
int array_of_errcodes[]);
//#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// virtual const PMPI::Comm& get_pmpi_comm() const { return pmpi_comm; }
//#endif
protected:
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
PMPI::Intracomm pmpi_comm;
#endif
// Convert an array of p_nbr Info object into an array of MPI_Info.
// A pointer to the allocated array is returned and must be
// eventually deleted.
static inline MPI_Info *convert_info_to_mpi_info(int p_nbr,
const Info p_info_tbl[]);
};

View file

@@ -1,240 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006 Cisco Systems, Inc. All rights reserved.
// Copyright (c) 2007 Sun Microsystems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
inline
MPI::Intracomm::Intracomm(MPI_Comm data) {
int flag = 0;
if (MPI::Is_initialized() && (data != MPI_COMM_NULL)) {
(void)MPI_Comm_test_inter(data, &flag);
if (flag) {
mpi_comm = MPI_COMM_NULL;
} else {
mpi_comm = data;
}
}
else {
mpi_comm = data;
}
}
inline void
MPI::Intracomm::Scan(const void *sendbuf, void *recvbuf, int count,
const MPI::Datatype & datatype, const MPI::Op& op) const
{
(void)MPI_Scan(const_cast<void *>(sendbuf), recvbuf, count, datatype, op, mpi_comm);
}
inline void
MPI::Intracomm::Exscan(const void *sendbuf, void *recvbuf, int count,
const MPI::Datatype & datatype,
const MPI::Op& op) const
{
(void)MPI_Exscan(const_cast<void *>(sendbuf), recvbuf, count, datatype, op, mpi_comm);
}
inline MPI::Intracomm
MPI::Intracomm::Dup() const
{
MPI_Comm newcomm;
(void)MPI_Comm_dup(mpi_comm, &newcomm);
return newcomm;
}
inline MPI::Intracomm&
MPI::Intracomm::Clone() const
{
MPI_Comm newcomm;
(void)MPI_Comm_dup(mpi_comm, &newcomm);
MPI::Intracomm* dup = new MPI::Intracomm(newcomm);
return *dup;
}
inline MPI::Intracomm
MPI::Intracomm::Create(const MPI::Group& group) const
{
MPI_Comm newcomm;
(void)MPI_Comm_create(mpi_comm, group, &newcomm);
return newcomm;
}
inline MPI::Intracomm
MPI::Intracomm::Split(int color, int key) const
{
MPI_Comm newcomm;
(void)MPI_Comm_split(mpi_comm, color, key, &newcomm);
return newcomm;
}
inline MPI::Intercomm
MPI::Intracomm::Create_intercomm(int local_leader,
const MPI::Comm& peer_comm,
int remote_leader, int tag) const
{
MPI_Comm newintercomm;
(void)MPI_Intercomm_create(mpi_comm, local_leader, peer_comm,
remote_leader, tag, &newintercomm);
return newintercomm;
}
inline MPI::Cartcomm
MPI::Intracomm::Create_cart(int ndims, const int dims[],
const bool periods[], bool reorder) const
{
int *int_periods = new int [ndims];
for (int i=0; i<ndims; i++)
int_periods[i] = (int) periods[i];
MPI_Comm newcomm;
(void)MPI_Cart_create(mpi_comm, ndims, const_cast<int *>(dims),
int_periods, (int)reorder, &newcomm);
delete [] int_periods;
return newcomm;
}
inline MPI::Graphcomm
MPI::Intracomm::Create_graph(int nnodes, const int index[],
const int edges[], bool reorder) const
{
MPI_Comm newcomm;
(void)MPI_Graph_create(mpi_comm, nnodes, const_cast<int *>(index),
const_cast<int *>(edges), (int)reorder, &newcomm);
return newcomm;
}
//
// Process Creation and Management
//
inline MPI::Intercomm
MPI::Intracomm::Accept(const char* port_name,
const MPI::Info& info,
int root) const
{
MPI_Comm newcomm;
(void) MPI_Comm_accept(const_cast<char *>(port_name), info, root, mpi_comm,
&newcomm);
return newcomm;
}
inline MPI::Intercomm
MPI::Intracomm::Connect(const char* port_name,
const MPI::Info& info,
int root) const
{
MPI_Comm newcomm;
(void) MPI_Comm_connect(const_cast<char *>(port_name), info, root, mpi_comm,
&newcomm);
return newcomm;
}
inline MPI::Intercomm
MPI::Intracomm::Spawn(const char* command, const char* argv[],
int maxprocs, const MPI::Info& info,
int root) const
{
MPI_Comm newcomm;
(void) MPI_Comm_spawn(const_cast<char *>(command), const_cast<char **>(argv), maxprocs,
info, root, mpi_comm, &newcomm,
(int *)MPI_ERRCODES_IGNORE);
return newcomm;
}
inline MPI::Intercomm
MPI::Intracomm::Spawn(const char* command, const char* argv[],
int maxprocs, const MPI::Info& info,
int root, int array_of_errcodes[]) const
{
MPI_Comm newcomm;
(void) MPI_Comm_spawn(const_cast<char *>(command), const_cast<char **>(argv), maxprocs,
info, root, mpi_comm, &newcomm,
array_of_errcodes);
return newcomm;
}
inline MPI::Intercomm
MPI::Intracomm::Spawn_multiple(int count,
const char* array_of_commands[],
const char** array_of_argv[],
const int array_of_maxprocs[],
const Info array_of_info[], int root)
{
MPI_Comm newcomm;
MPI_Info *const array_of_mpi_info =
convert_info_to_mpi_info(count, array_of_info);
MPI_Comm_spawn_multiple(count, const_cast<char **>(array_of_commands),
const_cast<char ***>(array_of_argv),
const_cast<int *>(array_of_maxprocs),
array_of_mpi_info, root,
mpi_comm, &newcomm, (int *)MPI_ERRCODES_IGNORE);
delete[] array_of_mpi_info;
return newcomm;
}
inline MPI_Info *
MPI::Intracomm::convert_info_to_mpi_info(int p_nbr, const Info p_info_tbl[])
{
MPI_Info *const mpi_info_tbl = new MPI_Info [p_nbr];
for (int i_tbl=0; i_tbl < p_nbr; i_tbl++) {
mpi_info_tbl[i_tbl] = p_info_tbl[i_tbl];
}
return mpi_info_tbl;
}
inline MPI::Intercomm
MPI::Intracomm::Spawn_multiple(int count,
const char* array_of_commands[],
const char** array_of_argv[],
const int array_of_maxprocs[],
const Info array_of_info[], int root,
int array_of_errcodes[])
{
MPI_Comm newcomm;
MPI_Info *const array_of_mpi_info =
convert_info_to_mpi_info(count, array_of_info);
MPI_Comm_spawn_multiple(count, const_cast<char **>(array_of_commands),
const_cast<char ***>(array_of_argv),
const_cast<int *>(array_of_maxprocs),
array_of_mpi_info, root,
mpi_comm, &newcomm, array_of_errcodes);
delete[] array_of_mpi_info;
return newcomm;
}
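
The Spawn() family above, including the Info-array conversion helper, collapses to one C call per variant. A hedged sketch of the C replacement ("worker" and the counts are hypothetical):

    /* Hedged sketch: the C equivalent of the removed Spawn() wrapper. */
    #include <mpi.h>

    MPI_Comm spawn_workers(int nprocs)
    {
        MPI_Comm children;

        /* was: MPI::COMM_WORLD.Spawn("worker", MPI::ARGV_NULL, nprocs,
                                      MPI::INFO_NULL, 0) */
        MPI_Comm_spawn("worker", MPI_ARGV_NULL, nprocs, MPI_INFO_NULL,
                       0 /* root */, MPI_COMM_WORLD, &children,
                       MPI_ERRCODES_IGNORE);
        return children;
    }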

View file

@@ -1,172 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2007-2012 Cisco Systems, Inc. All rights reserved.
// Copyright (c) 2007 Sun Microsystems, Inc. All rights reserved.
// Copyright (c) 2011 FUJITSU LIMITED. All rights reserved.
// Copyright (c) 2017 Research Organization for Information Science
// and Technology (RIST). All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
#include "mpicxx.h"
/* Need to include ompi_config.h after mpicxx.h so that we get
SEEK_SET and friends right */
#include "ompi_config.h"
#include "cxx_glue.h"
#if OPAL_CXX_USE_PRAGMA_IDENT
#pragma ident OMPI_IDENT_STRING
#elif OPAL_CXX_USE_IDENT
#ident OMPI_IDENT_STRING
#endif
namespace MPI {
const char ompi_libcxx_version_string[] = OMPI_IDENT_STRING;
}
namespace MPI {
#if ! OMPI_HAVE_CXX_EXCEPTION_SUPPORT
int mpi_errno = MPI_SUCCESS;
#endif
void* const BOTTOM = (void*) MPI_BOTTOM;
void* const IN_PLACE = (void*) MPI_IN_PLACE;
// error-handling specifiers
const Errhandler ERRORS_ARE_FATAL((MPI_Errhandler)&(ompi_mpi_errors_are_fatal));
const Errhandler ERRORS_RETURN((MPI_Errhandler)&(ompi_mpi_errors_return));
const Errhandler ERRORS_THROW_EXCEPTIONS((MPI_Errhandler)&(ompi_mpi_errors_throw_exceptions));
// elementary datatypes
const Datatype CHAR(MPI_CHAR);
const Datatype SHORT(MPI_SHORT);
const Datatype INT(MPI_INT);
const Datatype LONG(MPI_LONG);
const Datatype SIGNED_CHAR(MPI_SIGNED_CHAR);
const Datatype UNSIGNED_CHAR(MPI_UNSIGNED_CHAR);
const Datatype UNSIGNED_SHORT(MPI_UNSIGNED_SHORT);
const Datatype UNSIGNED(MPI_UNSIGNED);
const Datatype UNSIGNED_LONG(MPI_UNSIGNED_LONG);
const Datatype FLOAT(MPI_FLOAT);
const Datatype DOUBLE(MPI_DOUBLE);
const Datatype LONG_DOUBLE(MPI_LONG_DOUBLE);
const Datatype BYTE(MPI_BYTE);
const Datatype PACKED(MPI_PACKED);
const Datatype WCHAR(MPI_WCHAR);
// datatypes for reductions functions (C / C++)
const Datatype FLOAT_INT(MPI_FLOAT_INT);
const Datatype DOUBLE_INT(MPI_DOUBLE_INT);
const Datatype LONG_INT(MPI_LONG_INT);
const Datatype TWOINT(MPI_2INT);
const Datatype SHORT_INT(MPI_SHORT_INT);
const Datatype LONG_DOUBLE_INT(MPI_LONG_DOUBLE_INT);
#if OMPI_BUILD_FORTRAN_BINDINGS
// elementary datatype (Fortran)
const Datatype REAL((MPI_Datatype)&(ompi_mpi_real));
const Datatype INTEGER((MPI_Datatype)&(ompi_mpi_integer));
const Datatype DOUBLE_PRECISION((MPI_Datatype)&(ompi_mpi_dblprec));
const Datatype F_COMPLEX((MPI_Datatype)&(ompi_mpi_cplex));
const Datatype LOGICAL((MPI_Datatype)&(ompi_mpi_logical));
const Datatype CHARACTER((MPI_Datatype)&(ompi_mpi_character));
// datatype for reduction functions (Fortran)
const Datatype TWOREAL((MPI_Datatype)&(ompi_mpi_2real));
const Datatype TWODOUBLE_PRECISION((MPI_Datatype)&(ompi_mpi_2dblprec));
const Datatype TWOINTEGER((MPI_Datatype)&(ompi_mpi_2integer));
// optional datatypes (Fortran)
const Datatype INTEGER2((MPI_Datatype)&(ompi_mpi_integer));
const Datatype REAL2((MPI_Datatype)&(ompi_mpi_real));
const Datatype INTEGER1((MPI_Datatype)&(ompi_mpi_char));
const Datatype INTEGER4((MPI_Datatype)&(ompi_mpi_short));
const Datatype REAL4((MPI_Datatype)&(ompi_mpi_real));
const Datatype REAL8((MPI_Datatype)&(ompi_mpi_double));
#endif // OMPI_BUILD_FORTRAN_BINDINGS
// optional datatype (C / C++)
const Datatype UNSIGNED_LONG_LONG(MPI_UNSIGNED_LONG_LONG);
const Datatype LONG_LONG(MPI_LONG_LONG);
const Datatype LONG_LONG_INT(MPI_LONG_LONG_INT);
// c++ types
const Datatype BOOL((MPI_Datatype)&(ompi_mpi_cxx_bool));
const Datatype COMPLEX((MPI_Datatype)&(ompi_mpi_cxx_cplex));
const Datatype DOUBLE_COMPLEX((MPI_Datatype)&(ompi_mpi_cxx_dblcplex));
const Datatype F_DOUBLE_COMPLEX((MPI_Datatype)&(ompi_mpi_cxx_dblcplex));
const Datatype LONG_DOUBLE_COMPLEX((MPI_Datatype)&(ompi_mpi_cxx_ldblcplex));
// reserved communicators
Intracomm COMM_WORLD(MPI_COMM_WORLD);
Intracomm COMM_SELF(MPI_COMM_SELF);
// Reported by Paul Hargrove: MIN and MAX are defined on OpenBSD, so
// we need to #undef them. See
// http://www.open-mpi.org/community/lists/devel/2013/12/13521.php.
#ifdef MAX
#undef MAX
#endif
#ifdef MIN
#undef MIN
#endif
// collective operations
const Op MAX(MPI_MAX);
const Op MIN(MPI_MIN);
const Op SUM(MPI_SUM);
const Op PROD(MPI_PROD);
const Op MAXLOC(MPI_MAXLOC);
const Op MINLOC(MPI_MINLOC);
const Op BAND(MPI_BAND);
const Op BOR(MPI_BOR);
const Op BXOR(MPI_BXOR);
const Op LAND(MPI_LAND);
const Op LOR(MPI_LOR);
const Op LXOR(MPI_LXOR);
const Op REPLACE(MPI_REPLACE);
// null handles
const Group GROUP_NULL = MPI_GROUP_NULL;
const Win WIN_NULL = MPI_WIN_NULL;
const Info INFO_NULL = MPI_INFO_NULL;
//const Comm COMM_NULL = MPI_COMM_NULL;
//const MPI_Comm COMM_NULL = MPI_COMM_NULL;
Comm_Null COMM_NULL;
const Datatype DATATYPE_NULL = MPI_DATATYPE_NULL;
Request REQUEST_NULL = MPI_REQUEST_NULL;
const Op OP_NULL = MPI_OP_NULL;
const Errhandler ERRHANDLER_NULL;
const File FILE_NULL = MPI_FILE_NULL;
// constants specifying empty or ignored input
const char** ARGV_NULL = (const char**) MPI_ARGV_NULL;
const char*** ARGVS_NULL = (const char***) MPI_ARGVS_NULL;
// empty group
const Group GROUP_EMPTY(MPI_GROUP_EMPTY);
#if OMPI_ENABLE_MPI1_COMPAT
// special datatypes for construction of derived datatypes
const Datatype UB(MPI_UB);
const Datatype LB(MPI_LB);
#endif
} /* namespace MPI */

View file

@@ -1,286 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006-2008 Cisco Systems, Inc. All rights reserved.
// Copyright (c) 2008 Sun Microsystems, Inc. All rights reserved.
// Copyright (c) 2011 FUJITSU LIMITED. All rights reserved.
// Copyright (c) 2016 Los Alamos National Security, LLC. All rights
// reserved.
// Copyright (c) 2017 Research Organization for Information Science
// and Technology (RIST). All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
#ifndef MPIPP_H
#define MPIPP_H
//
// Let's ensure that we're really in C++, and some errant programmer
// hasn't included <mpicxx.h> just "for completeness"
//
// We do not include the opal_config.h and may not replace extern "C" {
#if defined(c_plusplus) || defined(__cplusplus)
// do not include ompi_config.h. it will smash free() as a symbol
#include "mpi.h"
// we include all this here so that we escape the silly namespacing issues
#include <map>
#include <utility>
#include <stdarg.h>
#if !defined(OMPI_IGNORE_CXX_SEEK) && OMPI_WANT_MPI_CXX_SEEK
// We need to include the header files that define SEEK_* or use them
// in ways that require them to be #defines so that if the user
// includes them later, the double inclusion logic in the headers will
// prevent trouble from occurring.
// include so that we can smash SEEK_* properly
#include <stdio.h>
// include because on Linux, there is one place that assumes SEEK_* is
// a #define (it's used in an enum).
#include <iostream>
static const int ompi_stdio_seek_set = SEEK_SET;
static const int ompi_stdio_seek_cur = SEEK_CUR;
static const int ompi_stdio_seek_end = SEEK_END;
// smash SEEK_* #defines
#ifdef SEEK_SET
#undef SEEK_SET
#undef SEEK_CUR
#undef SEEK_END
#endif
// make globally scoped constants to replace smashed #defines
static const int SEEK_SET = ompi_stdio_seek_set;
static const int SEEK_CUR = ompi_stdio_seek_cur;
static const int SEEK_END = ompi_stdio_seek_end;
#endif
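// To illustrate the clash handled above (hypothetical user code, not part
// of this header): with <stdio.h>'s "#define SEEK_SET 0" still in effect,
// the MPI-2 constant MPI::SEEK_SET could not even be spelled:
//
//   #include <stdio.h>
//   #include <mpi.h>
//   MPI::Offset whence = MPI::SEEK_SET;  // preprocessor yields "MPI::0"
//
// Replacing the macros with like-named int constants keeps both the stdio
// and the MPI spellings valid.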
#ifdef OPAL_HAVE_SYS_SYNCH_H
// Solaris threads.h pulls in sys/synch.h which in certain versions
// defines LOCK_SHARED.
// include so that we can smash LOCK_SHARED
#include <sys/synch.h>
// a user app may be compiled on a system with an older version
// of sys/synch.h
#ifdef LOCK_SHARED
static const int ompi_synch_lock_shared = LOCK_SHARED;
// smash LOCK_SHARED #defines
#undef LOCK_SHARED
// make globally scoped constants to replace smashed #defines
static const int LOCK_SHARED = ompi_synch_lock_shared;
#endif
#endif
// forward declare so that we can still do inlining
struct opal_mutex_t;
// See lengthy explanation in intercepts.cc about this function.
extern "C" void
ompi_mpi_cxx_op_intercept(void *invec, void *outvec, int *len,
MPI_Datatype *datatype, MPI_User_function *fn);
//used for attr intercept functions
enum CommType { eIntracomm, eIntercomm, eCartcomm, eGraphcomm};
extern "C" int
ompi_mpi_cxx_comm_copy_attr_intercept(MPI_Comm oldcomm, int keyval,
void *extra_state, void *attribute_val_in,
void *attribute_val_out, int *flag,
MPI_Comm newcomm);
extern "C" int
ompi_mpi_cxx_comm_delete_attr_intercept(MPI_Comm comm, int keyval,
void *attribute_val, void *extra_state);
extern "C" int
ompi_mpi_cxx_type_copy_attr_intercept(MPI_Datatype oldtype, int keyval,
void *extra_state, void *attribute_val_in,
void *attribute_val_out, int *flag);
extern "C" int
ompi_mpi_cxx_type_delete_attr_intercept(MPI_Datatype type, int keyval,
void *attribute_val, void *extra_state);
extern "C" int
ompi_mpi_cxx_win_copy_attr_intercept(MPI_Win oldwin, int keyval,
void *extra_state, void *attribute_val_in,
void *attribute_val_out, int *flag);
extern "C" int
ompi_mpi_cxx_win_delete_attr_intercept(MPI_Win win, int keyval,
void *attribute_val, void *extra_state);
//
// MPI generalized request intercepts
//
extern "C" int
ompi_mpi_cxx_grequest_query_fn_intercept(void *state, MPI_Status *status);
extern "C" int
ompi_mpi_cxx_grequest_free_fn_intercept(void *state);
extern "C" int
ompi_mpi_cxx_grequest_cancel_fn_intercept(void *state, int canceled);
/**
* Windows bool type is not any kind of integer. Special care should
* be taken in order to cast it correctly.
*/
#if defined(WIN32) || defined(_WIN32) || defined(WIN64)
#define OPAL_INT_TO_BOOL(VALUE) ((VALUE) != 0 ? true : false)
#else
#define OPAL_INT_TO_BOOL(VALUE) ((bool)(VALUE))
#endif /* defined(WIN32) || defined(_WIN32) || defined(WIN64) */
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
#include "ompi/mpi/cxx/pmpicxx.h"
#endif
namespace MPI {
#if ! OMPI_HAVE_CXX_EXCEPTION_SUPPORT
extern int mpi_errno;
#endif
class Comm_Null;
class Comm;
class Intracomm;
class Intercomm;
class Graphcomm;
class Cartcomm;
class Datatype;
class Errhandler;
class Group;
class Op;
class Request;
class Grequest;
class Status;
class Info;
class Win;
class File;
typedef MPI_Aint Aint;
typedef MPI_Fint Fint;
typedef MPI_Offset Offset;
#ifdef OMPI_BUILDING_CXX_BINDINGS_LIBRARY
#include "ompi/mpi/cxx/constants.h"
#include "ompi/mpi/cxx/functions.h"
#include "ompi/mpi/cxx/datatype.h"
#else
#include "openmpi/ompi/mpi/cxx/constants.h"
#include "openmpi/ompi/mpi/cxx/functions.h"
#include "openmpi/ompi/mpi/cxx/datatype.h"
#endif
typedef void User_function(const void* invec, void* inoutvec, int len,
const Datatype& datatype);
/* Prevent needing a -I${prefix}/include/openmpi, as it seems to
really annoy people who don't use the wrapper compilers and is
no longer worth the fight of getting right... */
#ifdef OMPI_BUILDING_CXX_BINDINGS_LIBRARY
#include "ompi/mpi/cxx/exception.h"
#include "ompi/mpi/cxx/op.h"
#include "ompi/mpi/cxx/status.h"
#include "ompi/mpi/cxx/request.h" //includes class Prequest
#include "ompi/mpi/cxx/group.h"
#include "ompi/mpi/cxx/comm.h"
#include "ompi/mpi/cxx/win.h"
#include "ompi/mpi/cxx/file.h"
#include "ompi/mpi/cxx/errhandler.h"
#include "ompi/mpi/cxx/intracomm.h"
#include "ompi/mpi/cxx/topology.h" //includes Cartcomm and Graphcomm
#include "ompi/mpi/cxx/intercomm.h"
#include "ompi/mpi/cxx/info.h"
#else
#include "openmpi/ompi/mpi/cxx/exception.h"
#include "openmpi/ompi/mpi/cxx/op.h"
#include "openmpi/ompi/mpi/cxx/status.h"
#include "openmpi/ompi/mpi/cxx/request.h" //includes class Prequest
#include "openmpi/ompi/mpi/cxx/group.h"
#include "openmpi/ompi/mpi/cxx/comm.h"
#include "openmpi/ompi/mpi/cxx/win.h"
#include "openmpi/ompi/mpi/cxx/file.h"
#include "openmpi/ompi/mpi/cxx/errhandler.h"
#include "openmpi/ompi/mpi/cxx/intracomm.h"
#include "openmpi/ompi/mpi/cxx/topology.h" //includes Cartcomm and Graphcomm
#include "openmpi/ompi/mpi/cxx/intercomm.h"
#include "openmpi/ompi/mpi/cxx/info.h"
#endif
// Version string
extern const char ompi_libcxx_version_string[];
}
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
#include "ompi/mpi/cxx/pop_inln.h"
#include "ompi/mpi/cxx/pgroup_inln.h"
#include "ompi/mpi/cxx/pstatus_inln.h"
#include "ompi/mpi/cxx/prequest_inln.h"
#endif
//
// These are the "real" functions, whether prototyping is enabled
// or not. These functions are assigned to either the MPI::XXX class
// or the PMPI::XXX class based on the value of the macro MPI
// which is set in mpi2cxx_config.h.
// If prototyping is enabled, there is a top layer that calls these
// PMPI functions, and this top layer is in the XXX.cc files.
//
/* see note above... */
#ifdef OMPI_BUILDING_CXX_BINDINGS_LIBRARY
#include "ompi/mpi/cxx/datatype_inln.h"
#include "ompi/mpi/cxx/functions_inln.h"
#include "ompi/mpi/cxx/request_inln.h"
#include "ompi/mpi/cxx/comm_inln.h"
#include "ompi/mpi/cxx/intracomm_inln.h"
#include "ompi/mpi/cxx/topology_inln.h"
#include "ompi/mpi/cxx/intercomm_inln.h"
#include "ompi/mpi/cxx/group_inln.h"
#include "ompi/mpi/cxx/op_inln.h"
#include "ompi/mpi/cxx/errhandler_inln.h"
#include "ompi/mpi/cxx/status_inln.h"
#include "ompi/mpi/cxx/info_inln.h"
#include "ompi/mpi/cxx/win_inln.h"
#include "ompi/mpi/cxx/file_inln.h"
#else
#include "openmpi/ompi/mpi/cxx/datatype_inln.h"
#include "openmpi/ompi/mpi/cxx/functions_inln.h"
#include "openmpi/ompi/mpi/cxx/request_inln.h"
#include "openmpi/ompi/mpi/cxx/comm_inln.h"
#include "openmpi/ompi/mpi/cxx/intracomm_inln.h"
#include "openmpi/ompi/mpi/cxx/topology_inln.h"
#include "openmpi/ompi/mpi/cxx/intercomm_inln.h"
#include "openmpi/ompi/mpi/cxx/group_inln.h"
#include "openmpi/ompi/mpi/cxx/op_inln.h"
#include "openmpi/ompi/mpi/cxx/errhandler_inln.h"
#include "openmpi/ompi/mpi/cxx/status_inln.h"
#include "openmpi/ompi/mpi/cxx/info_inln.h"
#include "openmpi/ompi/mpi/cxx/win_inln.h"
#include "openmpi/ompi/mpi/cxx/file_inln.h"
#endif
#endif // #if defined(c_plusplus) || defined(__cplusplus)
#endif // #ifndef MPIPP_H

View file

@@ -1,65 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006-2009 Cisco Systems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
class Op {
public:
// construction
Op();
Op(MPI_Op i);
Op(const Op& op);
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
Op(const PMPI::Op& op) : pmpi_op(op) { }
#endif
// destruction
virtual ~Op();
// assignment
Op& operator=(const Op& op);
Op& operator= (const MPI_Op &i);
// comparison
inline bool operator== (const Op &a);
inline bool operator!= (const Op &a);
// conversion functions for inter-language operability
inline operator MPI_Op () const;
// inline operator MPI_Op* (); //JGS const
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
inline operator const PMPI::Op&() const { return pmpi_op; }
#endif
// Collective Communication
//JGS took const out
virtual void Init(User_function *func, bool commute);
virtual void Free();
virtual void Reduce_local(const void *inbuf, void *inoutbuf, int count,
const MPI::Datatype& datatype) const;
virtual bool Is_commutative(void) const;
#if ! 0 /* OMPI_ENABLE_MPI_PROFILING */
protected:
MPI_Op mpi_op;
#endif
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
private:
PMPI::Op pmpi_op;
#endif
};

View file

@@ -1,149 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006-2009 Cisco Systems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
inline
MPI::Op::Op() { }
inline
MPI::Op::Op(const MPI::Op& o) : pmpi_op(o.pmpi_op) { }
inline
MPI::Op::Op(MPI_Op o) : pmpi_op(o) { }
inline
MPI::Op::~Op() { }
inline
MPI::Op& MPI::Op::operator=(const MPI::Op& op) {
pmpi_op = op.pmpi_op; return *this;
}
// comparison
inline bool
MPI::Op::operator== (const MPI::Op &a) {
return (bool)(pmpi_op == a.pmpi_op);
}
inline bool
MPI::Op::operator!= (const MPI::Op &a) {
return (bool)!(*this == a);
}
// inter-language operability
inline MPI::Op&
MPI::Op::operator= (const MPI_Op &i) { pmpi_op = i; return *this; }
inline
MPI::Op::operator MPI_Op () const { return pmpi_op; }
//inline
//MPI::Op::operator MPI_Op* () { return pmpi_op; }
#else // ============= NO PROFILING ===================================
// construction
inline
MPI::Op::Op() : mpi_op(MPI_OP_NULL) { }
inline
MPI::Op::Op(MPI_Op i) : mpi_op(i) { }
inline
MPI::Op::Op(const MPI::Op& op)
: mpi_op(op.mpi_op) { }
inline
MPI::Op::~Op()
{
#if 0
mpi_op = MPI_OP_NULL;
op_user_function = 0;
#endif
}
inline MPI::Op&
MPI::Op::operator=(const MPI::Op& op) {
mpi_op = op.mpi_op;
return *this;
}
// comparison
inline bool
MPI::Op::operator== (const MPI::Op &a) { return (bool)(mpi_op == a.mpi_op); }
inline bool
MPI::Op::operator!= (const MPI::Op &a) { return (bool)!(*this == a); }
// inter-language operability
inline MPI::Op&
MPI::Op::operator= (const MPI_Op &i) { mpi_op = i; return *this; }
inline
MPI::Op::operator MPI_Op () const { return mpi_op; }
//inline
//MPI::Op::operator MPI_Op* () { return &mpi_op; }
#endif
// Extern this function here rather than include an internal Open MPI
// header file (and therefore force installing the internal Open MPI
// header file so that user apps can #include it)
extern "C" void ompi_op_set_cxx_callback(MPI_Op op, MPI_User_function*);
// There is a lengthy comment in ompi/mpi/cxx/intercepts.cc explaining
// what this function is doing. Please read it before modifying this
// function.
inline void
MPI::Op::Init(MPI::User_function *func, bool commute)
{
(void)MPI_Op_create((MPI_User_function*) ompi_mpi_cxx_op_intercept,
(int) commute, &mpi_op);
ompi_op_set_cxx_callback(mpi_op, (MPI_User_function*) func);
}
inline void
MPI::Op::Free()
{
(void)MPI_Op_free(&mpi_op);
}
inline void
MPI::Op::Reduce_local(const void *inbuf, void *inoutbuf, int count,
const MPI::Datatype& datatype) const
{
(void)MPI_Reduce_local(const_cast<void*>(inbuf), inoutbuf, count,
datatype, mpi_op);
}
inline bool
MPI::Op::Is_commutative(void) const
{
int commute;
(void)MPI_Op_commutative(mpi_op, &commute);
return (bool) commute;
}
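
Init() above registers a single C-level intercept with MPI_Op_create() and stashes the user's C++ function on the op via ompi_op_set_cxx_callback(). A hedged sketch of that intercept pattern, with a hypothetical static variable standing in for Open MPI's real per-op callback lookup:

// Sketch only; the actual intercept and its lookup live in
// ompi/mpi/cxx/intercepts.cc.
static MPI::User_function *stored_cxx_fn = 0;  // hypothetical: set at Init() time

extern "C" void op_intercept_sketch(void *invec, void *inoutvec,
                                    int *len, MPI_Datatype *datatype)
{
    MPI::Datatype cxx_type = *datatype;              // C handle -> C++ object
    stored_cxx_fn(invec, inoutvec, *len, cxx_type);  // forward to user code
}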

View file

@ -1,235 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006-2008 Cisco Systems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
class Request {
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// friend class PMPI::Request;
#endif
public:
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// construction
Request() { }
Request(MPI_Request i) : pmpi_request(i) { }
// copy / assignment
Request(const Request& r) : pmpi_request(r.pmpi_request) { }
Request(const PMPI::Request& r) : pmpi_request(r) { }
virtual ~Request() {}
Request& operator=(const Request& r) {
pmpi_request = r.pmpi_request; return *this; }
// comparison
bool operator== (const Request &a)
{ return (bool)(pmpi_request == a.pmpi_request); }
bool operator!= (const Request &a)
{ return (bool)!(*this == a); }
// inter-language operability
Request& operator= (const MPI_Request &i) {
pmpi_request = i; return *this; }
operator MPI_Request () const { return pmpi_request; }
// operator MPI_Request* () const { return pmpi_request; }
operator const PMPI::Request&() const { return pmpi_request; }
#else
// construction / destruction
Request() : mpi_request(MPI_REQUEST_NULL) { }
virtual ~Request() {}
Request(MPI_Request i) : mpi_request(i) { }
// copy / assignment
Request(const Request& r) : mpi_request(r.mpi_request) { }
Request& operator=(const Request& r) {
mpi_request = r.mpi_request; return *this; }
// comparison
bool operator== (const Request &a)
{ return (bool)(mpi_request == a.mpi_request); }
bool operator!= (const Request &a)
{ return (bool)!(*this == a); }
// inter-language operability
Request& operator= (const MPI_Request &i) {
mpi_request = i; return *this; }
operator MPI_Request () const { return mpi_request; }
// operator MPI_Request* () const { return (MPI_Request*)&mpi_request; }
#endif
//
// Point-to-Point Communication
//
virtual void Wait(Status &status);
virtual void Wait();
virtual bool Test(Status &status);
virtual bool Test();
virtual void Free(void);
static int Waitany(int count, Request array[], Status& status);
static int Waitany(int count, Request array[]);
static bool Testany(int count, Request array[], int& index, Status& status);
static bool Testany(int count, Request array[], int& index);
static void Waitall(int count, Request req_array[], Status stat_array[]);
static void Waitall(int count, Request req_array[]);
static bool Testall(int count, Request req_array[], Status stat_array[]);
static bool Testall(int count, Request req_array[]);
static int Waitsome(int incount, Request req_array[],
int array_of_indices[], Status stat_array[]) ;
static int Waitsome(int incount, Request req_array[],
int array_of_indices[]);
static int Testsome(int incount, Request req_array[],
int array_of_indices[], Status stat_array[]);
static int Testsome(int incount, Request req_array[],
int array_of_indices[]);
virtual void Cancel(void) const;
virtual bool Get_status(Status& status) const;
virtual bool Get_status() const;
protected:
#if ! 0 /* OMPI_ENABLE_MPI_PROFILING */
MPI_Request mpi_request;
#endif
private:
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
PMPI::Request pmpi_request;
#endif
};
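
Typical nonblocking usage of the class above (peer rank and tag are illustrative):

char buf[64];
MPI::Status status;
MPI::Request req = MPI::COMM_WORLD.Irecv(buf, 64, MPI::CHAR, 0, 99);
// ... overlap communication with computation ...
if (!req.Test(status)) {
    req.Wait(status);  // block until the receive completes
}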
class Prequest : public Request {
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// friend class PMPI::Prequest;
#endif
public:
Prequest() { }
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
Prequest(const Request& p) : Request(p), pmpi_request(p) { }
Prequest(const PMPI::Prequest& r) :
Request((const PMPI::Request&)r),
pmpi_request(r) { }
Prequest(const MPI_Request &i) : Request(i), pmpi_request(i) { }
virtual ~Prequest() { }
Prequest& operator=(const Request& r) {
Request::operator=(r);
pmpi_request = (PMPI::Prequest)r; return *this; }
Prequest& operator=(const Prequest& r) {
Request::operator=(r);
pmpi_request = r.pmpi_request; return *this; }
#else
Prequest(const Request& p) : Request(p) { }
Prequest(const MPI_Request &i) : Request(i) { }
virtual ~Prequest() { }
Prequest& operator=(const Request& r) {
mpi_request = r; return *this; }
Prequest& operator=(const Prequest& r) {
mpi_request = r.mpi_request; return *this; }
#endif
virtual void Start();
static void Startall(int count, Prequest array_of_requests[]);
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
private:
PMPI::Prequest pmpi_request;
#endif
};
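
And the persistent-request pattern Prequest supported (destination, tag, and trip count are illustrative):

char buf[64];
MPI::Prequest preq = MPI::COMM_WORLD.Send_init(buf, 64, MPI::CHAR, 1, 42);
for (int iter = 0; iter < 10; ++iter) {
    preq.Start();  // re-arm the persistent send
    preq.Wait();   // completion method inherited from MPI::Request
}
preq.Free();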
//
// Generalized requests
//
class Grequest : public MPI::Request {
public:
typedef int Query_function(void *, Status&);
typedef int Free_function(void *);
typedef int Cancel_function(void *, bool);
Grequest() {}
Grequest(const Request& req) : Request(req) {}
Grequest(const MPI_Request &req) : Request(req) {}
virtual ~Grequest() {}
Grequest& operator=(const Request& req) {
mpi_request = req; return(*this);
}
Grequest& operator=(const Grequest& req) {
mpi_request = req.mpi_request; return(*this);
}
static Grequest Start(Query_function *, Free_function *,
Cancel_function *, void *);
virtual void Complete();
//
// Type used for intercepting Generalized requests in the C++ layer so
// that the type can be converted to C++ types before invoking the
// user-specified C++ callbacks.
//
struct Intercept_data_t {
void *id_extra;
Grequest::Query_function *id_cxx_query_fn;
Grequest::Free_function *id_cxx_free_fn;
Grequest::Cancel_function *id_cxx_cancel_fn;
};
};
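
A minimal sketch of the generalized-request flow these types enabled; the three callbacks here just report an empty, uncancelled operation:

int query_fn(void *extra, MPI::Status &status)
{
    status.Set_elements(MPI::BYTE, 0);  // no data transferred
    status.Set_cancelled(false);
    return MPI::SUCCESS;
}
int free_fn(void *extra) { return MPI::SUCCESS; }
int cancel_fn(void *extra, bool complete) { return MPI::SUCCESS; }

MPI::Grequest greq = MPI::Grequest::Start(query_fn, free_fn, cancel_fn, NULL);
// ... later, when the user-level operation actually finishes:
greq.Complete();
greq.Wait();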

View file

@ -1,366 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006-2008 Cisco Systems, Inc. All rights reserved.
// Copyright (c) 2007 Sun Microsystems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
//
// Point-to-Point Communication
//
inline void
MPI::Request::Wait(MPI::Status &status)
{
(void)MPI_Wait(&mpi_request, &status.mpi_status);
}
inline void
MPI::Request::Wait()
{
(void)MPI_Wait(&mpi_request, MPI_STATUS_IGNORE);
}
inline void
MPI::Request::Free()
{
(void)MPI_Request_free(&mpi_request);
}
inline bool
MPI::Request::Test(MPI::Status &status)
{
int t;
(void)MPI_Test(&mpi_request, &t, &status.mpi_status);
return OPAL_INT_TO_BOOL(t);
}
inline bool
MPI::Request::Test()
{
int t;
(void)MPI_Test(&mpi_request, &t, MPI_STATUS_IGNORE);
return OPAL_INT_TO_BOOL(t);
}
inline int
MPI::Request::Waitany(int count, MPI::Request array[],
MPI::Status& status)
{
int index, i;
MPI_Request* array_of_requests = new MPI_Request[count];
for (i=0; i < count; i++) {
array_of_requests[i] = array[i];
}
(void)MPI_Waitany(count, array_of_requests, &index, &status.mpi_status);
for (i=0; i < count; i++) {
array[i] = array_of_requests[i];
}
delete [] array_of_requests;
return index;
}
inline int
MPI::Request::Waitany(int count, MPI::Request array[])
{
int index, i;
MPI_Request* array_of_requests = new MPI_Request[count];
for (i=0; i < count; i++) {
array_of_requests[i] = array[i];
}
(void)MPI_Waitany(count, array_of_requests, &index, MPI_STATUS_IGNORE);
for (i=0; i < count; i++) {
array[i] = array_of_requests[i];
}
delete [] array_of_requests;
return index; //JGS, Waitany return value
}
inline bool
MPI::Request::Testany(int count, MPI::Request array[],
int& index, MPI::Status& status)
{
int i, flag;
MPI_Request* array_of_requests = new MPI_Request[count];
for (i=0; i < count; i++) {
array_of_requests[i] = array[i];
}
(void)MPI_Testany(count, array_of_requests, &index, &flag, &status.mpi_status);
for (i=0; i < count; i++) {
array[i] = array_of_requests[i];
}
delete [] array_of_requests;
return OPAL_INT_TO_BOOL(flag);
}
inline bool
MPI::Request::Testany(int count, MPI::Request array[], int& index)
{
int i, flag;
MPI_Request* array_of_requests = new MPI_Request[count];
for (i=0; i < count; i++) {
array_of_requests[i] = array[i];
}
(void)MPI_Testany(count, array_of_requests, &index, &flag,
MPI_STATUS_IGNORE);
for (i=0; i < count; i++) {
array[i] = array_of_requests[i];
}
delete [] array_of_requests;
return OPAL_INT_TO_BOOL(flag);
}
inline void
MPI::Request::Waitall(int count, MPI::Request req_array[],
MPI::Status stat_array[])
{
int i;
MPI_Request* array_of_requests = new MPI_Request[count];
MPI_Status* array_of_statuses = new MPI_Status[count];
for (i=0; i < count; i++) {
array_of_requests[i] = req_array[i];
}
(void)MPI_Waitall(count, array_of_requests, array_of_statuses);
for (i=0; i < count; i++) {
req_array[i] = array_of_requests[i];
stat_array[i] = array_of_statuses[i];
}
delete [] array_of_requests;
delete [] array_of_statuses;
}
inline void
MPI::Request::Waitall(int count, MPI::Request req_array[])
{
int i;
MPI_Request* array_of_requests = new MPI_Request[count];
for (i=0; i < count; i++) {
array_of_requests[i] = req_array[i];
}
(void)MPI_Waitall(count, array_of_requests, MPI_STATUSES_IGNORE);
for (i=0; i < count; i++) {
req_array[i] = array_of_requests[i];
}
delete [] array_of_requests;
}
inline bool
MPI::Request::Testall(int count, MPI::Request req_array[],
MPI::Status stat_array[])
{
int i, flag;
MPI_Request* array_of_requests = new MPI_Request[count];
MPI_Status* array_of_statuses = new MPI_Status[count];
for (i=0; i < count; i++) {
array_of_requests[i] = req_array[i];
}
(void)MPI_Testall(count, array_of_requests, &flag, array_of_statuses);
for (i=0; i < count; i++) {
req_array[i] = array_of_requests[i];
stat_array[i] = array_of_statuses[i];
}
delete [] array_of_requests;
delete [] array_of_statuses;
return OPAL_INT_TO_BOOL(flag);
}
inline bool
MPI::Request::Testall(int count, MPI::Request req_array[])
{
int i, flag;
MPI_Request* array_of_requests = new MPI_Request[count];
for (i=0; i < count; i++) {
array_of_requests[i] = req_array[i];
}
(void)MPI_Testall(count, array_of_requests, &flag, MPI_STATUSES_IGNORE);
for (i=0; i < count; i++) {
req_array[i] = array_of_requests[i];
}
delete [] array_of_requests;
return OPAL_INT_TO_BOOL(flag);
}
inline int
MPI::Request::Waitsome(int incount, MPI::Request req_array[],
int array_of_indices[], MPI::Status stat_array[])
{
int i, outcount;
MPI_Request* array_of_requests = new MPI_Request[incount];
MPI_Status* array_of_statuses = new MPI_Status[incount];
for (i=0; i < incount; i++) {
array_of_requests[i] = req_array[i];
}
(void)MPI_Waitsome(incount, array_of_requests, &outcount,
array_of_indices, array_of_statuses);
for (i=0; i < incount; i++) {
req_array[i] = array_of_requests[i];
stat_array[i] = array_of_statuses[i];
}
delete [] array_of_requests;
delete [] array_of_statuses;
return outcount;
}
inline int
MPI::Request::Waitsome(int incount, MPI::Request req_array[],
int array_of_indices[])
{
int i, outcount;
MPI_Request* array_of_requests = new MPI_Request[incount];
for (i=0; i < incount; i++) {
array_of_requests[i] = req_array[i];
}
(void)MPI_Waitsome(incount, array_of_requests, &outcount,
array_of_indices, MPI_STATUSES_IGNORE);
for (i=0; i < incount; i++) {
req_array[i] = array_of_requests[i];
}
delete [] array_of_requests;
return outcount;
}
inline int
MPI::Request::Testsome(int incount, MPI::Request req_array[],
int array_of_indices[], MPI::Status stat_array[])
{
int i, outcount;
MPI_Request* array_of_requests = new MPI_Request[incount];
MPI_Status* array_of_statuses = new MPI_Status[incount];
for (i=0; i < incount; i++) {
array_of_requests[i] = req_array[i];
}
(void)MPI_Testsome(incount, array_of_requests, &outcount,
array_of_indices, array_of_statuses);
for (i=0; i < incount; i++) {
req_array[i] = array_of_requests[i];
stat_array[i] = array_of_statuses[i];
}
delete [] array_of_requests;
delete [] array_of_statuses;
return outcount;
}
inline int
MPI::Request::Testsome(int incount, MPI::Request req_array[],
int array_of_indices[])
{
int i, outcount;
MPI_Request* array_of_requests = new MPI_Request[incount];
for (i=0; i < incount; i++) {
array_of_requests[i] = req_array[i];
}
(void)MPI_Testsome(incount, array_of_requests, &outcount,
array_of_indices, MPI_STATUSES_IGNORE);
for (i=0; i < incount; i++) {
req_array[i] = array_of_requests[i];
}
delete [] array_of_requests;
return outcount;
}
inline void
MPI::Request::Cancel(void) const
{
(void)MPI_Cancel(const_cast<MPI_Request *>(&mpi_request));
}
inline void
MPI::Prequest::Start()
{
(void)MPI_Start(&mpi_request);
}
inline void
MPI::Prequest::Startall(int count, MPI::Prequest array_of_requests[])
{
//convert the array of Prequests to an array of MPI_requests
MPI_Request* mpi_requests = new MPI_Request[count];
int i;
for (i=0; i < count; i++) {
mpi_requests[i] = array_of_requests[i];
}
(void)MPI_Startall(count, mpi_requests);
for (i=0; i < count; i++) {
array_of_requests[i].mpi_request = mpi_requests[i] ;
}
delete [] mpi_requests;
}
inline bool MPI::Request::Get_status(MPI::Status& status) const
{
int flag = 0;
MPI_Status c_status;
// Call the underlying MPI function rather than simply returning
// status.mpi_status because we may have to invoke the generalized
// request query function
(void)MPI_Request_get_status(mpi_request, &flag, &c_status);
if (flag) {
status = c_status;
}
return OPAL_INT_TO_BOOL(flag);
}
inline bool MPI::Request::Get_status() const
{
int flag;
// Call the underlying MPI function rather than simply returning
// status.mpi_status because we may have to invoke the generalized
// request query function
(void)MPI_Request_get_status(mpi_request, &flag, MPI_STATUS_IGNORE);
return OPAL_INT_TO_BOOL(flag);
}
inline MPI::Grequest
MPI::Grequest::Start(Query_function *query_fn, Free_function *free_fn,
Cancel_function *cancel_fn, void *extra)
{
MPI_Request grequest = 0;
Intercept_data_t *new_extra =
new MPI::Grequest::Intercept_data_t;
new_extra->id_extra = extra;
new_extra->id_cxx_query_fn = query_fn;
new_extra->id_cxx_free_fn = free_fn;
new_extra->id_cxx_cancel_fn = cancel_fn;
(void) MPI_Grequest_start(ompi_mpi_cxx_grequest_query_fn_intercept,
ompi_mpi_cxx_grequest_free_fn_intercept,
ompi_mpi_cxx_grequest_cancel_fn_intercept,
new_extra, &grequest);
return(grequest);
}
inline void
MPI::Grequest::Complete()
{
(void) MPI_Grequest_complete(mpi_request);
}
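
Each array operation above copies the wrappers into a temporary MPI_Request[] and back, so user code could keep plain arrays of the C++ objects (buffers and peer are illustrative):

int sbuf[10] = {0}, rbuf[10];
MPI::Request reqs[2];
MPI::Status stats[2];
reqs[0] = MPI::COMM_WORLD.Irecv(rbuf, 10, MPI::INT, 0, 7);
reqs[1] = MPI::COMM_WORLD.Isend(sbuf, 10, MPI::INT, 0, 7);
MPI::Request::Waitall(2, reqs, stats);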

View file

@ -1,115 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006-2008 Cisco Systems, Inc. All rights reserved.
// Copyright (c) 2017 Research Organization for Information Science
// and Technology (RIST). All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
class Status {
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// friend class PMPI::Status;
#endif
friend class MPI::Comm; //so I can access pmpi_status data member in comm.cc
friend class MPI::Request; //and also from request.cc
friend class MPI::File;
public:
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// construction / destruction
Status() { }
virtual ~Status() {}
// copy / assignment
Status(const Status& data) : pmpi_status(data.pmpi_status) { }
Status(const MPI_Status &i) : pmpi_status(i) { }
Status& operator=(const Status& data) {
pmpi_status = data.pmpi_status; return *this; }
// comparison, don't need for status
// inter-language operability
Status& operator= (const MPI_Status &i) {
pmpi_status = i; return *this; }
operator MPI_Status () const { return pmpi_status; }
// operator MPI_Status* () const { return pmpi_status; }
operator const PMPI::Status&() const { return pmpi_status; }
#else
Status() : mpi_status() { }
// copy
Status(const Status& data) : mpi_status(data.mpi_status) { }
Status(const MPI_Status &i) : mpi_status(i) { }
virtual ~Status() {}
Status& operator=(const Status& data) {
mpi_status = data.mpi_status; return *this; }
// comparison, don't need for status
// inter-language operability
Status& operator= (const MPI_Status &i) {
mpi_status = i; return *this; }
operator MPI_Status () const { return mpi_status; }
// operator MPI_Status* () const { return (MPI_Status*)&mpi_status; }
#endif
//
// Point-to-Point Communication
//
virtual int Get_count(const Datatype& datatype) const;
virtual bool Is_cancelled() const;
virtual int Get_elements(const Datatype& datatype) const;
//
// Status Access
//
virtual int Get_source() const;
virtual void Set_source(int source);
virtual int Get_tag() const;
virtual void Set_tag(int tag);
virtual int Get_error() const;
virtual void Set_error(int error);
virtual void Set_elements(const MPI::Datatype& datatype, int count);
virtual void Set_cancelled(bool flag);
protected:
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
PMPI::Status pmpi_status;
#else
MPI_Status mpi_status;
#endif
};
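
Illustrative use of the accessors above after a blocking receive:

int buf[10];
MPI::Status status;
MPI::COMM_WORLD.Recv(buf, 10, MPI::INT, MPI::ANY_SOURCE, MPI::ANY_TAG, status);
int src   = status.Get_source();         // rank that sent the message
int tag   = status.Get_tag();
int count = status.Get_count(MPI::INT);  // number of ints received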

View file

@ -1,105 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006 Cisco Systems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
//
// Point-to-Point Communication
//
inline int
MPI::Status::Get_count(const MPI::Datatype& datatype) const
{
int count;
(void)MPI_Get_count(const_cast<MPI_Status*>(&mpi_status), datatype, &count);
return count;
}
inline bool
MPI::Status::Is_cancelled() const
{
int t;
(void)MPI_Test_cancelled(const_cast<MPI_Status*>(&mpi_status), &t);
return OPAL_INT_TO_BOOL(t);
}
inline int
MPI::Status::Get_elements(const MPI::Datatype& datatype) const
{
int count;
(void)MPI_Get_elements(const_cast<MPI_Status*>(&mpi_status), datatype, &count);
return count;
}
//
// Status Access
//
inline int
MPI::Status::Get_source() const
{
int source;
source = mpi_status.MPI_SOURCE;
return source;
}
inline void
MPI::Status::Set_source(int source)
{
mpi_status.MPI_SOURCE = source;
}
inline int
MPI::Status::Get_tag() const
{
int tag;
tag = mpi_status.MPI_TAG;
return tag;
}
inline void
MPI::Status::Set_tag(int tag)
{
mpi_status.MPI_TAG = tag;
}
inline int
MPI::Status::Get_error() const
{
int error;
error = mpi_status.MPI_ERROR;
return error;
}
inline void
MPI::Status::Set_error(int error)
{
mpi_status.MPI_ERROR = error;
}
inline void
MPI::Status::Set_elements(const MPI::Datatype& datatype, int count)
{
MPI_Status_set_elements(&mpi_status, datatype, count);
}
inline void
MPI::Status::Set_cancelled(bool flag)
{
MPI_Status_set_cancelled(&mpi_status, (int) flag);
}

View file

@ -1,167 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2011 FUJITSU LIMITED. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
class Cartcomm : public Intracomm {
public:
// construction
Cartcomm() { }
// copy
Cartcomm(const Comm_Null& data) : Intracomm(data) { }
// inter-language operability
inline Cartcomm(const MPI_Comm& data);
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
Cartcomm(const Cartcomm& data) : Intracomm(data), pmpi_comm(data) { }
Cartcomm(const PMPI::Cartcomm& d) :
Intracomm((const PMPI::Intracomm&)d),
pmpi_comm(d) { }
// assignment
Cartcomm& operator=(const Cartcomm& data) {
Intracomm::operator=(data);
pmpi_comm = data.pmpi_comm; return *this; }
Cartcomm& operator=(const Comm_Null& data) {
Intracomm::operator=(data);
pmpi_comm = (PMPI::Cartcomm)data; return *this; }
// inter-language operability
Cartcomm& operator=(const MPI_Comm& data) {
Intracomm::operator=(data);
pmpi_comm = data; return *this; }
#else
Cartcomm(const Cartcomm& data) : Intracomm(data.mpi_comm) { }
// assignment
Cartcomm& operator=(const Cartcomm& data) {
mpi_comm = data.mpi_comm; return *this; }
Cartcomm& operator=(const Comm_Null& data) {
mpi_comm = data; return *this; }
// inter-language operability
Cartcomm& operator=(const MPI_Comm& data) {
mpi_comm = data; return *this; }
#endif
//
// Groups, Contexts, and Communicators
//
Cartcomm Dup() const;
virtual Cartcomm& Clone() const;
//
// Groups, Contexts, and Communicators
//
virtual int Get_dim() const;
virtual void Get_topo(int maxdims, int dims[], bool periods[],
int coords[]) const;
virtual int Get_cart_rank(const int coords[]) const;
virtual void Get_coords(int rank, int maxdims, int coords[]) const;
virtual void Shift(int direction, int disp,
int &rank_source, int &rank_dest) const;
virtual Cartcomm Sub(const bool remain_dims[]) const;
virtual int Map(int ndims, const int dims[], const bool periods[]) const;
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
private:
PMPI::Cartcomm pmpi_comm;
#endif
};
//===================================================================
// Class Graphcomm
//===================================================================
class Graphcomm : public Intracomm {
public:
// construction
Graphcomm() { }
// copy
Graphcomm(const Comm_Null& data) : Intracomm(data) { }
// inter-language operability
inline Graphcomm(const MPI_Comm& data);
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
Graphcomm(const Graphcomm& data) : Intracomm(data), pmpi_comm(data) { }
Graphcomm(const PMPI::Graphcomm& d) :
Intracomm((const PMPI::Intracomm&)d), pmpi_comm(d) { }
// assignment
Graphcomm& operator=(const Graphcomm& data) {
Intracomm::operator=(data);
pmpi_comm = data.pmpi_comm; return *this; }
Graphcomm& operator=(const Comm_Null& data) {
Intracomm::operator=(data);
pmpi_comm = (PMPI::Graphcomm)data; return *this; }
// inter-language operability
Graphcomm& operator=(const MPI_Comm& data) {
Intracomm::operator=(data);
pmpi_comm = data; return *this; }
#else
Graphcomm(const Graphcomm& data) : Intracomm(data.mpi_comm) { }
// assignment
Graphcomm& operator=(const Graphcomm& data) {
mpi_comm = data.mpi_comm; return *this; }
Graphcomm& operator=(const Comm_Null& data) {
mpi_comm = data; return *this; }
// inter-language operability
Graphcomm& operator=(const MPI_Comm& data) {
mpi_comm = data; return *this; }
#endif
//
// Groups, Contexts, and Communicators
//
Graphcomm Dup() const;
virtual Graphcomm& Clone() const;
//
// Process Topologies
//
virtual void Get_dims(int nnodes[], int nedges[]) const;
virtual void Get_topo(int maxindex, int maxedges, int index[],
int edges[]) const;
virtual int Get_neighbors_count(int rank) const;
virtual void Get_neighbors(int rank, int maxneighbors,
int neighbors[]) const;
virtual int Map(int nnodes, const int index[],
const int edges[]) const;
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
private:
PMPI::Graphcomm pmpi_comm;
#endif
};
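
A usage sketch for these topology classes (a 2-D grid with illustrative periodicity):

int dims[2] = {0, 0};
bool periods[2] = {true, false};
MPI::Compute_dims(MPI::COMM_WORLD.Get_size(), 2, dims);
MPI::Cartcomm cart = MPI::COMM_WORLD.Create_cart(2, dims, periods, true);
int coords[2];
cart.Get_coords(cart.Get_rank(), 2, coords);
int src, dest;
cart.Shift(0, 1, src, dest);  // ranks of the neighbors along dimension 0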

View file

@ -1,220 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2007 Sun Microsystems, Inc. All rights reserved.
// Copyright (c) 2011 FUJITSU LIMITED. All rights reserved.
// Copyright (c) 2016 Cisco Systems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
//
// ======== Cartcomm member functions ========
//
inline
MPI::Cartcomm::Cartcomm(const MPI_Comm& data) {
int status = 0;
if (MPI::Is_initialized() && (data != MPI_COMM_NULL)) {
(void)MPI_Topo_test(data, &status);
if (status == MPI_CART)
mpi_comm = data;
else
mpi_comm = MPI_COMM_NULL;
}
else {
mpi_comm = data;
}
}
//
// Groups, Contexts, and Communicators
//
inline MPI::Cartcomm
MPI::Cartcomm::Dup() const
{
MPI_Comm newcomm;
(void)MPI_Comm_dup(mpi_comm, &newcomm);
return newcomm;
}
//
// Process Topologies
//
inline int
MPI::Cartcomm::Get_dim() const
{
int ndims;
(void)MPI_Cartdim_get(mpi_comm, &ndims);
return ndims;
}
inline void
MPI::Cartcomm::Get_topo(int maxdims, int dims[], bool periods[],
int coords[]) const
{
int *int_periods = new int [maxdims];
int i;
for (i=0; i<maxdims; i++) {
int_periods[i] = (int)periods[i];
}
(void)MPI_Cart_get(mpi_comm, maxdims, dims, int_periods, coords);
for (i=0; i<maxdims; i++) {
periods[i] = OPAL_INT_TO_BOOL(int_periods[i]);
}
delete [] int_periods;
}
inline int
MPI::Cartcomm::Get_cart_rank(const int coords[]) const
{
int myrank;
(void)MPI_Cart_rank(mpi_comm, const_cast<int *>(coords), &myrank);
return myrank;
}
inline void
MPI::Cartcomm::Get_coords(int rank, int maxdims, int coords[]) const
{
(void)MPI_Cart_coords(mpi_comm, rank, maxdims, coords);
}
inline void
MPI::Cartcomm::Shift(int direction, int disp,
int &rank_source, int &rank_dest) const
{
(void)MPI_Cart_shift(mpi_comm, direction, disp, &rank_source, &rank_dest);
}
inline MPI::Cartcomm
MPI::Cartcomm::Sub(const bool remain_dims[]) const
{
int ndims;
MPI_Cartdim_get(mpi_comm, &ndims);
int* int_remain_dims = new int[ndims];
for (int i=0; i<ndims; i++) {
int_remain_dims[i] = (int)remain_dims[i];
}
MPI_Comm newcomm;
(void)MPI_Cart_sub(mpi_comm, int_remain_dims, &newcomm);
delete [] int_remain_dims;
return newcomm;
}
inline int
MPI::Cartcomm::Map(int ndims, const int dims[], const bool periods[]) const
{
int *int_periods = new int [ndims];
for (int i=0; i<ndims; i++) {
int_periods[i] = (int) periods[i];
}
int newrank;
(void)MPI_Cart_map(mpi_comm, ndims, const_cast<int *>(dims), int_periods, &newrank);
delete [] int_periods;
return newrank;
}
inline MPI::Cartcomm&
MPI::Cartcomm::Clone() const
{
MPI_Comm newcomm;
(void)MPI_Comm_dup(mpi_comm, &newcomm);
MPI::Cartcomm* dup = new MPI::Cartcomm(newcomm);
return *dup;
}
//
// ======== Graphcomm member functions ========
//
inline
MPI::Graphcomm::Graphcomm(const MPI_Comm& data) {
int status = 0;
if (MPI::Is_initialized() && (data != MPI_COMM_NULL)) {
(void)MPI_Topo_test(data, &status);
if (status == MPI_GRAPH)
mpi_comm = data;
else
mpi_comm = MPI_COMM_NULL;
}
else {
mpi_comm = data;
}
}
//
// Groups, Contexts, and Communicators
//
inline MPI::Graphcomm
MPI::Graphcomm::Dup() const
{
MPI_Comm newcomm;
(void)MPI_Comm_dup(mpi_comm, &newcomm);
return newcomm;
}
//
// Process Topologies
//
inline void
MPI::Graphcomm::Get_dims(int nnodes[], int nedges[]) const
{
(void)MPI_Graphdims_get(mpi_comm, nnodes, nedges);
}
inline void
MPI::Graphcomm::Get_topo(int maxindex, int maxedges, int index[],
int edges[]) const
{
(void)MPI_Graph_get(mpi_comm, maxindex, maxedges, index, edges);
}
inline int
MPI::Graphcomm::Get_neighbors_count(int rank) const
{
int nneighbors;
(void)MPI_Graph_neighbors_count(mpi_comm, rank, &nneighbors);
return nneighbors;
}
inline void
MPI::Graphcomm::Get_neighbors(int rank, int maxneighbors,
int neighbors[]) const
{
(void)MPI_Graph_neighbors(mpi_comm, rank, maxneighbors, neighbors);
}
inline int
MPI::Graphcomm::Map(int nnodes, const int index[],
const int edges[]) const
{
int newrank;
(void)MPI_Graph_map(mpi_comm, nnodes, const_cast<int *>(index), const_cast<int *>(edges), &newrank);
return newrank;
}
inline MPI::Graphcomm&
MPI::Graphcomm::Clone() const
{
MPI_Comm newcomm;
(void)MPI_Comm_dup(mpi_comm, &newcomm);
MPI::Graphcomm* dup = new MPI::Graphcomm(newcomm);
return *dup;
}

View file

@ -1,113 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2006-2016 Los Alamos National Security, LLC. All rights
// reserved.
// Copyright (c) 2007-2008 Sun Microsystems, Inc. All rights reserved.
// Copyright (c) 2007-2009 Cisco Systems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
// do not include ompi_config.h because it kills the free/malloc defines
#include "mpi.h"
#include "ompi/constants.h"
#include "ompi/mpi/cxx/mpicxx.h"
#include "cxx_glue.h"
void
MPI::Win::Free()
{
(void) MPI_Win_free(&mpi_win);
}
// This function needs some internal OMPI types, so it's not inlined
MPI::Errhandler
MPI::Win::Create_errhandler(MPI::Win::Errhandler_function* function)
{
return ompi_cxx_errhandler_create_win ((ompi_cxx_dummy_fn_t *) function);
}
int
MPI::Win::do_create_keyval(MPI_Win_copy_attr_function* c_copy_fn,
MPI_Win_delete_attr_function* c_delete_fn,
Copy_attr_function* cxx_copy_fn,
Delete_attr_function* cxx_delete_fn,
void* extra_state, int &keyval)
{
int ret, count = 0;
keyval_intercept_data_t *cxx_extra_state;
// If both the callbacks are C, then do the simple thing -- no
// need for all the C++ machinery.
if (NULL != c_copy_fn && NULL != c_delete_fn) {
ret = ompi_cxx_attr_create_keyval_win (c_copy_fn, c_delete_fn, &keyval,
extra_state, 0, NULL);
if (MPI_SUCCESS != ret) {
return ompi_cxx_errhandler_invoke_comm (MPI_COMM_WORLD, ret,
"MPI::Win::Create_keyval");
}
// The all-C fast path is complete; skip the C++ machinery below.
return MPI_SUCCESS;
}
// If either callback is C++, then we have to use the C++
// callbacks for both, because we have to generate a new
// extra_state. And since we only get one extra_state (i.e., we
// don't get one extra_state for the copy callback and another
// extra_state for the delete callback), we have to use the C++
// callbacks for both (and therefore translate the C++-special
// extra_state into the user's original extra_state).
cxx_extra_state = (keyval_intercept_data_t*)
malloc(sizeof(keyval_intercept_data_t));
if (NULL == cxx_extra_state) {
return ompi_cxx_errhandler_invoke_comm (MPI_COMM_WORLD, MPI_ERR_NO_MEM,
"MPI::Win::Create_keyval");
}
cxx_extra_state->c_copy_fn = c_copy_fn;
cxx_extra_state->cxx_copy_fn = cxx_copy_fn;
cxx_extra_state->c_delete_fn = c_delete_fn;
cxx_extra_state->cxx_delete_fn = cxx_delete_fn;
cxx_extra_state->extra_state = extra_state;
// Error check. Must have exactly 2 non-NULL function pointers.
if (NULL != c_copy_fn) {
++count;
}
if (NULL != c_delete_fn) {
++count;
}
if (NULL != cxx_copy_fn) {
++count;
}
if (NULL != cxx_delete_fn) {
++count;
}
if (2 != count) {
free(cxx_extra_state);
return ompi_cxx_errhandler_invoke_comm (MPI_COMM_WORLD, MPI_ERR_ARG,
"MPI::Win::Create_keyval");
}
// We do not call MPI_Win_create_keyval() here because we need to
// pass in a special destructor to the backend keyval creation
// that gets invoked when the keyval's reference count goes to 0
// and is finally destroyed (i.e., clean up some caching/lookup
// data here in the C++ bindings layer). This destructor is
// *only* used in the C++ bindings, so it's not set by the C
// MPI_Win_create_keyval(). Hence, we do all the work here (and
// ensure to set the destructor atomically when the keyval is
// created).
ret = ompi_cxx_attr_create_keyval_win ((MPI_Win_copy_attr_function *) ompi_mpi_cxx_win_copy_attr_intercept,
ompi_mpi_cxx_win_delete_attr_intercept, &keyval,
cxx_extra_state, 0, NULL);
if (OMPI_SUCCESS != ret) {
return ompi_cxx_errhandler_invoke_comm (MPI_COMM_WORLD, ret,
"MPI::Win::Create_keyval");
}
return MPI_SUCCESS;
}

View file

@ -1,212 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2006-2009 Cisco Systems, Inc. All rights reserved.
// Copyright (c) 2007 Sun Microsystems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
class Win {
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// friend class P;
#endif
friend class MPI::Comm; //so I can access pmpi_win data member in comm.cc
friend class MPI::Request; //and also from request.cc
public:
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
// construction / destruction
Win() { }
virtual ~Win() { }
// copy / assignment
Win(const Win& data) : pmpi_win(data.pmpi_win) { }
Win(MPI_Win i) : pmpi_win(i) { }
Win& operator=(const Win& data) {
pmpi_win = data.pmpi_win; return *this; }
// comparison, don't need for win
// inter-language operability
Win& operator= (const MPI_Win &i) {
pmpi_win = i; return *this; }
operator MPI_Win () const { return pmpi_win; }
// operator MPI_Win* () const { return pmpi_win; }
operator const PMPI::Win&() const { return pmpi_win; }
#else
Win() : mpi_win(MPI_WIN_NULL) { }
// copy
Win(const Win& data) : mpi_win(data.mpi_win) { }
Win(MPI_Win i) : mpi_win(i) { }
virtual ~Win() { }
Win& operator=(const Win& data) {
mpi_win = data.mpi_win; return *this; }
// comparison, don't need for win
// inter-language operability
Win& operator= (const MPI_Win &i) {
mpi_win = i; return *this; }
operator MPI_Win () const { return mpi_win; }
// operator MPI_Win* () const { return (MPI_Win*)&mpi_win; }
#endif
//
// User defined functions
//
typedef int Copy_attr_function(const Win& oldwin, int win_keyval,
void* extra_state, void* attribute_val_in,
void* attribute_val_out, bool& flag);
typedef int Delete_attr_function(Win& win, int win_keyval,
void* attribute_val, void* extra_state);
typedef void Errhandler_function(Win &, int *, ... );
typedef Errhandler_function Errhandler_fn
__mpi_interface_deprecated__("MPI::Win::Errhandler_fn was deprecated in MPI-2.2; use MPI::Win::Errhandler_function instead");
//
// Errhandler
//
static MPI::Errhandler Create_errhandler(Errhandler_function* function);
virtual void Set_errhandler(const MPI::Errhandler& errhandler) const;
virtual MPI::Errhandler Get_errhandler() const;
//
// One sided communication
//
virtual void Accumulate(const void* origin_addr, int origin_count,
const MPI::Datatype& origin_datatype,
int target_rank, MPI::Aint target_disp,
int target_count,
const MPI::Datatype& target_datatype,
const MPI::Op& op) const;
virtual void Complete() const;
static Win Create(const void* base, MPI::Aint size, int disp_unit,
const MPI::Info& info, const MPI::Intracomm& comm);
virtual void Fence(int assert) const;
virtual void Free();
virtual void Get(const void *origin_addr, int origin_count,
const MPI::Datatype& origin_datatype, int target_rank,
MPI::Aint target_disp, int target_count,
const MPI::Datatype& target_datatype) const;
virtual MPI::Group Get_group() const;
virtual void Lock(int lock_type, int rank, int assert) const;
virtual void Post(const MPI::Group& group, int assert) const;
virtual void Put(const void* origin_addr, int origin_count,
const MPI::Datatype& origin_datatype, int target_rank,
MPI::Aint target_disp, int target_count,
const MPI::Datatype& target_datatype) const;
virtual void Start(const MPI::Group& group, int assert) const;
virtual bool Test() const;
virtual void Unlock(int rank) const;
virtual void Wait() const;
//
// External Interfaces
//
virtual void Call_errhandler(int errorcode) const;
// Need 4 overloaded versions of this function because per the
// MPI-2 spec, you can mix-n-match the C predefined functions with
// C++ functions.
static int Create_keyval(Copy_attr_function* win_copy_attr_fn,
Delete_attr_function* win_delete_attr_fn,
void* extra_state);
static int Create_keyval(MPI_Win_copy_attr_function* win_copy_attr_fn,
MPI_Win_delete_attr_function* win_delete_attr_fn,
void* extra_state);
static int Create_keyval(Copy_attr_function* win_copy_attr_fn,
MPI_Win_delete_attr_function* win_delete_attr_fn,
void* extra_state);
static int Create_keyval(MPI_Win_copy_attr_function* win_copy_attr_fn,
Delete_attr_function* win_delete_attr_fn,
void* extra_state);
protected:
// Back-end function to do the heavy lifting for creating the
// keyval
static int do_create_keyval(MPI_Win_copy_attr_function* c_copy_fn,
MPI_Win_delete_attr_function* c_delete_fn,
Copy_attr_function* cxx_copy_fn,
Delete_attr_function* cxx_delete_fn,
void* extra_state, int &keyval);
public:
virtual void Delete_attr(int win_keyval);
static void Free_keyval(int& win_keyval);
// version 1: pre-errata Get_attr (not correct, but probably nice to support)
bool Get_attr(const Win& win, int win_keyval,
void* attribute_val) const;
// version 2: post-errata Get_attr (correct, but no one seems to know about it)
bool Get_attr(int win_keyval, void* attribute_val) const;
virtual void Get_name(char* win_name, int& resultlen) const;
virtual void Set_attr(int win_keyval, const void* attribute_val);
virtual void Set_name(const char* win_name);
// Data that is passed through keyval create when C++ callback
// functions are used
struct keyval_intercept_data_t {
MPI_Win_copy_attr_function *c_copy_fn;
MPI_Win_delete_attr_function *c_delete_fn;
Copy_attr_function* cxx_copy_fn;
Delete_attr_function* cxx_delete_fn;
void *extra_state;
};
// Protect the global list from multiple thread access
static opal_mutex_t cxx_extra_states_lock;
protected:
#if 0 /* OMPI_ENABLE_MPI_PROFILING */
PMPI::Win pmpi_win;
#else
MPI_Win mpi_win;
#endif
};
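
The mix-n-match of C and C++ keyval callbacks described above, sketched with a C++ copy function and the predefined C delete function (the window and attribute value are assumed to exist):

// Hypothetical C++ copy callback; refuses to propagate the attribute on dup.
int my_copy(const MPI::Win &oldwin, int keyval, void *extra,
            void *attr_in, void *attr_out, bool &flag)
{
    flag = false;
    return MPI::SUCCESS;
}

int keyval = MPI::Win::Create_keyval(my_copy, MPI_WIN_NULL_DELETE_FN, NULL);
win.Set_attr(keyval, &my_value);  // 'win' and 'my_value' are illustrative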

View file

@ -1,295 +0,0 @@
// -*- c++ -*-
//
// Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
// University Research and Technology
// Corporation. All rights reserved.
// Copyright (c) 2004-2005 The University of Tennessee and The University
// of Tennessee Research Foundation. All rights
// reserved.
// Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
// University of Stuttgart. All rights reserved.
// Copyright (c) 2004-2005 The Regents of the University of California.
// All rights reserved.
// Copyright (c) 2007 Sun Microsystems, Inc. All rights reserved.
// Copyright (c) 2007-2008 Cisco Systems, Inc. All rights reserved.
// $COPYRIGHT$
//
// Additional copyrights may follow
//
// $HEADER$
//
//
// Miscellany
//
inline MPI::Errhandler
MPI::Win::Get_errhandler() const
{
MPI_Errhandler errhandler;
MPI_Win_get_errhandler(mpi_win, &errhandler);
return errhandler;
}
inline void
MPI::Win::Set_errhandler(const MPI::Errhandler& errhandler) const
{
(void)MPI_Win_set_errhandler(mpi_win, errhandler);
}
//
// One sided communication
//
inline void
MPI::Win::Accumulate(const void* origin_addr, int origin_count,
const MPI::Datatype& origin_datatype, int target_rank,
MPI::Aint target_disp, int target_count,
const MPI::Datatype& target_datatype,
const MPI::Op& op) const
{
(void) MPI_Accumulate(const_cast<void *>(origin_addr), origin_count, origin_datatype,
target_rank, target_disp, target_count,
target_datatype, op, mpi_win);
}
inline void
MPI::Win::Complete() const
{
(void) MPI_Win_complete(mpi_win);
}
inline MPI::Win
MPI::Win::Create(const void* base, MPI::Aint size,
int disp_unit, const MPI::Info& info,
const MPI::Intracomm& comm)
{
MPI_Win newwin;
(void) MPI_Win_create(const_cast<void *>(base), size, disp_unit, info, comm, &newwin);
return newwin;
}
inline void
MPI::Win::Fence(int assert) const
{
(void) MPI_Win_fence(assert, mpi_win);
}
inline void
MPI::Win::Get(const void *origin_addr, int origin_count,
const MPI::Datatype& origin_datatype,
int target_rank, MPI::Aint target_disp,
int target_count,
const MPI::Datatype& target_datatype) const
{
(void) MPI_Get(const_cast<void *>(origin_addr), origin_count, origin_datatype,
target_rank, target_disp,
target_count, target_datatype, mpi_win);
}
inline MPI::Group
MPI::Win::Get_group() const
{
MPI_Group mpi_group;
(void) MPI_Win_get_group(mpi_win, &mpi_group);
return mpi_group;
}
inline void
MPI::Win::Lock(int lock_type, int rank, int assert) const
{
(void) MPI_Win_lock(lock_type, rank, assert, mpi_win);
}
inline void
MPI::Win::Post(const MPI::Group& group, int assert) const
{
(void) MPI_Win_post(group, assert, mpi_win);
}
inline void
MPI::Win::Put(const void* origin_addr, int origin_count,
const MPI::Datatype& origin_datatype,
int target_rank, MPI::Aint target_disp,
int target_count,
const MPI::Datatype& target_datatype) const
{
(void) MPI_Put(const_cast<void *>(origin_addr), origin_count, origin_datatype,
target_rank, target_disp, target_count,
target_datatype, mpi_win);
}
inline void
MPI::Win::Start(const MPI::Group& group, int assert) const
{
(void) MPI_Win_start(group, assert, mpi_win);
}
inline bool
MPI::Win::Test() const
{
int flag;
MPI_Win_test(mpi_win, &flag);
return OPAL_INT_TO_BOOL(flag);
}
inline void
MPI::Win::Unlock(int rank) const
{
(void) MPI_Win_unlock(rank, mpi_win);
}
inline void
MPI::Win::Wait() const
{
(void) MPI_Win_wait(mpi_win);
}
//
// External Interfaces
//
inline void
MPI::Win::Call_errhandler(int errorcode) const
{
(void) MPI_Win_call_errhandler(mpi_win, errorcode);
}
// 1) original Create_keyval that takes the first 2 arguments as C++
// functions
inline int
MPI::Win::Create_keyval(MPI::Win::Copy_attr_function* win_copy_attr_fn,
MPI::Win::Delete_attr_function* win_delete_attr_fn,
void* extra_state)
{
// Back-end function does the heavy lifting
int ret, keyval;
ret = do_create_keyval(NULL, NULL,
win_copy_attr_fn, win_delete_attr_fn,
extra_state, keyval);
return (MPI_SUCCESS == ret) ? keyval : ret;
}
// 2) overload Create_keyval to take the first 2 arguments as C
// functions
inline int
MPI::Win::Create_keyval(MPI_Win_copy_attr_function* win_copy_attr_fn,
MPI_Win_delete_attr_function* win_delete_attr_fn,
void* extra_state)
{
// Back-end function does the heavy lifting
int ret, keyval;
ret = do_create_keyval(win_copy_attr_fn, win_delete_attr_fn,
NULL, NULL,
extra_state, keyval);
return (MPI_SUCCESS == ret) ? keyval : ret;
}
// 3) overload Create_keyval to take the first 2 arguments as C++ & C
// functions
inline int
MPI::Win::Create_keyval(MPI::Win::Copy_attr_function* win_copy_attr_fn,
MPI_Win_delete_attr_function* win_delete_attr_fn,
void* extra_state)
{
// Back-end function does the heavy lifting
int ret, keyval;
ret = do_create_keyval(NULL, win_delete_attr_fn,
win_copy_attr_fn, NULL,
extra_state, keyval);
return (MPI_SUCCESS == ret) ? keyval : ret;
}
// 4) overload Create_keyval to take the first 2 arguments as C & C++
// functions
inline int
MPI::Win::Create_keyval(MPI_Win_copy_attr_function* win_copy_attr_fn,
MPI::Win::Delete_attr_function* win_delete_attr_fn,
void* extra_state)
{
// Back-end function does the heavy lifting
int ret, keyval;
ret = do_create_keyval(win_copy_attr_fn, NULL,
NULL, win_delete_attr_fn,
extra_state, keyval);
return (MPI_SUCCESS == ret) ? keyval : ret;
}
inline void
MPI::Win::Delete_attr(int win_keyval)
{
(void) MPI_Win_delete_attr(mpi_win, win_keyval);
}
inline void
MPI::Win::Free_keyval(int& win_keyval)
{
(void) MPI_Win_free_keyval(&win_keyval);
}
// version 1: pre-errata Get_attr (not correct, but probably nice to support)
inline bool
MPI::Win::Get_attr(const Win& win, int win_keyval,
void* attribute_val) const
{
int ret;
(void) MPI_Win_get_attr(win, win_keyval, attribute_val, &ret);
return OPAL_INT_TO_BOOL(ret);
}
// version 2: post-errata Get_attr (correct, but no one seems to know about it)
inline bool
MPI::Win::Get_attr(int win_keyval, void* attribute_val) const
{
int ret;
(void) MPI_Win_get_attr(mpi_win, win_keyval, attribute_val, &ret);
return OPAL_INT_TO_BOOL(ret);
}
inline void
MPI::Win::Get_name(char* win_name, int& resultlen) const
{
(void) MPI_Win_get_name(mpi_win, win_name, &resultlen);
}
inline void
MPI::Win::Set_attr(int win_keyval, const void* attribute_val)
{
(void) MPI_Win_set_attr(mpi_win, win_keyval, const_cast<void *>(attribute_val));
}
inline void
MPI::Win::Set_name(const char* win_name)
{
(void) MPI_Win_set_name(mpi_win, const_cast<char *>(win_name));
}

View file

@ -2,6 +2,7 @@
.\" Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Abort 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -32,12 +33,6 @@ MPI_Abort(\fIcomm\fP, \fIerrorcode\fP, \fIierror\fP)
INTEGER, INTENT(IN) :: \fIerrorcode\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void Comm::Abort(int \fIerrorcode\fP)
.fi
.SH INPUT PARAMETERS
.ft R
@ -67,7 +62,7 @@ The long-term goal of the Open MPI implementation is to terminate all processes
Note: All associated processes are sent a SIGTERM.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler

View file

@ -3,6 +3,7 @@
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Accumulate 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -69,15 +70,6 @@ MPI_Raccumulate(\fIorigin_addr\fP, \fIorigin_count\fP, \fIorigin_datatype\fP, \f
TYPE(MPI_Request), INTENT(OUT) :: \fIrequest\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void MPI::Win::Accumulate(const void* \fIorigin_addr\fP, int \fIorigin_count\fP,
const MPI::Datatype& \fIorigin_datatype\fP, int \fItarget_rank\fP,
MPI::Aint \fItarget_disp\fP, int \fItarget_count\fP, const MPI::Datatype&
\fItarget_datatype\fP, const MPI::Op& \fIop\fP) const
.fi
.SH INPUT PARAMETERS
.ft R
@ -159,7 +151,7 @@ that accesses to the window are properly aligned according to the data
type arguments in the call to the \fBMPI_Accumulate\fP function.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler

Просмотреть файл

@ -2,6 +2,7 @@
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Add_error_class 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
@ -34,12 +35,6 @@ MPI_Add_error_class(\fIerrorclass\fP, \fIierror\fP)
INTEGER, INTENT(OUT) :: \fIerrorclass\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
int MPI::Add_error_class()
.fi
.SH OUTPUT PARAMETERS
.ft R
@ -77,10 +72,7 @@ The value returned is always greater than or equal to MPI_ERR_LASTCODE.
.SH ERRORS
.ft R
Almost all MPI routines return an error value; C routines as
the value of the function and Fortran routines in the last argument. C++
functions do not return errors. If the default error handler is set to
MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism
will be used to throw an MPI::Exception object.
the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for

Просмотреть файл

@ -2,6 +2,7 @@
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Add_error_code 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
@ -34,12 +35,6 @@ MPI_Add_error_code(\fIerrorclass\fP, \fIerrorcode\fP, \fIierror\fP)
INTEGER, INTENT(OUT) :: \fIerrorcode\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
int MPI::Add_error_code(int \fIerrorclass\fP, int* \fIerrorcode\fP)
.fi
.SH INPUT PARAMETER
.ft R
@ -71,10 +66,7 @@ The value returned is always greater than or equal to MPI_ERR_LASTCODE.
.SH ERRORS
.ft R
Almost all MPI routines return an error value; C routines as
the value of the function and Fortran routines in the last argument. C++
functions do not return errors. If the default error handler is set to
MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism
will be used to throw an MPI::Exception object.
the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for

Просмотреть файл

@ -3,6 +3,7 @@
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Add_error_string 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
@ -37,12 +38,6 @@ MPI_Add_error_string(\fIerrorcode\fP, \fIstring\fP, \fIierror\fP)
CHARACTER(LEN=*), INTENT(IN) :: \fIstring\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void MPI::Add_error_string(int \fIerrorcode\fP, const char* \fIstring\fP)
.fi
.SH INPUT PARAMETERS
.ft R
greater than MPI_ERR_LASTCODE).
.SH ERRORS
.ft R
Almost all MPI routines return an error value; C routines as
the value of the function and Fortran routines in the last argument. C++
functions do not return errors. If the default error handler is set to
MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism
will be used to throw an MPI::Exception object.
the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for

Просмотреть файл

@ -2,6 +2,7 @@
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Address 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -44,8 +45,6 @@ Fortran only: Error status (integer).
.ft R
Note that use of this routine is \fIdeprecated\fP as of MPI-2. Please use MPI_Get_address instead.
.sp
This deprecated routine is not available in C++.
.sp
The address of a location in memory can be found by invoking this function; it returns the (byte) address of that location.
.sp
Example: Using MPI_Address for an array.
@ -82,7 +81,7 @@ MPI_Address to "reference" C variables guarantees portability to
such machines as well.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler

Просмотреть файл

@ -3,6 +3,7 @@
.\" Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Allgather 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -61,14 +62,6 @@ MPI_Iallgather(\fIsendbuf\fP, \fIsendcount\fP, \fIsendtype\fP, \fIrecvbuf\fP, \f
TYPE(MPI_Request), INTENT(OUT) :: \fIrequest\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void MPI::Comm::Allgather(const void* \fIsendbuf\fP, int \fIsendcount\fP, const
MPI::Datatype& \fIsendtype\fP, void* \fIrecvbuf\fP, int \fIrecvcount\fP,
const MPI::Datatype& \fIrecvtype\fP) const = 0
.fi
.SH INPUT PARAMETERS
.ft R
@ -157,7 +150,7 @@ When the communicator is an inter-communicator, the gather operation occurs in t
.sp
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler

Просмотреть файл

@ -3,6 +3,7 @@
.\" Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2007-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Allgatherv 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -63,15 +64,6 @@ MPI_Iallgatherv(\fIsendbuf\fP, \fIsendcount\fP, \fIsendtype\fP, \fIrecvbuf\fP, \
TYPE(MPI_Request), INTENT(OUT) :: \fIrequest\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void MPI::Comm::Allgatherv(const void* \fIsendbuf\fP, int \fIsendcount\fP,
const MPI::Datatype& \fIsendtype\fP, void* \fIrecvbuf\fP,
const int \fIrecvcounts\fP[], const int \fIdispls\fP[],
const MPI::Datatype& \fIrecvtype\fP) const = 0
.fi
.SH INPUT PARAMETERS
.ft R
@ -145,7 +137,7 @@ When the communicator is an inter-communicator, the gather operation occurs in t
.sp
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler
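A companion C sketch for the variable-count case, assuming rank r contributes r+1 ints; recvcounts and displs describe each peer's contribution and where it lands in the result:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int mycount = rank + 1;                    /* uneven contributions */
        int *mine = malloc(mycount * sizeof(int));
        for (int i = 0; i < mycount; ++i)
            mine[i] = rank;

        int *recvcounts = malloc(size * sizeof(int));
        int *displs = malloc(size * sizeof(int));
        int total = 0;
        for (int i = 0; i < size; ++i) {
            recvcounts[i] = i + 1;                 /* what rank i sends */
            displs[i] = total;                     /* where it lands */
            total += recvcounts[i];
        }
        int *table = malloc(total * sizeof(int));
        MPI_Allgatherv(mine, mycount, MPI_INT,
                       table, recvcounts, displs, MPI_INT, MPI_COMM_WORLD);
        /* table is now: 0, 1, 1, 2, 2, 2, ... on every rank */
        free(mine); free(recvcounts); free(displs); free(table);
        MPI_Finalize();
        return 0;
    }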


@ -2,6 +2,7 @@
.\" Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Alloc_mem 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -34,12 +35,6 @@ MPI_Alloc_mem(\fIsize\fP, \fIinfo\fP, \fIbaseptr\fP, \fIierror\fP)
TYPE(C_PTR), INTENT(OUT) :: \fIbaseptr\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void* MPI::Alloc_mem(MPI::Aint \fIsize\fP, const MPI::Info& \fIinfo\fP)
.fi
.SH INPUT PARAMETERS
.ft R
@ -101,7 +96,7 @@ For example,
.ft R
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler
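A minimal C sketch: request memory that the library may be able to register for faster communication, use it like ordinary memory, and release it with MPI_Free_mem (the size is arbitrary):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        double *buf;
        MPI_Init(&argc, &argv);
        MPI_Alloc_mem((MPI_Aint)(1000 * sizeof(double)), MPI_INFO_NULL, &buf);
        buf[0] = 3.14;                 /* ordinary load/store access */
        MPI_Free_mem(buf);             /* must not be freed with free() */
        MPI_Finalize();
        return 0;
    }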


@ -3,6 +3,7 @@
.\" Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2007-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Allreduce 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -57,14 +58,6 @@ MPI_Iallreduce(\fIsendbuf\fP, \fIrecvbuf\fP, \fIcount\fP, \fIdatatype\fP, \fIop\
TYPE(MPI_Request), INTENT(OUT) :: \fIrequest\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void MPI::Comm::Allreduce(const void* \fIsendbuf\fP, void* \fIrecvbuf\fP,
int \fIcount\fP, const MPI::Datatype& \fIdatatype\fP, const
MPI::Op& \fIop\fP) const=0
.fi
.SH INPUT PARAMETERS
.ft R
@ -174,7 +167,7 @@ to something else, for example,
then no error may be indicated.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler
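For illustration, a minimal C sketch: every process contributes its rank and every process receives the global sum:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, sum;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        printf("rank %d: sum of all ranks = %d\n", rank, sum);
        MPI_Finalize();
        return 0;
    }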


@ -3,6 +3,7 @@
.\" Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Alltoall 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
@ -67,14 +68,6 @@ MPI_Ialltoall(\fIsendbuf\fP, \fIsendcount\fP, \fIsendtype\fP, \fIrecvbuf\fP, \fI
TYPE(MPI_Request), INTENT(OUT) :: \fIrequest\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void MPI::Comm::Alltoall(const void* \fIsendbuf\fP, int \fIsendcount\fP,
const MPI::Datatype& \fIsendtype\fP, void* \fIrecvbuf\fP,
int \fIrecvcount\fP, const MPI::Datatype& \fIrecvtype\fP)
.fi
.SH INPUT PARAMETERS
.ft R
@ -159,10 +152,7 @@ different datatypes.
.SH ERRORS
.ft R
Almost all MPI routines return an error value; C routines as
the value of the function and Fortran routines in the last argument. C++
functions do not return errors. If the default error handler is set to
MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism
will be used to throw an MPI::Exception object.
the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for
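A minimal C sketch of the all-to-all exchange: element j of each rank's send buffer goes to rank j, and element i of the receive buffer arrives from rank i (the encoded values are arbitrary):

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *sendbuf = malloc(size * sizeof(int));
        int *recvbuf = malloc(size * sizeof(int));
        for (int j = 0; j < size; ++j)
            sendbuf[j] = rank * 1000 + j;          /* destined for rank j */
        MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);
        /* now recvbuf[i] == i * 1000 + rank: one element from each peer */
        free(sendbuf); free(recvbuf);
        MPI_Finalize();
        return 0;
    }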


@ -3,6 +3,7 @@
.\" Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Alltoallv 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
@ -73,16 +74,6 @@ MPI_Ialltoallv(\fIsendbuf\fP, \fIsendcounts\fP, \fIsdispls\fP, \fIsendtype\fP, \
TYPE(MPI_Request), INTENT(OUT) :: \fIrequest\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void MPI::Comm::Alltoallv(const void* \fIsendbuf\fP,
const int \fIsendcounts\fP[], const int \fIdispls\fP[],
const MPI::Datatype& \fIsendtype\fP, void* \fIrecvbuf\fP,
const int \fIrecvcounts\fP[], const int \fIrdispls\fP[],
const MPI::Datatype& \fIrecvtype\fP)
.fi
.SH INPUT PARAMETERS
.ft R
@ -192,10 +183,7 @@ MPI_Alltoallw, where these offsets are measured in bytes.
.SH ERRORS
.ft R
Almost all MPI routines return an error value; C routines as
the value of the function and Fortran routines in the last argument. C++
functions do not return errors. If the default error handler is set to
MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism
will be used to throw an MPI::Exception object.
the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for
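A sketch of the vector variant: the counts are all one here, but sdispls expresses a strided layout in the send buffer while rdispls packs the received elements, which is exactly the flexibility the displacement arrays add:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *sendbuf = malloc(2 * size * sizeof(int));
        int *recvbuf = malloc(size * sizeof(int));
        int *counts  = malloc(size * sizeof(int));
        int *sdispls = malloc(size * sizeof(int));
        int *rdispls = malloc(size * sizeof(int));
        for (int j = 0; j < size; ++j) {
            sendbuf[2 * j] = rank * 1000 + j;      /* element for rank j */
            counts[j]  = 1;
            sdispls[j] = 2 * j;                    /* every other slot */
            rdispls[j] = j;                        /* packed on receipt */
        }
        MPI_Alltoallv(sendbuf, counts, sdispls, MPI_INT,
                      recvbuf, counts, rdispls, MPI_INT, MPI_COMM_WORLD);
        free(sendbuf); free(recvbuf); free(counts); free(sdispls); free(rdispls);
        MPI_Finalize();
        return 0;
    }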


@ -3,6 +3,7 @@
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Alltoallw 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
@ -76,16 +77,6 @@ MPI_Ialltoallw(\fIsendbuf\fP, \fIsendcounts\fP, \fIsdispls\fP, \fIsendtypes\fP,
TYPE(MPI_Request), INTENT(OUT) :: \fIrequest\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void MPI::Comm::Alltoallw(const void* \fIsendbuf\fP,
const int \fIsendcounts\fP[], const int \fIsdispls\fP[],
const MPI::Datatype \fIsendtypes\fP[], void* \fIrecvbuf\fP,
const int \fIrecvcounts\fP[], const int \fIrdispls\fP[],
const MPI::Datatype \fIrecvtypes\fP[])
.fi
.SH INPUT PARAMETERS
.ft R
@ -196,10 +187,7 @@ of \fIsendtype\fP and \fIrecvtype\fP, respectively.
.SH ERRORS
.ft R
Almost all MPI routines return an error value; C routines as
the value of the function and Fortran routines in the last argument. C++
functions do not return errors. If the default error handler is set to
MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism
will be used to throw an MPI::Exception object.
the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for


@ -2,6 +2,7 @@
.\" Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Attr_delete 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -41,7 +42,7 @@ Fortran only: Error status (integer).
.SH DESCRIPTION
Note that use of this routine is \fIdeprecated\fP as of MPI-2, and
was \fIdeleted\fP in MPI-3. Please use MPI_Comm_delete_attr. This
function does not have a C++ or mpi_f08 binding.
function does not have a mpi_f08 binding.
.sp
Delete attribute from cache by key. This function invokes the attribute delete function delete_fn specified when the keyval was created. The call will fail if the delete_fn function returns an error code other than MPI_SUCCESS.
@ -57,7 +58,7 @@ is being invoked.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler


@ -2,6 +2,7 @@
.\" Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Attr_get 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -51,13 +52,13 @@ Fortran only: Error status (integer).
.ft R
Note that use of this routine is \fIdeprecated\fP as of MPI-2, and
was \fIdeleted\fP in MPI-3. Please use MPI_Comm_get_attr. This
function does not have a C++ or mpi_f08 binding.
function does not have a mpi_f08 binding.
.sp
Retrieves attribute value by key. The call is erroneous if there is no key
with value keyval. On the other hand, the call is correct if the key value exists, but no attribute is attached on comm for that key; in such a case, the call returns flag = false. In particular, MPI_KEYVAL_INVALID is an erroneous key value.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler


@ -2,6 +2,7 @@
.\" Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Attr_put 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -45,7 +46,7 @@ Fortran only: Error status (integer).
.ft R
Note that use of this routine is \fIdeprecated\fP as of MPI-2, and
was \fIdeleted\fP in MPI-3. Please use MPI_Comm_set_attr. This
function does not have a C++ or mpi_f08 binding.
function does not have a mpi_f08 binding.
.sp
MPI_Attr_put stores the stipulated attribute value attribute_val for subsequent retrieval by MPI_Attr_get. If the value is already present, then the outcome is as if MPI_Attr_delete was first called to delete the previous value (and the callback function delete_fn was executed), and a new value was next stored. The call is erroneous if there is no key with value keyval; in particular MPI_KEYVAL_INVALID is an erroneous key value. The call will fail if the delete_fn function returned an error code other than MPI_SUCCESS.
@ -59,7 +60,7 @@ The type of the attribute value depends on whether C or Fortran is being used. I
If an attribute is already present, the delete function (specified when the corresponding keyval was created) will be called.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler


@ -2,6 +2,7 @@
.\" Copyright (c) 2014-2015 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Barrier 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -40,12 +41,6 @@ MPI_Ibarrier(\fIcomm\fP, \fIrequest\fP, \fIierror\fP)
TYPE(MPI_Request), INTENT(OUT) :: \fIrequest\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void MPI::Comm::Barrier() const = 0
.fi
.SH INPUT PARAMETER
.ft R
@ -72,7 +67,7 @@ barrier.
When the communicator is an inter-communicator, the barrier operation is performed across all processes in both groups. All processes in the first group may exit the barrier when all processes in the second group have entered the barrier.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
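The C usage is a single call; a minimal sketch:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("rank %d before the barrier\n", rank);
        MPI_Barrier(MPI_COMM_WORLD);   /* nobody proceeds until all arrive */
        printf("rank %d after the barrier\n", rank);
        MPI_Finalize();
        return 0;
    }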


@ -2,6 +2,7 @@
.\" Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Bcast 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -50,13 +51,6 @@ MPI_Ibcast(\fIbuffer\fP, \fIcount\fP, \fIdatatype\fP, \fIroot\fP, \fIcomm\fP, \f
TYPE(MPI_Request), INTENT(OUT) :: \fIrequest\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void MPI::Comm::Bcast(void* \fIbuffer\fP, int \fIcount\fP,
const MPI::Datatype& \fIdatatype\fP, int \fIroot\fP) const = 0
.fi
.SH INPUT/OUTPUT PARAMETERS
.ft R
@ -113,7 +107,7 @@ This function does not support the in-place option.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
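A minimal C sketch: only the root holds the value before the call, and every process holds it afterwards (root 0 and the value 42 are arbitrary):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            value = 42;                      /* only the root has it... */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
        printf("rank %d sees %d\n", rank, value);  /* ...now everyone does */
        MPI_Finalize();
        return 0;
    }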


@ -3,6 +3,7 @@
.\" Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Bsend 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -36,13 +37,6 @@ MPI_Bsend(\fIbuf\fP, \fIcount\fP, \fIdatatype\fP, \fIdest\fP, \fItag\fP, \fIcomm
TYPE(MPI_Comm), INTENT(IN) :: \fIcomm\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void Comm::Bsend(const void* \fIbuf\fP, int \fIcount\fP, const
Datatype& \fIdatatype\fP, int \fIdest\fP, int \fItag\fP) const
.fi
.SH INPUT PARAMETERS
.ft R
@ -109,7 +103,7 @@ delivered.)
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
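A minimal C sketch of a buffered send, assuming the job is launched with at least two processes; MPI_Bsend requires a user buffer attached in advance, sized for the message plus MPI_BSEND_OVERHEAD:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, msg;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* One int message plus the per-message bookkeeping overhead. */
        int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
        void *buf = malloc(bufsize);
        MPI_Buffer_attach(buf, bufsize);

        if (rank == 0) {
            msg = 7;
            MPI_Bsend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD); /* returns at once */
        } else if (rank == 1) {
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        MPI_Buffer_detach(&buf, &bufsize);   /* blocks until buffered data is sent */
        free(buf);
        MPI_Finalize();
        return 0;
    }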


@ -3,6 +3,7 @@
.\" Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Bsend_init 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -39,13 +40,6 @@ MPI_Bsend_init(\fIbuf\fP, \fIcount\fP, \fIdatatype\fP, \fIdest\fP, \fItag\fP, \f
TYPE(MPI_Request), INTENT(OUT) :: \fIrequest\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
Prequest Comm::Bsend_init(const void* \fIbuf\fP, int \fIcount\fP, const
Datatype& \fIdatatype\fP, int \fIdest\fP, int \fItag\fP) const
.fi
.SH INPUT PARAMETERS
.ft R
@ -85,7 +79,7 @@ Creates a persistent communication request for a buffered mode send, and binds t
A communication (send or receive) that uses a persistent request is initiated by the function MPI_Start.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
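A sketch of the persistent pattern just described, assuming at least two processes: the argument list is bound once with MPI_Bsend_init and then restarted with MPI_Start; the attached buffer is sized generously for all three sends:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, msg = 0;
        MPI_Request req;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int bufsize = 3 * (sizeof(int) + MPI_BSEND_OVERHEAD);
        void *buf = malloc(bufsize);
        MPI_Buffer_attach(buf, bufsize);

        if (rank == 0) {
            /* Bind the argument list once... */
            MPI_Bsend_init(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            for (msg = 0; msg < 3; ++msg) {        /* ...reuse it many times */
                MPI_Start(&req);
                MPI_Wait(&req, MPI_STATUS_IGNORE);
            }
            MPI_Request_free(&req);
        } else if (rank == 1) {
            for (int i = 0; i < 3; ++i)
                MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
        }
        MPI_Buffer_detach(&buf, &bufsize);
        free(buf);
        MPI_Finalize();
        return 0;
    }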


@ -2,6 +2,7 @@
.\" Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Buffer_attach 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -32,12 +33,6 @@ MPI_Buffer_attach(\fIbuffer\fP, \fIsize\fP, \fIierror\fP)
INTEGER, INTENT(IN) :: \fIsize\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void Attach_buffer(void* \fIbuffer\fP, int \fIsize\fP)
.fi
.SH INPUT PARAMETERS
.ft R
@ -80,7 +75,7 @@ the value of size in the MPI_Buffer_attach call should be greater than the value
MPI_BSEND_OVERHEAD gives the maximum amount of buffer space that may be used by the Bsend routines. This value is in mpi.h for C and mpif.h for Fortran.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.


@ -2,6 +2,7 @@
.\" Copyright 2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Buffer_detach 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -33,12 +34,6 @@ MPI_Buffer_detach(\fIbuffer_addr\fP, \fIsize\fP, \fIierror\fP)
INTEGER, INTENT(OUT) :: \fIsize\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
int Detach_buffer(void*& \fIbuffer\fP)
.fi
.SH OUTPUT PARAMETERS
.ft R
@ -97,7 +92,7 @@ Even though the C functions MPI_Buffer_attach and
MPI_Buffer_detach both have a first argument of type void*, these arguments are used differently: A pointer to the buffer is passed to MPI_Buffer_attach; the address of the pointer is passed to MPI_Buffer_detach, so that this call can return the pointer value.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
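A minimal C sketch of the asymmetry described above: the attach call takes the pointer itself, while the detach call takes the address of the pointer so the pointer value can be returned:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int size = 10000 + MPI_BSEND_OVERHEAD;
        char *buf = malloc(size);
        MPI_Init(&argc, &argv);
        MPI_Buffer_attach(buf, size);      /* pass the pointer */
        /* ... buffered sends ... */
        MPI_Buffer_detach(&buf, &size);    /* pass the ADDRESS of the pointer */
        free(buf);
        MPI_Finalize();
        return 0;
    }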


@ -2,6 +2,7 @@
.\" Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Cancel 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -30,12 +31,6 @@ MPI_Cancel(\fIrequest\fP, \fIierror\fP)
TYPE(MPI_Request), INTENT(IN) :: \fIrequest\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void Request::Cancel() const
.fi
.SH INPUT PARAMETER
.ft R
@ -73,7 +68,7 @@ computation completes, some of these requests may remain;
using MPI_Cancel allows the user to cancel these unsatisfied requests.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
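A minimal C sketch of the cleanup pattern described above: a receive that will never be matched is withdrawn with MPI_Cancel, and MPI_Test_cancelled confirms the outcome (tag 99 is arbitrary):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Request req;
        MPI_Status status;
        int buf, cancelled;
        MPI_Init(&argc, &argv);
        /* Post a receive that no one will ever match... */
        MPI_Irecv(&buf, 1, MPI_INT, MPI_ANY_SOURCE, 99, MPI_COMM_WORLD, &req);
        /* ...then withdraw it instead of leaking the request. */
        MPI_Cancel(&req);
        MPI_Wait(&req, &status);
        MPI_Test_cancelled(&status, &cancelled);
        printf("cancelled: %s\n", cancelled ? "yes" : "no");
        MPI_Finalize();
        return 0;
    }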


@ -2,6 +2,7 @@
.\" Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Cart_coords 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -33,13 +34,6 @@ MPI_Cart_coords(\fIcomm\fP, \fIrank\fP, \fImaxdims\fP, \fIcoords\fP, \fIierror\f
INTEGER, INTENT(OUT) :: \fIcoords(maxdims)\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void Cartcomm::Get_coords(int \fIrank\fP, int \fImaxdims\fP,
int \fIcoords\fP[]) const
.fi
.SH INPUT PARAMETERS
.ft R
@ -68,7 +62,7 @@ Fortran only: Error status (integer).
MPI_Cart_coords provides a mapping of ranks to Cartesian coordinates.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.


@ -3,6 +3,7 @@
.\" Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Cart_create 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -37,13 +38,6 @@ MPI_Cart_create(\fIcomm_old\fP, \fIndims\fP, \fIdims\fP, \fIperiods\fP, \fIreord
TYPE(MPI_Comm), INTENT(OUT) :: \fIcomm_cart\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
Cartcomm Intracomm.Create_cart(int \fIndims\fP, int[] \fIdims\fP[],
const bool \fIperiods\fP[], bool \fIreorder\fP) const
.fi
.SH INPUT PARAMETERS
.ft R
@ -80,7 +74,7 @@ Fortran only: Error status (integer).
MPI_Cart_create returns a handle to a new communicator to which the Cartesian topology information is attached. If reorder = false then the rank of each process in the new group is identical to its rank in the old group. Otherwise, the function may reorder the processes (possibly so as to choose a good embedding of the virtual topology onto the physical machine). If the total size of the Cartesian grid is smaller than the size of the group of comm, then some processes are returned MPI_COMM_NULL, in analogy to MPI_Comm_split. The call is erroneous if it specifies a grid that is larger than the group size.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
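A C sketch: let MPI_Dims_create pick a balanced 2-D grid for whatever process count is available, make dimension 0 periodic, and allow reordering (all of these choices are arbitrary):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Comm cart;
        int dims[2] = {0, 0}, periods[2] = {1, 0}, coords[2];
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Dims_create(size, 2, dims);           /* balanced 2-D factorization */
        MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1 /* reorder */, &cart);
        if (cart != MPI_COMM_NULL) {
            MPI_Comm_rank(cart, &rank);
            MPI_Cart_coords(cart, rank, 2, coords);
            printf("rank %d at (%d,%d)\n", rank, coords[0], coords[1]);
            MPI_Comm_free(&cart);
        }
        MPI_Finalize();
        return 0;
    }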


@ -2,6 +2,7 @@
.\" Copyright 2014 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Cart_get 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -35,13 +36,6 @@ MPI_Cart_get(\fIcomm\fP, \fImaxdims\fP, \fIdims\fP, \fIperiods\fP, \fIcoords\fP,
LOGICAL, INTENT(OUT) :: \fIperiods(maxdims)\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void Cartcomm::Get_topo(int \fImaxdims\fP, int \fIdims\fP[],
bool \fIperiods\fP[], int \fIcoords\fP[]) const
.fi
.SH INPUT PARAMETERS
.ft R
@ -73,7 +67,7 @@ Fortran only: Error status (integer).
The functions MPI_Cartdim_get and MPI_Cart_get return the Cartesian topology information that was associated with a communicator by MPI_Cart_create.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.


@ -3,6 +3,7 @@
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Cart_map 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -36,13 +37,6 @@ MPI_Cart_map(\fIcomm\fP, \fIndims\fP, \fIdims\fP, \fIperiods\fP, \fInewrank\fP,
INTEGER, INTENT(OUT) :: \fInewrank\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
int Cartcomm::Map(int \fIndims\fP, const int \fIdims\fP[],
const bool \fIperiods\fP[]) const
.fi
.SH INPUT PARAMETERS
.ft R
@ -77,7 +71,7 @@ MPI_Cart_map and MPI_Graph_map can be used to implement all other topology funct
MPI_Cart_map computes an "optimal" placement for the calling process on the physical machine. A possible implementation of this function is to always return the rank of the calling process, that is, not to perform any reordering.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.


@ -2,6 +2,7 @@
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Cart_rank 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -32,12 +33,6 @@ MPI_Cart_rank(\fIcomm\fP, \fIcoords\fP, \fIrank\fP, \fIierror\fP)
INTEGER, INTENT(OUT) :: \fIrank\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
int Cartcomm::Get_cart_rank(const int \fIcoords\fP[]) const
.fi
.SH INPUT PARAMETERS
.ft R
@ -64,7 +59,7 @@ For a process group with Cartesian structure, the function MPI_Cart_rank
translates the logical process coordinates to process ranks as they are used by the point-to-point routines. For dimension i with periods(i) = true, if the coordinate, coords(i), is out of range, that is, coords(i) < 0 or coords(i) >= dims(i), it is shifted back to the interval 0 <= coords(i) < dims(i) automatically. Out-of-range coordinates are erroneous for nonperiodic dimensions.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.


@ -2,6 +2,7 @@
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Cart_shift 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -35,13 +36,6 @@ MPI_Cart_shift(\fIcomm\fP, \fIdirection\fP, \fIdisp\fP, \fIrank_source\fP, \fIra
INTEGER, INTENT(OUT) :: \fIrank_source\fP, \fIrank_dest\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void Cartcomm::Shift(int \fIdirection\fP, int \fIdisp\fP, int& \fIrank_source\fP,
int& \fIrank_dest\fP) const
.fi
.SH INPUT PARAMETERS
.ft R
@ -100,7 +94,7 @@ Depending on the periodicity of the Cartesian group in the specified coordinate
In Fortran, the dimension indicated by DIRECTION = i has DIMS(i+1) nodes, where DIMS is the array that was used to create the grid. In C, the dimension indicated by direction = i is the dimension specified by dims[i].
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
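A C sketch of the usual shift-then-communicate pattern on a periodic 1-D ring (the displacement of +1 is arbitrary):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm cart;
        int dims[1] = {0}, periods[1] = {1};      /* periodic ring */
        int size, rank, src, dest, token;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Dims_create(size, 1, dims);
        MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &cart);
        MPI_Comm_rank(cart, &rank);

        /* Who do I talk to for a shift of +1 along dimension 0? */
        MPI_Cart_shift(cart, 0, 1, &src, &dest);
        MPI_Sendrecv(&rank, 1, MPI_INT, dest, 0,
                     &token, 1, MPI_INT, src, 0, cart, MPI_STATUS_IGNORE);
        /* token now holds the rank of the neighbor on the other side */
        MPI_Comm_free(&cart);
        MPI_Finalize();
        return 0;
    }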


@ -3,6 +3,7 @@
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Cart_sub 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -34,12 +35,6 @@ MPI_Cart_sub(\fIcomm\fP, \fIremain_dims\fP, \fInewcomm\fP, \fIierror\fP)
TYPE(MPI_Comm), INTENT(OUT) :: \fInewcomm\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
Cartcomm Cartcomm::Sub(const bool \fIremain_dims\fP[]) const
.fi
.SH INPUT PARAMETERS
.ft R
@ -73,7 +68,7 @@ If a Cartesian topology has been created with MPI_Cart_create, the function MPI
will create three communicators, each with eight processes in a 2 x 4 Cartesian topology. If remain_dims = (false, false, true) then the call to MPI_Cart_sub(comm, remain_dims, comm_new) will create six nonoverlapping communicators, each with four processes, in a one-dimensional Cartesian topology.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
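A C sketch mirroring the 2 x 3 x 4 example above: remain_dims keeps the first and third dimensions, so each process lands in one of three 2 x 4 subgrid communicators (the guard assumes the job provides at least the 24 processes the grid needs):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm grid, plane;
        int dims[3] = {2, 3, 4}, periods[3] = {0, 0, 0};
        int remain_dims[3] = {1, 0, 1};   /* keep dims 0 and 2: 2x4 subgrids */
        int size;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size >= 24) {                 /* needs a full 2x3x4 grid */
            MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 0, &grid);
            if (grid != MPI_COMM_NULL) {
                MPI_Cart_sub(grid, remain_dims, &plane);
                /* 'plane' is one of three 2x4 Cartesian communicators */
                MPI_Comm_free(&plane);
                MPI_Comm_free(&grid);
            }
        }
        MPI_Finalize();
        return 0;
    }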


@ -2,6 +2,7 @@
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Cartdim_get 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -31,12 +32,6 @@ MPI_Cartdim_get(\fIcomm\fP, \fIndims\fP, \fIierror\fP)
INTEGER, INTENT(OUT) :: \fIndims\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
int Cartcomm::Get_dim() const
.fi
.SH INPUT PARAMETER
.ft R
@ -59,7 +54,7 @@ Fortran only: Error status (integer).
MPI_Cartdim_get returns the number of dimensions of the Cartesian structure.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.


@ -3,6 +3,7 @@
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Close_port 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -32,12 +33,6 @@ MPI_Close_port(\fIport_name\fP, \fIierror\fP)
CHARACTER(LEN=*), INTENT(IN) :: \fIport_name\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void MPI::Close_port(const char* \fIport_name\fP)
.fi
.SH INPUT PARAMETER
.ft R
@ -56,7 +51,7 @@ Fortran only: Error status (integer).
MPI_Close_port releases the network address represented by \fIport_name\fP.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.


@ -3,6 +3,7 @@
.\" Copyright 2009-2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2007, Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Comm_accept 3OpenMPI "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -36,13 +37,6 @@ MPI_Comm_accept(\fIport_name\fP, \fIinfo\fP, \fIroot\fP, \fIcomm\fP, \fInewcomm\
TYPE(MPI_Comm), INTENT(OUT) :: \fInewcomm\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
MPI::Intercomm MPI::Intracomm::Accept(const char* \fIport_name\fP,
const MPI::Info& \fIinfo\fP, int \fIroot\fP) const
.fi
.SH INPUT PARAMETERS
.ft R
@ -76,7 +70,7 @@ The \fIport_name\fP must have been established through a call to MPI_Open_port o
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned.
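Because accept/connect is a rendezvous between separately started jobs, a sketch necessarily spans two programs. A hypothetical server side (how the port string reaches the client, e.g. a file or a name server, is out of band and not shown):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char port[MPI_MAX_PORT_NAME];
        MPI_Comm client;
        MPI_Init(&argc, &argv);
        MPI_Open_port(MPI_INFO_NULL, port);
        printf("server port: %s\n", port);  /* relay this string to the client */
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);
        /* ... talk to the client over the new intercommunicator ... */
        MPI_Comm_disconnect(&client);
        MPI_Close_port(port);
        MPI_Finalize();
        return 0;
    }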


@ -2,6 +2,7 @@
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Comm_call_errhandler 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -34,12 +35,6 @@ MPI_Comm_call_errhandler(\fIcomm\fP, \fIerrorcode\fP, \fIierror\fP)
INTEGER, INTENT(IN) :: \fIerrorcode\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void MPI::Comm::Call_errhandler(int \fIerrorcode\fP) const
.fi
.SH INPUT PARAMETER
.ft R
@ -74,10 +69,7 @@ changed.
.SH ERRORS
.ft R
Almost all MPI routines return an error value; C routines as
the value of the function and Fortran routines in the last argument. C++
functions do not return errors. If the default error handler is set to
MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism
will be used to throw an MPI::Exception object.
the value of the function and Fortran routines in the last argument.
.sp
See the MPI man page for a full list of MPI error codes.


@ -2,6 +2,7 @@
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Comm_compare 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -31,12 +32,6 @@ MPI_Comm_compare(\fIcomm1\fP, \fIcomm2\fP, \fIresult\fP, \fIierror\fP)
INTEGER, INTENT(OUT) :: \fIresult\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
static int Comm::Compare(const Comm& \fIcomm1\fP, const Comm& \fIcomm2\fP)
.fi
.SH INPUT PARAMETERS
.ft R
@ -62,7 +57,7 @@ Fortran only: Error status (integer).
MPI_IDENT results if and only if comm1 and comm2 are handles for the same object (identical groups and same contexts). MPI_CONGRUENT results if the underlying groups are identical in constituents and rank order; these communicators differ only by context. MPI_SIMILAR results if the group members of both communicators are the same but the rank order differs. MPI_UNEQUAL results otherwise.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
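A minimal C sketch: a duplicate of MPI_COMM_WORLD has the same group in the same order but a new context, so the comparison yields MPI_CONGRUENT:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Comm dup;
        int result;
        MPI_Init(&argc, &argv);
        MPI_Comm_dup(MPI_COMM_WORLD, &dup);
        MPI_Comm_compare(MPI_COMM_WORLD, dup, &result);
        printf("congruent: %s\n", result == MPI_CONGRUENT ? "yes" : "no");
        MPI_Comm_free(&dup);
        MPI_Finalize();
        return 0;
    }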


@ -3,6 +3,7 @@
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2007-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Comm_connect 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -37,13 +38,6 @@ MPI_Comm_connect(\fIport_name\fP, \fIinfo\fP, \fIroot\fP, \fIcomm\fP, \fInewcomm
TYPE(MPI_Comm), INTENT(OUT) :: \fInewcomm\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
MPI::Intercomm MPI::Intracomm::Connect(const char* \fIport_name\fP,
const MPI::Info& \fIinfo\fP, int \fIroot\fP) const
.fi
.SH INPUT PARAMETERS
.ft R
@ -81,7 +75,7 @@ The \fIport_name\fP parameter is the address of the server. It must be the same
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
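And the matching hypothetical client for the server sketched under MPI_Comm_accept, assuming the port string is passed as the first command-line argument:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm server;
        MPI_Init(&argc, &argv);
        /* argv[1] carries the port string printed by the server */
        MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);
        /* ... communicate with the server over the intercommunicator ... */
        MPI_Comm_disconnect(&server);
        MPI_Finalize();
        return 0;
    }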


@ -3,6 +3,7 @@
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Comm_create 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -33,14 +34,6 @@ MPI_Comm_create(\fIcomm\fP, \fIgroup\fP, \fInewcomm\fP, \fIierror\fP)
TYPE(MPI_Comm), INTENT(OUT) :: \fInewcomm\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
MPI::Intercomm MPI::Intercomm::Create(const Group& \fIgroup\fP) const
MPI::Intracomm MPI::Intracomm::Create(const Group& \fIgroup\fP) const
.fi
.SH INPUT PARAMETER
.ft R
@ -83,7 +76,7 @@ order. Otherwise the call is erroneous.
MPI_Comm_create provides a means of making a subset of processes for the purpose of separate MIMD computation, with separate communication space. \fInewcomm\fR, which is created by MPI_Comm_create, can be used in subsequent calls to MPI_Comm_create (or other communicator constructors) to further subdivide a computation into parallel sub-computations. A more general service is provided by MPI_Comm_split.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
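A C sketch of carving out a sub-computation as described: the even ranks form a new communicator, and the remaining ranks receive MPI_COMM_NULL:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Group world_grp, even_grp;
        MPI_Comm even_comm;
        int size;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int n = (size + 1) / 2;
        int *evens = malloc(n * sizeof(int));
        for (int i = 0; i < n; ++i)
            evens[i] = 2 * i;
        MPI_Comm_group(MPI_COMM_WORLD, &world_grp);
        MPI_Group_incl(world_grp, n, evens, &even_grp);
        MPI_Comm_create(MPI_COMM_WORLD, even_grp, &even_comm); /* collective */
        if (even_comm != MPI_COMM_NULL) {    /* odd ranks get MPI_COMM_NULL */
            /* ... collectives among even ranks only ... */
            MPI_Comm_free(&even_comm);
        }
        MPI_Group_free(&even_grp);
        MPI_Group_free(&world_grp);
        free(evens);
        MPI_Finalize();
        return 0;
    }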


@ -2,6 +2,7 @@
.\" Copyright 2009-2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Comm_create_errhandler 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -33,14 +34,6 @@ MPI_Comm_create_errhandler(\fIcomm_errhandler_fn\fP, \fIerrhandler\fP, \fIierror
TYPE(MPI_Errhandler), INTENT(OUT) :: \fIerrhandler\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
static MPI::Errhandler
MPI::Comm::Create_errhandler(MPI::Comm::Errhandler_function*
\fIfunction\fP)
.fi
.SH DEPRECATED TYPE NAME NOTE
.ft R
@ -85,15 +78,9 @@ In Fortran, the user routine should be of this form:
SUBROUTINE COMM_ERRHANDLER_FUNCTION(COMM, ERROR_CODE, \&...)
INTEGER COMM, ERROR_CODE
.fi
.sp
In C++, the user routine should be of this form:
.sp
.nf
typedef void MPI::Comm::Errhandler_function(MPI_Comm &, int *, \&...);
.fi
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
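A C sketch of installing a handler that reports rather than aborts; the name warn_handler is hypothetical, and the variadic tail of the MPI_Comm_errhandler_function callback is simply ignored:

    #include <mpi.h>
    #include <stdio.h>

    static void warn_handler(MPI_Comm *comm, int *code, ...)
    {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(*code, msg, &len);
        fprintf(stderr, "MPI error intercepted: %s\n", msg);
    }

    int main(int argc, char **argv)
    {
        MPI_Errhandler eh;
        MPI_Init(&argc, &argv);
        MPI_Comm_create_errhandler(warn_handler, &eh);
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, eh);
        /* errors on MPI_COMM_WORLD now invoke warn_handler, not abort */
        MPI_Errhandler_free(&eh);
        MPI_Finalize();
        return 0;
    }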


@ -2,6 +2,7 @@
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Comm_create_keyval 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
@ -41,15 +42,6 @@ MPI_Comm_create_keyval(\fIcomm_copy_attr_fn\fP, \fIcomm_delete_attr_fn\fP, \fIco
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: \fIextra_state\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
static in MPI::Comm::Create_keyval(MPI::Comm::Copy_attr_function*
\fIcomm_copy_attr_fn\fP,
MPI::Comm::Delete_attr_function* \fIcomm_delete_attr_fn\fP,
void* \fIextra_state\fP)
.fi
.SH INPUT PARAMETERS
.ft R
@ -76,7 +68,7 @@ Fortran only: Error status (integer).
.ft R
This function replaces MPI_Keyval_create, the use of which is deprecated. The C binding is identical. The Fortran binding differs in that \fIextra_state\fP is an address-sized integer. Also, the copy and delete callback functions have Fortran bindings that are consistent with address-sized attributes.
.sp
The argument \fIcomm_copy_attr_fn\fP may be specified as MPI_COMM_NULL_COPY_FN or MPI_COMM_DUP_FN from C, C++, or Fortran. MPI_COMM_NULL_COPY_FN is a function that does nothing more than returning \fIflag\fP = 0 and MPI_SUCCESS. MPI_COMM_DUP_FN is a simple-minded copy function that sets \fIflag\fP = 1, returns the value of \fIattribute_val_in\fP in \fIattribute_val_out\fP, and returns MPI_SUCCESS. These replace the MPI-1 predefined callbacks MPI_NULL_COPY_FN and MPI_DUP_FN, the use of which is deprecated.
The argument \fIcomm_copy_attr_fn\fP may be specified as MPI_COMM_NULL_COPY_FN or MPI_COMM_DUP_FN from C or Fortran. MPI_COMM_NULL_COPY_FN is a function that does nothing more than returning \fIflag\fP = 0 and MPI_SUCCESS. MPI_COMM_DUP_FN is a simple-minded copy function that sets \fIflag\fP = 1, returns the value of \fIattribute_val_in\fP in \fIattribute_val_out\fP, and returns MPI_SUCCESS. These replace the MPI-1 predefined callbacks MPI_NULL_COPY_FN and MPI_DUP_FN, the use of which is deprecated.
.sp
The C callback functions are:
.sp
@ -110,19 +102,6 @@ SUBROUTINE COMM_DELETE_ATTR_FN(\fICOMM, COMM_KEYVAL, ATTRIBUTE_VAL, EXTRA_STATE,
INTEGER \fICOMM, COMM_KEYVAL, IERROR\fP
INTEGER(KIND=MPI_ADDRESS_KIND) \fIATTRIBUTE_VAL, EXTRA_STATE\fP
.fi
.sp
The C++ callbacks are:
.sp
.nf
typedef int MPI::Comm::Copy_attr_function(const MPI::Comm& \fIoldcomm\fP,
int \fIcomm_keyval\fP, void* \fIextra_state\fP, void* \fIattribute_val_in\fP,
void* \fIattribute_val_out\fP, bool& \fIflag\fP);
.fi
and
.nf
typedef int MPI::Comm::Delete_attr_function(MPI::Comm& \fIcomm\fP,
int \fIcomm_keyval\fP, void* \fIattribute_val\fP, void* \fIextra_state\fP);
.fi
.SH FORTRAN 77 NOTES
.ft R
@ -138,7 +117,7 @@ where MPI_ADDRESS_KIND is a constant defined in mpif.h
and gives the length of the declared integer in bytes.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
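A C sketch of the full attribute round trip using the predefined MPI_COMM_NULL_COPY_FN and MPI_COMM_NULL_DELETE_FN callbacks discussed above (the payload value is arbitrary):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int keyval, flag;
        int payload = 42;
        void *val;
        MPI_Init(&argc, &argv);

        MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, MPI_COMM_NULL_DELETE_FN,
                               &keyval, NULL /* extra_state */);
        MPI_Comm_set_attr(MPI_COMM_WORLD, keyval, &payload);
        MPI_Comm_get_attr(MPI_COMM_WORLD, keyval, &val, &flag);
        if (flag)
            printf("cached attribute = %d\n", *(int *)val);
        MPI_Comm_delete_attr(MPI_COMM_WORLD, keyval);
        MPI_Comm_free_keyval(&keyval);
        MPI_Finalize();
        return 0;
    }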
