
* add --enable-mca-static to specify components that should be statically
  linked into libmpi
* add --enable-mca-direct to specify components that should be directly
  called (instead of going through component structs and the like).  The
  components and component frameworks must explicitly support this.
  Currently, only the TEG PML does so.
* Updated all the calls to the PML to use a macro so that they can either
  be direct-called or called through the function pointer interfaces (aka
  the component infrastructure); a sketch of the idea follows below.
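The macro is the heart of the change: every call site is written once as
MCA_PML_CALL(...), and configure decides whether that expands to a direct
call into the single chosen component or to the usual dispatch through the
mca_pml function-pointer struct.  A minimal sketch of how such a macro can
be arranged, assuming the TEG component is the direct-call target (this is
illustrative only, not the literal pml.h contents):

    /* Sketch only: MCA_pml_DIRECT_CALL comes from the configure machinery
     * in this commit; hard-coding the TEG prefix is purely illustrative,
     * since the real macro is built from the selected component's name. */
    #if MCA_pml_DIRECT_CALL
    #include "mca/pml/pml_direct_call.h"   /* generated header, see below */
    #define MCA_PML_CALL(fn) mca_pml_teg_ ## fn
    #else
    #define MCA_PML_CALL(fn) mca_pml.pml_ ## fn
    #endif

    /* Either way a call site reads the same:
     *     rc = MCA_PML_CALL(add_comm(newcomm));
     * expands to mca_pml_teg_add_comm(newcomm) when direct-called, or to
     * mca_pml.pml_add_comm(newcomm) through the component struct. */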

This commit was SVN r5291.
This commit is contained in:
Brian Barrett 2005-04-13 03:19:48 +00:00
parent a283072b22
commit 309ff000a6
92 changed files with 706 additions and 428 deletions

ISSUES

@ -1,67 +1,9 @@
Removed:

Undecided timing:
-----------------
- if an MPI process fails (e.g., it seg faults), it causes orterun to
  hang.  This is with the rsh pls.
  --> Looks like the problem is with what happens when you set the
      state of the process in the soh to ORTE_PROC_STATE_ABORTED.
  --> Ralph is looking at this
- if the daemon is not found or fails to start, orterun will hang.  No
  indication is given to the users that something went wrong.
  --> Brian thinks he fixed this, but since he sets the state to
      ORTE_PROC_STATE_ABORTED, it won't be really clear until the
      above issue is fixed.  But it at least tells you what went
      wrong.
- $prefix/etc/hosts vs. $prefix/etc/openmpi-default-hostfile
  --> Brian temporarily added symlink in $prefix/etc/ for
      openmpi-default-hostfile -> hosts if there isn't already
      a hosts file so that he doesn't have to create one every
      time he does "rm -rf $prefix && make install".  Will file
      bug so that this can be fixed (and will fix in the trunk)

Pre-milestone:
--------------
- singleton mpi doesn't work
- Ralph: Populate orte_finalize()

Post-milestone:
---------------
- ras_base_alloc: doesn't allow for oversubscribing like this:
    eddie: cpu=2
    vogon: cpu=2 max-slots=4
    mpirun -np 6 uptime
  It barfs because it tries to evenly divide the remaining unallocated
  procs across all nodes (i.e., 1 each on eddie/vogon) rather than
  seeing that vogon can take the remaining 2.
- Jeff: TM needs to be re-written to use daemons (can't hog TM
  connection forever)
- Jeff: make the mapper be able to handle app->map_data
- Jeff: add function callback in cmd_line_t stuff
- Jeff: does cmd_line_t need to *get* MCA params if a command line
  param is not taken but an MCA param is available?
  - consider empty string problem...
- ?: Friendlier error messages (e.g., if no nodes -- need something
  meaningful to tell the user)
- Ralph: compare and set function in GPR
- Jeff: collapse MCA params from 3 names to 1 name
- ?: Apply LANL copyright to trunk (post all merging activity)
- Probably during/after OMPI/ORTE split:
  - re-merge [orte|ompi]_pointer_array and [orte|ompi]_value_array

Added:

- Need some type of template system for *_direct_call_headers.h file
  so that it isn't touched every time it changes.
  --> DONE
- Error if we want direct call and the component fails to work or
  support direct calling
  --> DONE
- configure way to know doing direct
  --> DONE
- ompi_info to know a component is direct call-able


@ -27,15 +27,82 @@ AC_DEFUN([OMPI_MCA],[
# --disable-mca-dso
#
AC_ARG_ENABLE(mca-dso,
AC_HELP_STRING([--enable-mca-dso=LIST],
[comma-separated list of types and/or
type-component pairs that will be built as
run-time loadable components (as opposed to
statically linked in), if supported on this
platform. The default is to build all components
as DSOs; the --disable-mca-dso[=LIST] form can be
used to disable building all or some
types/components as DSOs]))
AC_ARG_ENABLE(mca-static,
AC_HELP_STRING([--enable-mca-static=LIST],
[comma-separated list of types and/or
type-component pairs that will be built statically
linked into the library. The default (if DSOs are
supported) is to build all components as DSOs.
Enabling a component as static disables it
building as a DSO.]))
AC_ARG_ENABLE(mca-direct,
AC_HELP_STRING([--enable-mca-direct=LIST],
[comma-separated list of type-component pairs that
will be hard coded as the one component to use for
a given component type, saving the (small)
overhead of the component architecture. LIST must
not be empty and implies given component pairs are
build as static components.]))
#
# First, add all the mca-direct components / types into the mca-static
# lists and create a list of component types that are direct compile,
# in the form DIRECT_[type]=[component]
#
AC_MSG_CHECKING([which components should be direct-linked into the library])
if test "$enable_mca_direct" = "yes" ; then
AC_MSG_RESULT([yes])
AC_MSG_ERROR([*** The enable-mca-direct flag requires an explicit list of
*** type-component pairs. For example, --enable-mca-direct=pml-teg,coll-basic])
elif test ! -z "$enable_mca_direct" -a "$enable_mca_direct" != "" ; then
#
# we need to add this into the static list, unless the static list
# is everything
#
if test "$enable_mca_static" = "no" ; then
AC_MSG_WARN([*** Re-enabling static component support for direct call])
enable_mca_static="$enable_mca_direct"
elif test -z "$enable_mca_static" ; then
enable_mca_static="$enable_mca_direct"
elif test "$enable_mca_static" != "yes" ; then
enable_mca_static="$enable_mca_direct,$enable_mca_static"
fi
ifs_save="$IFS"
IFS="${IFS}$PATH_SEPARATOR,"
msg=
for item in $enable_mca_direct; do
type="`echo $item | cut -f1 -d-`"
comp="`echo $item | cut -f2- -d-`"
if test -z $type -o -z $comp ; then
AC_MSG_ERROR([*** The enable-mca-direct flag requires a
*** list of type-component pairs. Invalid input detected.])
else
str="`echo DIRECT_$type=$comp | sed s/-/_/g`"
eval $str
msg="$item $msg"
fi
done
IFS="$ifs_save"
fi
AC_MSG_RESULT([$msg])
unset msg
#
# Second, set the DSO_all and STATIC_all variables. conflict
# resolution (prefer static) is done in the big loop below
#
AC_MSG_CHECKING([which components should be run-time loadable])
if test "$enable_shared" = "no"; then if test "$enable_shared" = "no"; then
DSO_all=0 DSO_all=0
msg=none msg=none
@ -51,9 +118,9 @@ else
IFS="${IFS}$PATH_SEPARATOR," IFS="${IFS}$PATH_SEPARATOR,"
msg= msg=
for item in $enable_mca_dso; do for item in $enable_mca_dso; do
str="`echo DSO_$item=1 | sed s/-/_/g`" str="`echo DSO_$item=1 | sed s/-/_/g`"
eval $str eval $str
msg="$item $msg" msg="$item $msg"
done done
IFS="$ifs_save" IFS="$ifs_save"
fi fi
@ -64,6 +131,29 @@ if test "$enable_shared" = "no"; then
AC_MSG_WARN([*** Building MCA components as DSOs automatically disabled])
fi
AC_MSG_CHECKING([which components should be static])
if test "$enable_mca_static" = "yes"; then
STATIC_all=1
msg=all
elif test -z "$enable_mca_static" -o "$enable_mca_static" = "no"; then
STATIC_all=0
msg=none
else
STATIC_all=0
ifs_save="$IFS"
IFS="${IFS}$PATH_SEPARATOR,"
msg=
for item in $enable_mca_static; do
str="`echo STATIC_$item=1 | sed s/-/_/g`"
eval $str
msg="$item $msg"
done
IFS="$ifs_save"
fi
AC_MSG_RESULT([$msg])
unset msg
# The list of MCA types (it's fixed)
AC_MSG_CHECKING([for MCA types])
@ -101,10 +191,10 @@ for type in $found_types; do
fi fi
total_dir="." total_dir="."
for dir_part in `IFS='/\\'; set X $outdir; shift; echo "$[@]"`; do for dir_part in `IFS='/\\'; set X $outdir; shift; echo "$[@]"`; do
total_dir=$total_dir/$dir_part total_dir=$total_dir/$dir_part
test -d "$total_dir" || test -d "$total_dir" ||
mkdir "$total_dir" || mkdir "$total_dir" ||
AC_MSG_ERROR([cannot create $total_dir]) AC_MSG_ERROR([cannot create $total_dir])
done done
# Also ensure that the dynamic-mca base directory exists # Also ensure that the dynamic-mca base directory exists
@ -112,10 +202,10 @@ for type in $found_types; do
total_dir="." total_dir="."
dyndir=src/dynamic-mca/$type dyndir=src/dynamic-mca/$type
for dir_part in `IFS='/\\'; set X $dyndir; shift; echo "$[@]"`; do for dir_part in `IFS='/\\'; set X $dyndir; shift; echo "$[@]"`; do
total_dir=$total_dir/$dir_part total_dir=$total_dir/$dir_part
test -d "$total_dir" || test -d "$total_dir" ||
mkdir "$total_dir" || mkdir "$total_dir" ||
AC_MSG_ERROR([cannot create $total_dir]) AC_MSG_ERROR([cannot create $total_dir])
done done
# Remove any previous generated #include files. # Remove any previous generated #include files.
@ -133,16 +223,31 @@ for type in $found_types; do
case "$type" in case "$type" in
crmpi) crmpi)
generic_type="cr" generic_type="cr"
;; ;;
crompi) crompi)
generic_type="cr" generic_type="cr"
;; ;;
*) *)
generic_type="$type" generic_type="$type"
;; ;;
esac esac
# set the direct / no direct flag
str="DIRECT_COMPONENT=\$DIRECT_${type}"
eval $str
if test ! -z "$DIRECT_COMPONENT" ; then
str="MCA_${type}_DIRECT_CALL_COMPONENT=$DIRECT_COMPONENT"
eval $str
str="MCA_${type}_DIRECT_CALL=1"
eval $str
else
str="MCA_${type}_DIRECT_CALL_COMPONENT="
eval $str
str="MCA_${type}_DIRECT_CALL=0"
eval $str
fi
# Iterate through the list of no-configure components # Iterate through the list of no-configure components
foo="found_components=\$MCA_${type}_NO_CONFIGURE_SUBDIRS" foo="found_components=\$MCA_${type}_NO_CONFIGURE_SUBDIRS"
@ -152,30 +257,40 @@ for type in $found_types; do
m=`basename "$component"` m=`basename "$component"`
# build if: # build if:
# - the component type is direct and we are that component
# - there is no ompi_ignore file # - there is no ompi_ignore file
# - there is an ompi_ignore, but there is an empty ompi_unignore # - there is an ompi_ignore, but there is an empty ompi_unignore
# - there is an ompi_ignore, but username is in ompi_unignore # - there is an ompi_ignore, but username is in ompi_unignore
if test -d $srcdir/$component ; then if test -d $srcdir/$component ; then
# decide if we want the component to be built or not. This # decide if we want the component to be built or not. This
# is spread out because some of the logic is a little complex # is spread out because some of the logic is a little complex
# and test's syntax isn't exactly the greatest. We want to # and test's syntax isn't exactly the greatest. We want to
# build the component by default. # build the component by default.
want_component=1 want_component=1
if test -f $srcdir/$component/.ompi_ignore ; then if test -f $srcdir/$component/.ompi_ignore ; then
# If there is an ompi_ignore file, don't build # If there is an ompi_ignore file, don't build
# the component. Note that this decision can be # the component. Note that this decision can be
# overriden by the unignore logic below. # overriden by the unignore logic below.
want_component=0 want_component=0
fi fi
if test -f $srcdir/$component/.ompi_unignore ; then if test -f $srcdir/$component/.ompi_unignore ; then
# if there is an empty ompi_unignore, that is # if there is an empty ompi_unignore, that is
# equivalent to having your userid in the unignore file. # equivalent to having your userid in the unignore file.
# If userid is in the file, unignore the ignore file. # If userid is in the file, unignore the ignore file.
if test ! -s $srcdir/$component/.ompi_unignore ; then if test ! -s $srcdir/$component/.ompi_unignore ; then
want_component=1 want_component=1
elif test ! -z "`grep $USER $srcdir/$component/.ompi_unignore`" ; then elif test ! -z "`grep $USER $srcdir/$component/.ompi_unignore`" ; then
want_component=1 want_component=1
fi fi
fi
# if this component type is direct and we are not it, we don't want
# to be built. Otherwise, we do want to be built.
if test ! -z "$DIRECT_COMPONENT" ; then
if test "$DIRECT_COMPONENT" = "$m" ; then
want_component=1
else
want_component=0
fi
fi fi
if test "$want_component" = "1" ; then if test "$want_component" = "1" ; then
ompi_show_subtitle "MCA component $type:$m (no configure script)" ompi_show_subtitle "MCA component $type:$m (no configure script)"
@ -203,6 +318,19 @@ for type in $found_types; do
fi fi
foo="BUILD_${type}_${m}_DSO=$value" foo="BUILD_${type}_${m}_DSO=$value"
eval $foo eval $foo
# double check that we can build direct if that was requested
# DIRECT_CALL_HEADER *must* be defined by the component
# (in its post configure) if it
# can be direct built, so we use that as a keyword to tell us
# whether the component was successfully setup or not
if test "$DIRECT_COMPONENT" = "$m" -a \
-z "$MCA_${type}_DIRECT_CALL_HEADER" ; then
AC_MSG_ERROR([${type} component ${m} was requested to be directly linked
into libmpi, but does not support such a configuration. Please choose
another ${type} component for direct compilation or allow all components
of type ${type} to be loaded at runtime.])
fi
fi fi
fi fi
done done
@ -211,24 +339,36 @@ for type in $found_types; do
# etc. # etc.
for component in $srcdir/src/mca/$type/*; do for component in $srcdir/src/mca/$type/*; do
FOUND=0 FOUND=0
HAPPY=0 HAPPY=0
m="`basename $component`" m="`basename $component`"
# build if: # build if:
# - the component type is direct and we are that component
# - there is no ompi_ignore file # - there is no ompi_ignore file
# - there is an ompi_ignore, but there is an empty ompi_unignore # - there is an ompi_ignore, but there is an empty ompi_unignore
# - there is an ompi_ignore, but username is in ompi_unignore # - there is an ompi_ignore, but username is in ompi_unignore
if test -d $component -a -x $component/configure ; then if test -d $component -a -x $component/configure ; then
want_component=1 want_component=1
if test -f $srcdir/$component/.ompi_ignore ; then if test -f $srcdir/$component/.ompi_ignore ; then
want_component=0 want_component=0
fi fi
if test -f $srcdir/$component/.ompi_unignore ; then if test -f $srcdir/$component/.ompi_unignore ; then
if test ! -s $srcdir/$component/.ompi_unignore ; then if test ! -s $srcdir/$component/.ompi_unignore ; then
want_component=1 want_component=1
elif test ! -z "`grep $USER $srcdir/$component/.ompi_unignore`" ; then elif test ! -z "`grep $USER $srcdir/$component/.ompi_unignore`" ; then
want_component=1 want_component=1
fi fi
fi
# if this component type is direct and we are not it, we don't want
# to be built. Otherwise, we do want to be built.
if test ! -z "$DIRECT_COMPONENT" ; then
if test "$DIRECT_COMPONENT" = "$m" ; then
# BWB - need some check in here to make sure component
# can be built direct!
want_component=1
else
want_component=0
fi
fi fi
if test "$want_component" = "1" ; then if test "$want_component" = "1" ; then
ompi_show_subtitle "MCA component $type:$m (need to configure)" ompi_show_subtitle "MCA component $type:$m (need to configure)"
@ -247,11 +387,20 @@ for type in $found_types; do
[$ompi_subdir_args], [$ompi_subdir_args],
[HAPPY=1], [HAPPY=0]) [HAPPY=1], [HAPPY=0])
fi fi
fi fi
# Process this component # Process this component
MCA_PROCESS_COMPONENT($FOUND, $HAPPY, $type, $m) MCA_PROCESS_COMPONENT($FOUND, $HAPPY, $type, $m)
# double check that we can build direct if that was requested
if test "$DIRECT_COMPONENT" = "$m" -a \
-z "$MCA_${type}_DIRECT_CALL_HEADER" ; then
AC_MSG_ERROR([${type} component ${m} was requested to be directly linked
into libmpi, but does not support such a configuration. Please choose
another ${type} component for direct compilation or allow all components
of type ${type} to be loaded at runtime.])
fi
done done
# m4 weirdness: must also do the echo after the sort, or we get a # m4 weirdness: must also do the echo after the sort, or we get a
@ -376,6 +525,21 @@ AC_SUBST(MCA_pls_STATIC_SUBDIRS)
AC_SUBST(MCA_pls_DSO_SUBDIRS) AC_SUBST(MCA_pls_DSO_SUBDIRS)
AC_SUBST(MCA_pls_STATIC_LTLIBS) AC_SUBST(MCA_pls_STATIC_LTLIBS)
AC_SUBST(MCA_rml_ALL_SUBDIRS)
AC_SUBST(MCA_rml_STATIC_SUBDIRS)
AC_SUBST(MCA_rml_DSO_SUBDIRS)
AC_SUBST(MCA_rml_STATIC_LTLIBS)
AC_SUBST(MCA_soh_ALL_SUBDIRS)
AC_SUBST(MCA_soh_STATIC_SUBDIRS)
AC_SUBST(MCA_soh_DSO_SUBDIRS)
AC_SUBST(MCA_soh_STATIC_LTLIBS)
AC_SUBST(MCA_svc_ALL_SUBDIRS)
AC_SUBST(MCA_svc_STATIC_SUBDIRS)
AC_SUBST(MCA_svc_DSO_SUBDIRS)
AC_SUBST(MCA_svc_STATIC_LTLIBS)
# MPI types # MPI types
AC_SUBST(MCA_allocator_ALL_SUBDIRS) AC_SUBST(MCA_allocator_ALL_SUBDIRS)
@ -407,27 +571,13 @@ AC_SUBST(MCA_pml_ALL_SUBDIRS)
AC_SUBST(MCA_pml_STATIC_SUBDIRS) AC_SUBST(MCA_pml_STATIC_SUBDIRS)
AC_SUBST(MCA_pml_DSO_SUBDIRS) AC_SUBST(MCA_pml_DSO_SUBDIRS)
AC_SUBST(MCA_pml_STATIC_LTLIBS) AC_SUBST(MCA_pml_STATIC_LTLIBS)
OMPI_SETUP_DIRECT_CALL(pml)
AC_SUBST(MCA_ptl_ALL_SUBDIRS) AC_SUBST(MCA_ptl_ALL_SUBDIRS)
AC_SUBST(MCA_ptl_STATIC_SUBDIRS) AC_SUBST(MCA_ptl_STATIC_SUBDIRS)
AC_SUBST(MCA_ptl_DSO_SUBDIRS) AC_SUBST(MCA_ptl_DSO_SUBDIRS)
AC_SUBST(MCA_ptl_STATIC_LTLIBS) AC_SUBST(MCA_ptl_STATIC_LTLIBS)
AC_SUBST(MCA_rml_ALL_SUBDIRS)
AC_SUBST(MCA_rml_STATIC_SUBDIRS)
AC_SUBST(MCA_rml_DSO_SUBDIRS)
AC_SUBST(MCA_rml_STATIC_LTLIBS)
AC_SUBST(MCA_soh_ALL_SUBDIRS)
AC_SUBST(MCA_soh_STATIC_SUBDIRS)
AC_SUBST(MCA_soh_DSO_SUBDIRS)
AC_SUBST(MCA_soh_STATIC_LTLIBS)
AC_SUBST(MCA_svc_ALL_SUBDIRS)
AC_SUBST(MCA_svc_STATIC_SUBDIRS)
AC_SUBST(MCA_svc_DSO_SUBDIRS)
AC_SUBST(MCA_svc_STATIC_LTLIBS)
AC_SUBST(MCA_topo_ALL_SUBDIRS) AC_SUBST(MCA_topo_ALL_SUBDIRS)
AC_SUBST(MCA_topo_STATIC_SUBDIRS) AC_SUBST(MCA_topo_STATIC_SUBDIRS)
AC_SUBST(MCA_topo_DSO_SUBDIRS) AC_SUBST(MCA_topo_DSO_SUBDIRS)
@ -473,26 +623,42 @@ if test "$HAPPY" = "1"; then
str="SHARED_COMPONENT=\$DSO_${type}_$m" str="SHARED_COMPONENT=\$DSO_${type}_$m"
eval $str eval $str
str="STATIC_TYPE=\$STATIC_$type"
eval $str
str="STATIC_GENERIC_TYPE=\$STATIC_$generic_type"
eval $str
str="STATIC_COMPONENT=\$STATIC_${type}_$m"
eval $str
shared_mode_override=static shared_mode_override=static
# Setup for either shared or static # Setup for either shared or static
if test "$STATIC_TYPE" = "1" -o \
if test "$shared_mode_override" = "dso" -o \ "$STATIC_GENERIC_TYPE" = "1" -o \
"$SHARED_TYPE" = "1" -o \ "$STATIC_COMPONENT" = "1" -o \
"$SHARED_GENERIC_TYPE" = "1" -o \ "$STATIC_all" = "1" ; then
"$SHARED_COMPONENT" = "1" -o \ compile_mode="static"
"$DSO_all" = "1"; then elif test "$shared_mode_override" = "dso" -o \
compile_mode="dso" "$SHARED_TYPE" = "1" -o \
echo $m >> $outfile.dso "$SHARED_GENERIC_TYPE" = "1" -o \
rm -f "src/dynamic-mca/$type/$m" "$SHARED_COMPONENT" = "1" -o \
$LN_S "$OMPI_TOP_BUILDDIR/src/mca/$type/$m" \ "$DSO_all" = "1"; then
"src/dynamic-mca/$type/$m" compile_mode="dso"
else else
static_ltlibs="mca/$type/$m/libmca_${type}_${m}.la $static_ltlibs" compile_mode="static"
echo "extern const mca_base_component_t mca_${type}_${m}_component;" >> $outfile.extern fi
echo " &mca_${type}_${m}_component, " >> $outfile.struct
compile_mode="static" if test "$compile_mode" = "dso" ; then
echo $m >> $outfile.static echo $m >> $outfile.dso
rm -f "src/dynamic-mca/$type/$m"
$LN_S "$OMPI_TOP_BUILDDIR/src/mca/$type/$m" \
"src/dynamic-mca/$type/$m"
else
static_ltlibs="mca/$type/$m/libmca_${type}_${m}.la $static_ltlibs"
echo "extern const mca_base_component_t mca_${type}_${m}_component;" >> $outfile.extern
echo " &mca_${type}_${m}_component, " >> $outfile.struct
echo $m >> $outfile.static
fi fi
# Output pretty results # Output pretty results
@ -500,20 +666,24 @@ if test "$HAPPY" = "1"; then
AC_MSG_CHECKING([if MCA component $type:$m can compile]) AC_MSG_CHECKING([if MCA component $type:$m can compile])
AC_MSG_RESULT([yes]) AC_MSG_RESULT([yes])
AC_MSG_CHECKING([for MCA component $type:$m compile mode]) AC_MSG_CHECKING([for MCA component $type:$m compile mode])
AC_MSG_RESULT([$compile_mode]) if test "$DIRECT_COMPONENT" = "$m" ; then
AC_MSG_RESULT([$compile_mode - direct])
else
AC_MSG_RESULT([$compile_mode])
fi
# If there's an output file, add the values to # If there's an output file, add the values to
# scope_EXTRA_flags. # scope_EXTRA_flags.
if test -f $infile; then if test -f $infile; then
# First check for the ABORT tag # First check for the ABORT tag
line="`grep ABORT= $infile | cut -d= -f2-`" line="`grep ABORT= $infile | cut -d= -f2-`"
if test -n "$line" -a "$line" != "no"; then if test -n "$line" -a "$line" != "no"; then
AC_MSG_WARN([MCA component configure script told me to abort]) AC_MSG_WARN([MCA component configure script told me to abort])
AC_MSG_ERROR([cannot continue]) AC_MSG_ERROR([cannot continue])
fi fi
# If we're not compiling statically, then only take the # If we're not compiling statically, then only take the
# "ALWAYS" tags (a uniq will be performed at the end -- no # "ALWAYS" tags (a uniq will be performed at the end -- no
@ -559,13 +729,34 @@ if test "$HAPPY" = "1"; then
for flags in CFLAGS CXXFLAGS FFLAGS FCFLAGS LDFLAGS LIBS; do for flags in CFLAGS CXXFLAGS FFLAGS FCFLAGS LDFLAGS LIBS; do
var="WRAPPER_EXTRA_${flags}" var="WRAPPER_EXTRA_${flags}"
line="`grep $var= $infile | cut -d= -f2-`" line="`grep $var= $infile | cut -d= -f2-`"
eval "line=$line" eval "line=$line"
if test -n "$line"; then if test -n "$line"; then
str="$var="'"$'"$var $line"'"' str="$var="'"$'"$var $line"'"'
eval $str eval $str
fi fi
done done
fi fi
dnl check for direct call header to include. This will be
dnl AC_SUBSTed later.
if test "$DIRECT_COMPONENT" = "$m" ; then
if test "`grep DIRECT_CALL_HEADER $infile`" != "" ; then
line="`grep DIRECT_CALL_HEADER $infile | cut -d= -f2-`"
str="MCA_${type}_DIRECT_CALL_HEADER=\\\"$line\\\""
eval $str
else
AC_MSG_ERROR([*** ${type} component ${m} was supposed to be direct-called, but
*** does not appear to support direct calling.
*** Aborting])
fi
fi
else
# were we supposed to have found something in the
# post_configure.sh, but the file didn't exist?
if test "$DIRECT_COMPONENT" = "$m" ; then
AC_MSG_ERROR([*** ${type} component ${m} was supposed to be direct-called, but
*** does not appear to support direct calling.
*** Aborting])
fi
fi fi
elif test "$FOUND" = "1"; then elif test "$FOUND" = "1"; then
AC_MSG_CHECKING([if MCA component $type:$m can compile]) AC_MSG_CHECKING([if MCA component $type:$m can compile])
@ -579,10 +770,39 @@ elif test "$FOUND" = "1"; then
str="bar="'"$'"with_$generic_type"'"' str="bar="'"$'"with_$generic_type"'"'
eval $str eval $str
if test "$foo" = "$m" -o "$bar" = "$m"; then if test "$foo" = "$m" -o "$bar" = "$m"; then
AC_MSG_WARN([MCA component "$m" failed to configure properly]) AC_MSG_WARN([MCA component "$m" failed to configure properly])
AC_MSG_WARN([This component was selected as the default]) AC_MSG_WARN([This component was selected as the default])
AC_MSG_ERROR([Cannot continue]) AC_MSG_ERROR([Cannot continue])
exit 1 exit 1
fi fi
fi fi
]) ])
AC_DEFUN([OMPI_SETUP_DIRECT_CALL],[
AC_SUBST(MCA_$1_DIRECT_CALL_HEADER)
AC_DEFINE_UNQUOTED([MCA_$1_DIRECT_CALL], [$MCA_$1_DIRECT_CALL],
[Defined to 1 if $1 should use direct calls instead of components])
AC_DEFINE_UNQUOTED([MCA_$1_DIRECT_CALL_COMPONENT], [$MCA_$1_DIRECT_CALL_COMPONENT],
[name of component to use for direct calls, if MCA_$1_DIRECT_CALL is 1])
OMPI_WRITE_DIRECT_CALL_HEADER($1)
])
AC_DEFUN([OMPI_WRITE_DIRECT_CALL_HEADER],[
AC_CONFIG_FILES(src/mca/$1/$1_direct_call.h.template)
AC_CONFIG_COMMANDS($1-direct,
[if test -f "src/mca/$1/$1_direct_call"; then
diff "src/mca/$1/$1_direct_call.h" "src/mca/$1/$1_direct_call.h.template" > /dev/null 2>&1
if test "$?" != "0"; then
cp "src/mca/$1/$1_direct_call.h.template" "src/mca/$1/$1_direct_call.h"
echo "config.status: regenerating src/mca/$1/$1_direct_call.h"
else
echo "config.state: src/mca/$1/$1_direct_call.h unchanged"
fi
else
cp "src/mca/$1/$1_direct_call.h.template" "src/mca/$1/$1_direct_call.h"
echo "config.status: creating src/mca/$1/$1_direct_call.h"
fi
rm src/mca/$1/$1_direct_call.h.template])
])
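To make the direct-call path concrete: the machinery above AC_SUBSTs
MCA_$1_DIRECT_CALL_HEADER (set from the DIRECT_CALL_HEADER line in the
component's post_configure.sh) and has config.status turn
src/mca/$1/$1_direct_call.h.template into src/mca/$1/$1_direct_call.h.
A hedged sketch of roughly what the generated header could look like for
"--enable-mca-direct=pml-teg"; the include path and guard names here are
assumptions, not the literal template:

    /* Illustrative sketch of a generated src/mca/pml/pml_direct_call.h;
     * the real contents come from the template processed above. */
    #ifndef MCA_PML_DIRECT_CALL_H
    #define MCA_PML_DIRECT_CALL_H

    #include "ompi_config.h"  /* MCA_pml_DIRECT_CALL{,_COMPONENT} are
                                 AC_DEFINEd there by OMPI_SETUP_DIRECT_CALL */

    #if MCA_pml_DIRECT_CALL
    /* Substituted by config.status from @MCA_pml_DIRECT_CALL_HEADER@; the
     * exact path is a placeholder for whatever the component declared. */
    #include "mca/pml/teg/src/pml_teg.h"
    #endif

    #endif /* MCA_PML_DIRECT_CALL_H */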


@ -31,7 +31,7 @@
#include "attribute/attribute.h" #include "attribute/attribute.h"
#include "communicator/communicator.h" #include "communicator/communicator.h"
#include "mca/pml/pml.h"
#include "mca/ptl/base/ptl_base_comm.h" #include "mca/ptl/base/ptl_base_comm.h"
@ -185,7 +185,7 @@ int ompi_comm_set ( ompi_communicator_t *newcomm,
} }
/* Initialize the PML stuff in the newcomm */ /* Initialize the PML stuff in the newcomm */
if ( OMPI_ERROR == mca_pml.pml_add_comm(newcomm) ) { if ( OMPI_ERROR == MCA_PML_CALL(add_comm(newcomm)) ) {
OBJ_RELEASE(newcomm); OBJ_RELEASE(newcomm);
return OMPI_ERROR; return OMPI_ERROR;
} }
@ -630,15 +630,15 @@ static int ompi_comm_allgather_emulate_intra( void *inbuf, int incount,
} }
for ( i=0; i<rsize; i++) { for ( i=0; i<rsize; i++) {
rc = mca_pml.pml_irecv ( &tmpbuf[outcount*i], outcount, outtype, i, rc = MCA_PML_CALL(irecv( &tmpbuf[outcount*i], outcount, outtype, i,
OMPI_COMM_ALLGATHER_TAG, comm, &req[i] ); OMPI_COMM_ALLGATHER_TAG, comm, &req[i] ));
if ( OMPI_SUCCESS != rc ) { if ( OMPI_SUCCESS != rc ) {
goto exit; goto exit;
} }
} }
} }
rc = mca_pml.pml_isend ( inbuf, incount, intype, 0, OMPI_COMM_ALLGATHER_TAG, rc = MCA_PML_CALL(isend( inbuf, incount, intype, 0, OMPI_COMM_ALLGATHER_TAG,
MCA_PML_BASE_SEND_STANDARD, comm, &sendreq ); MCA_PML_BASE_SEND_STANDARD, comm, &sendreq ));
if ( OMPI_SUCCESS != rc ) { if ( OMPI_SUCCESS != rc ) {
goto exit; goto exit;
} }
@ -656,17 +656,17 @@ static int ompi_comm_allgather_emulate_intra( void *inbuf, int incount,
} }
/* Step 2: the inter-bcast step */ /* Step 2: the inter-bcast step */
rc = mca_pml.pml_irecv (outbuf, rsize*outcount, outtype, 0, rc = MCA_PML_CALL(irecv (outbuf, rsize*outcount, outtype, 0,
OMPI_COMM_ALLGATHER_TAG, comm, &sendreq); OMPI_COMM_ALLGATHER_TAG, comm, &sendreq));
if ( OMPI_SUCCESS != rc ) { if ( OMPI_SUCCESS != rc ) {
goto exit; goto exit;
} }
if ( 0 == rank ) { if ( 0 == rank ) {
for ( i=0; i < rsize; i++ ){ for ( i=0; i < rsize; i++ ){
rc = mca_pml.pml_send (tmpbuf, rsize*outcount, outtype, i, rc = MCA_PML_CALL(send (tmpbuf, rsize*outcount, outtype, i,
OMPI_COMM_ALLGATHER_TAG, OMPI_COMM_ALLGATHER_TAG,
MCA_PML_BASE_SEND_STANDARD, comm ); MCA_PML_BASE_SEND_STANDARD, comm));
if ( OMPI_SUCCESS != rc ) { if ( OMPI_SUCCESS != rc ) {
goto exit; goto exit;
} }
@ -781,13 +781,13 @@ ompi_proc_t **ompi_comm_get_rprocs ( ompi_communicator_t *local_comm,
} }
/* send the remote_leader the length of the buffer */ /* send the remote_leader the length of the buffer */
rc = mca_pml.pml_irecv (&rlen, 1, MPI_INT, remote_leader, tag, rc = MCA_PML_CALL(irecv (&rlen, 1, MPI_INT, remote_leader, tag,
bridge_comm, &req ); bridge_comm, &req ));
if ( OMPI_SUCCESS != rc ) { if ( OMPI_SUCCESS != rc ) {
goto err_exit; goto err_exit;
} }
rc = mca_pml.pml_send (&len, 1, MPI_INT, remote_leader, tag, rc = MCA_PML_CALL(send (&len, 1, MPI_INT, remote_leader, tag,
MCA_PML_BASE_SEND_STANDARD, bridge_comm ); MCA_PML_BASE_SEND_STANDARD, bridge_comm ));
if ( OMPI_SUCCESS != rc ) { if ( OMPI_SUCCESS != rc ) {
goto err_exit; goto err_exit;
} }
@ -823,13 +823,13 @@ ompi_proc_t **ompi_comm_get_rprocs ( ompi_communicator_t *local_comm,
if ( local_rank == local_leader ) { if ( local_rank == local_leader ) {
/* local leader exchange name lists */ /* local leader exchange name lists */
rc = mca_pml.pml_irecv (recvbuf, rlen, MPI_BYTE, remote_leader, tag, rc = MCA_PML_CALL(irecv (recvbuf, rlen, MPI_BYTE, remote_leader, tag,
bridge_comm, &req ); bridge_comm, &req ));
if ( OMPI_SUCCESS != rc ) { if ( OMPI_SUCCESS != rc ) {
goto err_exit; goto err_exit;
} }
rc = mca_pml.pml_send (sendbuf, len, MPI_BYTE, remote_leader, tag, rc = MCA_PML_CALL(send(sendbuf, len, MPI_BYTE, remote_leader, tag,
MCA_PML_BASE_SEND_STANDARD, bridge_comm ); MCA_PML_BASE_SEND_STANDARD, bridge_comm ));
if ( OMPI_SUCCESS != rc ) { if ( OMPI_SUCCESS != rc ) {
goto err_exit; goto err_exit;
} }
@ -1326,7 +1326,7 @@ static int ompi_comm_fill_rest (ompi_communicator_t *comm,
comm->c_cube_dim = ompi_cube_dim(comm->c_local_group->grp_proc_count); comm->c_cube_dim = ompi_cube_dim(comm->c_local_group->grp_proc_count);
/* initialize PML stuff on the communicator */ /* initialize PML stuff on the communicator */
if (OMPI_SUCCESS != (ret = mca_pml.pml_add_comm(comm))) { if (OMPI_SUCCESS != (ret = MCA_PML_CALL(add_comm(comm)))) {
/* some error has happened */ /* some error has happened */
return ret; return ret;
} }


@ -437,15 +437,15 @@ static int ompi_comm_allreduce_inter ( int *inbuf, int *outbuf,
/* local leader exchange their data and determine the overall result /* local leader exchange their data and determine the overall result
for both groups */ for both groups */
rc = mca_pml.pml_irecv (outbuf, count, MPI_INT, 0, rc = MCA_PML_CALL(irecv (outbuf, count, MPI_INT, 0,
OMPI_COMM_ALLREDUCE_TAG OMPI_COMM_ALLREDUCE_TAG
, intercomm, &req ); , intercomm, &req));
if ( OMPI_SUCCESS != rc ) { if ( OMPI_SUCCESS != rc ) {
goto exit; goto exit;
} }
rc = mca_pml.pml_send (tmpbuf, count, MPI_INT, 0, rc = MCA_PML_CALL(send (tmpbuf, count, MPI_INT, 0,
OMPI_COMM_ALLREDUCE_TAG, OMPI_COMM_ALLREDUCE_TAG,
MCA_PML_BASE_SEND_STANDARD, intercomm ); MCA_PML_BASE_SEND_STANDARD, intercomm));
if ( OMPI_SUCCESS != rc ) { if ( OMPI_SUCCESS != rc ) {
goto exit; goto exit;
} }
@ -542,15 +542,15 @@ static int ompi_comm_allreduce_intra_bridge (int *inbuf, int *outbuf,
if (local_rank == local_leader ) { if (local_rank == local_leader ) {
MPI_Request req; MPI_Request req;
rc = mca_pml.pml_irecv ( outbuf, count, MPI_INT, remote_leader, rc = MCA_PML_CALL(irecv ( outbuf, count, MPI_INT, remote_leader,
OMPI_COMM_ALLREDUCE_TAG, OMPI_COMM_ALLREDUCE_TAG,
bcomm, &req ); bcomm, &req));
if ( OMPI_SUCCESS != rc ) { if ( OMPI_SUCCESS != rc ) {
goto exit; goto exit;
} }
rc = mca_pml.pml_send (tmpbuf, count, MPI_INT, remote_leader, rc = MCA_PML_CALL(send (tmpbuf, count, MPI_INT, remote_leader,
OMPI_COMM_ALLREDUCE_TAG, OMPI_COMM_ALLREDUCE_TAG,
MCA_PML_BASE_SEND_STANDARD, bcomm ); MCA_PML_BASE_SEND_STANDARD, bcomm));
if ( OMPI_SUCCESS != rc ) { if ( OMPI_SUCCESS != rc ) {
goto exit; goto exit;
} }


@ -281,7 +281,7 @@ orte_process_name_t *ompi_comm_get_rport (orte_process_name_t *port, int send_fi
} }
if (isnew) { if (isnew) {
mca_pml.pml_add_procs(&rproc, 1); MCA_PML_CALL(add_procs(&rproc, 1));
} }
return rport; return rport;
@ -588,9 +588,9 @@ ompi_comm_disconnect_obj *ompi_comm_disconnect_init ( ompi_communicator_t *comm)
/* initiate all isend_irecvs. We use a dummy buffer stored on /* initiate all isend_irecvs. We use a dummy buffer stored on
the object, since we are sending zero size messages anyway. */ the object, since we are sending zero size messages anyway. */
for ( i=0; i < obj->size; i++ ) { for ( i=0; i < obj->size; i++ ) {
ret = mca_pml.pml_irecv (&(obj->buf), 0, MPI_INT, i, ret = MCA_PML_CALL(irecv (&(obj->buf), 0, MPI_INT, i,
OMPI_COMM_BARRIER_TAG, comm, OMPI_COMM_BARRIER_TAG, comm,
&(obj->reqs[2*i])); &(obj->reqs[2*i])));
if ( OMPI_SUCCESS != ret ) { if ( OMPI_SUCCESS != ret ) {
free (obj->reqs); free (obj->reqs);
@ -598,10 +598,10 @@ ompi_comm_disconnect_obj *ompi_comm_disconnect_init ( ompi_communicator_t *comm)
return NULL; return NULL;
} }
ret = mca_pml.pml_isend (&(obj->buf), 0, MPI_INT, i, ret = MCA_PML_CALL(isend (&(obj->buf), 0, MPI_INT, i,
OMPI_COMM_BARRIER_TAG, OMPI_COMM_BARRIER_TAG,
MCA_PML_BASE_SEND_STANDARD, MCA_PML_BASE_SEND_STANDARD,
comm, &(obj->reqs[2*i+1])); comm, &(obj->reqs[2*i+1])));
if ( OMPI_SUCCESS != ret ) { if ( OMPI_SUCCESS != ret ) {
free (obj->reqs); free (obj->reqs);


@ -84,7 +84,7 @@ int ompi_comm_init(void)
ompi_mpi_comm_world.c_cube_dim = ompi_cube_dim(size); ompi_mpi_comm_world.c_cube_dim = ompi_cube_dim(size);
ompi_mpi_comm_world.error_handler = &ompi_mpi_errors_are_fatal; ompi_mpi_comm_world.error_handler = &ompi_mpi_errors_are_fatal;
OBJ_RETAIN( &ompi_mpi_errors_are_fatal ); OBJ_RETAIN( &ompi_mpi_errors_are_fatal );
mca_pml.pml_add_comm(&ompi_mpi_comm_world); MCA_PML_CALL(add_comm(&ompi_mpi_comm_world));
OMPI_COMM_SET_PML_ADDED(&ompi_mpi_comm_world); OMPI_COMM_SET_PML_ADDED(&ompi_mpi_comm_world);
ompi_pointer_array_set_item (&ompi_mpi_communicators, 0, &ompi_mpi_comm_world); ompi_pointer_array_set_item (&ompi_mpi_communicators, 0, &ompi_mpi_comm_world);
@ -114,7 +114,7 @@ int ompi_comm_init(void)
ompi_mpi_comm_self.c_remote_group = group; ompi_mpi_comm_self.c_remote_group = group;
ompi_mpi_comm_self.error_handler = &ompi_mpi_errors_are_fatal; ompi_mpi_comm_self.error_handler = &ompi_mpi_errors_are_fatal;
OBJ_RETAIN( &ompi_mpi_errors_are_fatal ); OBJ_RETAIN( &ompi_mpi_errors_are_fatal );
mca_pml.pml_add_comm(&ompi_mpi_comm_self); MCA_PML_CALL(add_comm(&ompi_mpi_comm_self));
OMPI_COMM_SET_PML_ADDED(&ompi_mpi_comm_self); OMPI_COMM_SET_PML_ADDED(&ompi_mpi_comm_self);
ompi_pointer_array_set_item (&ompi_mpi_communicators, 1, &ompi_mpi_comm_self); ompi_pointer_array_set_item (&ompi_mpi_communicators, 1, &ompi_mpi_comm_self);
@ -330,7 +330,7 @@ static void ompi_comm_destruct(ompi_communicator_t* comm)
comm->c_topo_component = NULL; comm->c_topo_component = NULL;
/* Tell the PML that this communicator is done. /* Tell the PML that this communicator is done.
mca_pml.pml_add_comm() was called explicitly in MCA_PML_CALL(add_comm()) was called explicitly in
ompi_comm_init() when setting up COMM_WORLD and COMM_SELF; it's ompi_comm_init() when setting up COMM_WORLD and COMM_SELF; it's
called in ompi_comm_set() for all others. This means that all called in ompi_comm_set() for all others. This means that all
communicators must be destroyed before the PML shuts down. communicators must be destroyed before the PML shuts down.
@ -345,7 +345,7 @@ static void ompi_comm_destruct(ompi_communicator_t* comm)
never pml_add_com'ed. */ never pml_add_com'ed. */
if ( MPI_COMM_NULL != comm && OMPI_COMM_IS_PML_ADDED(comm) ) { if ( MPI_COMM_NULL != comm && OMPI_COMM_IS_PML_ADDED(comm) ) {
mca_pml.pml_del_comm (comm); MCA_PML_CALL(del_comm (comm));
} }


@ -48,7 +48,7 @@ int mca_allocator_base_open(void)
return mca_base_components_open("allocator", 0, return mca_base_components_open("allocator", 0,
mca_allocator_base_static_components, mca_allocator_base_static_components,
&mca_allocator_base_components); &mca_allocator_base_components, true);
} }
/** /**


@ -118,7 +118,8 @@ OMPI_DECLSPEC int mca_base_component_compare(const mca_base_component_t *a,
OMPI_DECLSPEC int mca_base_component_find(const char *directory, const char *type, OMPI_DECLSPEC int mca_base_component_find(const char *directory, const char *type,
const mca_base_component_t *static_components[], const mca_base_component_t *static_components[],
ompi_list_t *found_components); ompi_list_t *found_components,
bool open_dso_components);
/* mca_base_component_register.c */ /* mca_base_component_register.c */
@ -137,7 +138,8 @@ OMPI_DECLSPEC void mca_base_component_repository_finalize(void);
OMPI_DECLSPEC int mca_base_components_open(const char *type_name, int output_id, OMPI_DECLSPEC int mca_base_components_open(const char *type_name, int output_id,
const mca_base_component_t **static_components, const mca_base_component_t **static_components,
ompi_list_t *components_available); ompi_list_t *components_available,
bool open_dso_components);
/* mca_base_components_close.c */ /* mca_base_components_close.c */


@ -109,7 +109,8 @@ static ompi_list_t found_files;
*/ */
int mca_base_component_find(const char *directory, const char *type, int mca_base_component_find(const char *directory, const char *type,
const mca_base_component_t *static_components[], const mca_base_component_t *static_components[],
ompi_list_t *found_components) ompi_list_t *found_components,
bool open_dso_components)
{ {
int i; int i;
mca_base_component_list_item_t *cli; mca_base_component_list_item_t *cli;
@ -127,8 +128,13 @@ int mca_base_component_find(const char *directory, const char *type,
} }
/* Find any available dynamic components in the specified directory */ /* Find any available dynamic components in the specified directory */
if (open_dso_components) {
find_dyn_components(directory, type, NULL, found_components); find_dyn_components(directory, type, NULL, found_components);
} else {
ompi_output_verbose(40, 0,
"mca: base: component_find: dso loading for %s MCA components disabled",
type);
}
/* All done */ /* All done */


@ -58,7 +58,8 @@ static int parse_requested(int mca_param, char ***requested_component_names);
*/ */
int mca_base_components_open(const char *type_name, int output_id, int mca_base_components_open(const char *type_name, int output_id,
const mca_base_component_t **static_components, const mca_base_component_t **static_components,
ompi_list_t *components_available) ompi_list_t *components_available,
bool open_dso_components)
{ {
int ret, param; int ret, param;
ompi_list_item_t *item; ompi_list_item_t *item;
@ -92,7 +93,7 @@ int mca_base_components_open(const char *type_name, int output_id,
if (OMPI_SUCCESS != if (OMPI_SUCCESS !=
mca_base_component_find(NULL, type_name, static_components, mca_base_component_find(NULL, type_name, static_components,
&components_found)) { &components_found, open_dso_components)) {
return OMPI_ERROR; return OMPI_ERROR;
} }
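Both mca_base_component_find() and mca_base_components_open() now take a
trailing open_dso_components flag, so a framework can restrict itself to
static components (for example, when it is direct-called) instead of always
scanning for DSOs.  A minimal hedged sketch of a caller, modeled on the
allocator change earlier in this commit; the framework name and variables
here are placeholders:

    /* Placeholder framework open function showing the new argument.
     * Passing true keeps the old behavior (also look for DSO components);
     * a direct-called framework could pass false to skip DSO discovery.
     * List construction and error handling are omitted for brevity. */
    #include "ompi_config.h"
    #include "mca/base/base.h"

    static const mca_base_component_t *example_static_components[] = { NULL };
    static ompi_list_t example_components_available;

    int mca_example_base_open(void)
    {
        return mca_base_components_open("example", 0,
                                        example_static_components,
                                        &example_components_available,
                                        true /* open_dso_components */);
    }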


@ -375,7 +375,7 @@ ompi_output(0, "[%d,%d,%d] mca_base_modex_registry_callback: %s-%s-%d-%d receive
} /* if value[i]->cnt > 0 */ } /* if value[i]->cnt > 0 */
if(NULL != new_procs) { if(NULL != new_procs) {
mca_pml.pml_add_procs(new_procs, new_proc_count); MCA_PML_CALL(add_procs(new_procs, new_proc_count));
free(new_procs); free(new_procs);
} }
} }


@ -68,7 +68,7 @@ int mca_coll_base_open(void)
if (OMPI_SUCCESS != if (OMPI_SUCCESS !=
mca_base_components_open("coll", mca_coll_base_output, mca_base_components_open("coll", mca_coll_base_output,
mca_coll_base_static_components, mca_coll_base_static_components,
&mca_coll_base_components_opened)) { &mca_coll_base_components_opened, true)) {
return OMPI_ERROR; return OMPI_ERROR;
} }
mca_coll_base_components_opened_valid = true; mca_coll_base_components_opened_valid = true;


@ -21,6 +21,7 @@
#include "include/constants.h" #include "include/constants.h"
#include "communicator/communicator.h" #include "communicator/communicator.h"
#include "mca/coll/coll.h" #include "mca/coll/coll.h"
#include "mca/pml/pml.h"
#include "mca/coll/base/coll_tags.h" #include "mca/coll/base/coll_tags.h"
#include "coll_basic.h" #include "coll_basic.h"
@ -92,9 +93,9 @@ int mca_coll_basic_allgather_inter(void *sbuf, int scount,
/* Step one: gather operations: */ /* Step one: gather operations: */
if ( rank != root ) { if ( rank != root ) {
/* send your data to root */ /* send your data to root */
err = mca_pml.pml_send(sbuf, scount, sdtype, root, err = MCA_PML_CALL(send(sbuf, scount, sdtype, root,
MCA_COLL_BASE_TAG_ALLGATHER, MCA_COLL_BASE_TAG_ALLGATHER,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
if ( OMPI_SUCCESS != err ) { if ( OMPI_SUCCESS != err ) {
return err; return err;
} }
@ -111,17 +112,17 @@ int mca_coll_basic_allgather_inter(void *sbuf, int scount,
} }
/* Do a send-recv between the two root procs. to avoid deadlock */ /* Do a send-recv between the two root procs. to avoid deadlock */
err = mca_pml.pml_isend (sbuf, scount, sdtype, 0, err = MCA_PML_CALL(isend (sbuf, scount, sdtype, 0,
MCA_COLL_BASE_TAG_ALLGATHER, MCA_COLL_BASE_TAG_ALLGATHER,
MCA_PML_BASE_SEND_STANDARD, MCA_PML_BASE_SEND_STANDARD,
comm, &reqs[rsize] ); comm, &reqs[rsize] ));
if ( OMPI_SUCCESS != err ) { if ( OMPI_SUCCESS != err ) {
return err; return err;
} }
err = mca_pml.pml_irecv(rbuf, rcount, rdtype, 0, err = MCA_PML_CALL(irecv(rbuf, rcount, rdtype, 0,
MCA_COLL_BASE_TAG_ALLGATHER, comm, MCA_COLL_BASE_TAG_ALLGATHER, comm,
&reqs[0]); &reqs[0]));
if (OMPI_SUCCESS != err) { if (OMPI_SUCCESS != err) {
return err; return err;
} }
@ -129,9 +130,9 @@ int mca_coll_basic_allgather_inter(void *sbuf, int scount,
incr = rextent * rcount; incr = rextent * rcount;
ptmp = (char *) rbuf + incr; ptmp = (char *) rbuf + incr;
for (i = 1; i < rsize; ++i, ptmp += incr) { for (i = 1; i < rsize; ++i, ptmp += incr) {
err = mca_pml.pml_irecv(ptmp, rcount, rdtype, i, err = MCA_PML_CALL(irecv(ptmp, rcount, rdtype, i,
MCA_COLL_BASE_TAG_ALLGATHER, MCA_COLL_BASE_TAG_ALLGATHER,
comm, &reqs[i]); comm, &reqs[i]));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
return err; return err;
} }
@ -148,17 +149,17 @@ int mca_coll_basic_allgather_inter(void *sbuf, int scount,
return err; return err;
} }
err = mca_pml.pml_isend (rbuf, rsize*rcount, rdtype, 0, err = MCA_PML_CALL(isend (rbuf, rsize*rcount, rdtype, 0,
MCA_COLL_BASE_TAG_ALLGATHER, MCA_COLL_BASE_TAG_ALLGATHER,
MCA_PML_BASE_SEND_STANDARD, MCA_PML_BASE_SEND_STANDARD,
comm, &req ); comm, &req ));
if ( OMPI_SUCCESS != err ) { if ( OMPI_SUCCESS != err ) {
goto exit; goto exit;
} }
err = mca_pml.pml_recv(tmpbuf, size *scount, sdtype, 0, err = MCA_PML_CALL(recv(tmpbuf, size *scount, sdtype, 0,
MCA_COLL_BASE_TAG_ALLGATHER, comm, MCA_COLL_BASE_TAG_ALLGATHER, comm,
MPI_STATUS_IGNORE); MPI_STATUS_IGNORE));
if (OMPI_SUCCESS != err) { if (OMPI_SUCCESS != err) {
goto exit; goto exit;
} }
@ -176,9 +177,9 @@ int mca_coll_basic_allgather_inter(void *sbuf, int scount,
*/ */
if ( rank != root ) { if ( rank != root ) {
/* post the recv */ /* post the recv */
err = mca_pml.pml_recv (rbuf, size*rcount, rdtype, 0, err = MCA_PML_CALL(recv (rbuf, size*rcount, rdtype, 0,
MCA_COLL_BASE_TAG_ALLGATHER, comm, MCA_COLL_BASE_TAG_ALLGATHER, comm,
MPI_STATUS_IGNORE); MPI_STATUS_IGNORE));
if ( OMPI_SUCCESS != err ) { if ( OMPI_SUCCESS != err ) {
goto exit; goto exit;
} }
@ -187,10 +188,10 @@ int mca_coll_basic_allgather_inter(void *sbuf, int scount,
/* Send the data to every other process in the remote group /* Send the data to every other process in the remote group
except to rank zero. which has it already. */ except to rank zero. which has it already. */
for ( i=1; i<rsize; i++ ) { for ( i=1; i<rsize; i++ ) {
err = mca_pml.pml_isend(tmpbuf, size*scount, sdtype, i, err = MCA_PML_CALL(isend(tmpbuf, size*scount, sdtype, i,
MCA_COLL_BASE_TAG_ALLGATHER, MCA_COLL_BASE_TAG_ALLGATHER,
MCA_PML_BASE_SEND_STANDARD, MCA_PML_BASE_SEND_STANDARD,
comm, &reqs[i-1] ); comm, &reqs[i-1] ));
if ( OMPI_SUCCESS != err ) { if ( OMPI_SUCCESS != err ) {
goto exit; goto exit;
} }


@ -25,6 +25,7 @@
#include "mca/coll/coll.h" #include "mca/coll/coll.h"
#include "mca/coll/base/coll_tags.h" #include "mca/coll/base/coll_tags.h"
#include "coll_basic.h" #include "coll_basic.h"
#include "mca/pml/pml.h"
/* /*
@ -98,17 +99,17 @@ int mca_coll_basic_allreduce_inter(void *sbuf, void *rbuf, int count,
pml_buffer = tmpbuf - lb; pml_buffer = tmpbuf - lb;
/* Do a send-recv between the two root procs. to avoid deadlock */ /* Do a send-recv between the two root procs. to avoid deadlock */
err = mca_pml.pml_irecv(rbuf, count, dtype, 0, err = MCA_PML_CALL(irecv(rbuf, count, dtype, 0,
MCA_COLL_BASE_TAG_ALLREDUCE, comm, MCA_COLL_BASE_TAG_ALLREDUCE, comm,
&(req[0])); &(req[0])));
if (OMPI_SUCCESS != err) { if (OMPI_SUCCESS != err) {
goto exit; goto exit;
} }
err = mca_pml.pml_isend (sbuf, count, dtype, 0, err = MCA_PML_CALL(isend (sbuf, count, dtype, 0,
MCA_COLL_BASE_TAG_ALLREDUCE, MCA_COLL_BASE_TAG_ALLREDUCE,
MCA_PML_BASE_SEND_STANDARD, MCA_PML_BASE_SEND_STANDARD,
comm, &(req[1]) ); comm, &(req[1]) ));
if ( OMPI_SUCCESS != err ) { if ( OMPI_SUCCESS != err ) {
goto exit; goto exit;
} }
@ -121,9 +122,9 @@ int mca_coll_basic_allreduce_inter(void *sbuf, void *rbuf, int count,
/* Loop receiving and calling reduction function (C or Fortran). */ /* Loop receiving and calling reduction function (C or Fortran). */
for (i = 1; i < rsize; i++) { for (i = 1; i < rsize; i++) {
err = mca_pml.pml_recv(pml_buffer, count, dtype, i, err = MCA_PML_CALL(recv(pml_buffer, count, dtype, i,
MCA_COLL_BASE_TAG_ALLREDUCE, comm, MCA_COLL_BASE_TAG_ALLREDUCE, comm,
MPI_STATUS_IGNORE); MPI_STATUS_IGNORE));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
goto exit; goto exit;
} }
@ -134,9 +135,9 @@ int mca_coll_basic_allreduce_inter(void *sbuf, void *rbuf, int count,
} }
else { else {
/* If not root, send data to the root. */ /* If not root, send data to the root. */
err = mca_pml.pml_send(sbuf, count, dtype, root, err = MCA_PML_CALL(send(sbuf, count, dtype, root,
MCA_COLL_BASE_TAG_ALLREDUCE, MCA_COLL_BASE_TAG_ALLREDUCE,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
if ( OMPI_SUCCESS != err ) { if ( OMPI_SUCCESS != err ) {
goto exit; goto exit;
} }
@ -150,17 +151,17 @@ int mca_coll_basic_allreduce_inter(void *sbuf, void *rbuf, int count,
/***************************************************************************/ /***************************************************************************/
if ( rank == root ) { if ( rank == root ) {
/* sendrecv between the two roots */ /* sendrecv between the two roots */
err = mca_pml.pml_irecv (pml_buffer, count, dtype, 0, err = MCA_PML_CALL(irecv (pml_buffer, count, dtype, 0,
MCA_COLL_BASE_TAG_ALLREDUCE, MCA_COLL_BASE_TAG_ALLREDUCE,
comm, &(req[1])); comm, &(req[1])));
if ( OMPI_SUCCESS != err ) { if ( OMPI_SUCCESS != err ) {
goto exit; goto exit;
} }
err = mca_pml.pml_isend (rbuf, count, dtype, 0, err = MCA_PML_CALL(isend (rbuf, count, dtype, 0,
MCA_COLL_BASE_TAG_ALLREDUCE, MCA_COLL_BASE_TAG_ALLREDUCE,
MCA_PML_BASE_SEND_STANDARD, comm, MCA_PML_BASE_SEND_STANDARD, comm,
&(req[0])); &(req[0])));
if ( OMPI_SUCCESS != err ) { if ( OMPI_SUCCESS != err ) {
goto exit; goto exit;
} }
@ -176,10 +177,10 @@ int mca_coll_basic_allreduce_inter(void *sbuf, void *rbuf, int count,
*/ */
if (rsize > 1) { if (rsize > 1) {
for ( i=1; i<rsize; i++ ) { for ( i=1; i<rsize; i++ ) {
err = mca_pml.pml_isend (pml_buffer, count, dtype,i, err = MCA_PML_CALL(isend (pml_buffer, count, dtype,i,
MCA_COLL_BASE_TAG_ALLREDUCE, MCA_COLL_BASE_TAG_ALLREDUCE,
MCA_PML_BASE_SEND_STANDARD, comm, MCA_PML_BASE_SEND_STANDARD, comm,
&reqs[i - 1]); &reqs[i - 1]));
if ( OMPI_SUCCESS != err ) { if ( OMPI_SUCCESS != err ) {
goto exit; goto exit;
} }
@ -192,9 +193,9 @@ int mca_coll_basic_allreduce_inter(void *sbuf, void *rbuf, int count,
} }
} }
else { else {
err = mca_pml.pml_recv (rbuf, count, dtype, root, err = MCA_PML_CALL(recv (rbuf, count, dtype, root,
MCA_COLL_BASE_TAG_ALLREDUCE, MCA_COLL_BASE_TAG_ALLREDUCE,
comm, MPI_STATUS_IGNORE); comm, MPI_STATUS_IGNORE));
} }
exit: exit:


@ -101,8 +101,8 @@ int mca_coll_basic_alltoall_intra(void *sbuf, int scount,
for (i = (rank + 1) % size; i != rank; for (i = (rank + 1) % size; i != rank;
i = (i + 1) % size, ++rreq) { i = (i + 1) % size, ++rreq) {
err = mca_pml.pml_irecv_init(prcv + (i * rcvinc), rcount, rdtype, i, err = MCA_PML_CALL(irecv_init(prcv + (i * rcvinc), rcount, rdtype, i,
MCA_COLL_BASE_TAG_ALLTOALL, comm, rreq); MCA_COLL_BASE_TAG_ALLTOALL, comm, rreq));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
mca_coll_basic_free_reqs(req, rreq - req); mca_coll_basic_free_reqs(req, rreq - req);
return err; return err;
@ -113,9 +113,9 @@ int mca_coll_basic_alltoall_intra(void *sbuf, int scount,
for (i = (rank + 1) % size; i != rank; for (i = (rank + 1) % size; i != rank;
i = (i + 1) % size, ++sreq) { i = (i + 1) % size, ++sreq) {
err = mca_pml.pml_isend_init(psnd + (i * sndinc), scount, sdtype, i, err = MCA_PML_CALL(isend_init(psnd + (i * sndinc), scount, sdtype, i,
MCA_COLL_BASE_TAG_ALLTOALL, MCA_COLL_BASE_TAG_ALLTOALL,
MCA_PML_BASE_SEND_STANDARD, comm, sreq); MCA_PML_BASE_SEND_STANDARD, comm, sreq));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
mca_coll_basic_free_reqs(req, sreq - req); mca_coll_basic_free_reqs(req, sreq - req);
return err; return err;
@ -124,7 +124,7 @@ int mca_coll_basic_alltoall_intra(void *sbuf, int scount,
/* Start your engines. This will never return an error. */ /* Start your engines. This will never return an error. */
mca_pml.pml_start(nreqs, req); MCA_PML_CALL(start(nreqs, req));
/* Wait for them all. If there's an error, note that we don't /* Wait for them all. If there's an error, note that we don't
care what the error was -- just that there *was* an error. The care what the error was -- just that there *was* an error. The
@ -200,8 +200,8 @@ int mca_coll_basic_alltoall_inter(void *sbuf, int scount,
/* Post all receives first */ /* Post all receives first */
for (i = 0; i < size; i++, ++rreq) { for (i = 0; i < size; i++, ++rreq) {
err = mca_pml.pml_irecv(prcv + (i * rcvinc), rcount, rdtype, i, err = MCA_PML_CALL(irecv(prcv + (i * rcvinc), rcount, rdtype, i,
MCA_COLL_BASE_TAG_ALLTOALL, comm, rreq); MCA_COLL_BASE_TAG_ALLTOALL, comm, rreq));
if (OMPI_SUCCESS != err) { if (OMPI_SUCCESS != err) {
return err; return err;
} }
@ -209,9 +209,9 @@ int mca_coll_basic_alltoall_inter(void *sbuf, int scount,
/* Now post all sends */ /* Now post all sends */
for (i = 0; i < size; i++, ++sreq) { for (i = 0; i < size; i++, ++sreq) {
err = mca_pml.pml_isend(psnd + (i * sndinc), scount, sdtype, i, err = MCA_PML_CALL(isend(psnd + (i * sndinc), scount, sdtype, i,
MCA_COLL_BASE_TAG_ALLTOALL, MCA_COLL_BASE_TAG_ALLTOALL,
MCA_PML_BASE_SEND_STANDARD, comm, sreq); MCA_PML_BASE_SEND_STANDARD, comm, sreq));
if (OMPI_SUCCESS != err) { if (OMPI_SUCCESS != err) {
return err; return err;
} }


@ -91,8 +91,8 @@ mca_coll_basic_alltoallv_intra(void *sbuf, int *scounts, int *sdisps,
} }
prcv = ((char *) rbuf) + (rdisps[i] * rcvextent); prcv = ((char *) rbuf) + (rdisps[i] * rcvextent);
err = mca_pml.pml_irecv_init(prcv, rcounts[i], rdtype, err = MCA_PML_CALL(irecv_init(prcv, rcounts[i], rdtype,
i, MCA_COLL_BASE_TAG_ALLTOALLV, comm, preq++); i, MCA_COLL_BASE_TAG_ALLTOALLV, comm, preq++));
++nreqs; ++nreqs;
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
mca_coll_basic_free_reqs(comm->c_coll_basic_data->mccb_reqs, nreqs); mca_coll_basic_free_reqs(comm->c_coll_basic_data->mccb_reqs, nreqs);
@ -108,9 +108,9 @@ mca_coll_basic_alltoallv_intra(void *sbuf, int *scounts, int *sdisps,
} }
psnd = ((char *) sbuf) + (sdisps[i] * sndextent); psnd = ((char *) sbuf) + (sdisps[i] * sndextent);
err = mca_pml.pml_isend_init(psnd, scounts[i], sdtype, err = MCA_PML_CALL(isend_init(psnd, scounts[i], sdtype,
i, MCA_COLL_BASE_TAG_ALLTOALLV, i, MCA_COLL_BASE_TAG_ALLTOALLV,
MCA_PML_BASE_SEND_STANDARD, comm, preq++); MCA_PML_BASE_SEND_STANDARD, comm, preq++));
++nreqs; ++nreqs;
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
mca_coll_basic_free_reqs(comm->c_coll_basic_data->mccb_reqs, nreqs); mca_coll_basic_free_reqs(comm->c_coll_basic_data->mccb_reqs, nreqs);
@ -120,7 +120,7 @@ mca_coll_basic_alltoallv_intra(void *sbuf, int *scounts, int *sdisps,
/* Start your engines. This will never return an error. */ /* Start your engines. This will never return an error. */
mca_pml.pml_start(nreqs, comm->c_coll_basic_data->mccb_reqs); MCA_PML_CALL(start(nreqs, comm->c_coll_basic_data->mccb_reqs));
/* Wait for them all. If there's an error, note that we don't care /* Wait for them all. If there's an error, note that we don't care
what the error was -- just that there *was* an error. The PML what the error was -- just that there *was* an error. The PML
@ -184,8 +184,8 @@ mca_coll_basic_alltoallv_inter(void *sbuf, int *scounts, int *sdisps,
for (i = 0; i < rsize; ++i) { for (i = 0; i < rsize; ++i) {
prcv = ((char *) rbuf) + (rdisps[i] * rcvextent); prcv = ((char *) rbuf) + (rdisps[i] * rcvextent);
if ( rcounts[i] > 0 ){ if ( rcounts[i] > 0 ){
err = mca_pml.pml_irecv(prcv, rcounts[i], rdtype, err = MCA_PML_CALL(irecv(prcv, rcounts[i], rdtype,
i, MCA_COLL_BASE_TAG_ALLTOALLV, comm, &preq[i]); i, MCA_COLL_BASE_TAG_ALLTOALLV, comm, &preq[i]));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
return err; return err;
} }
@ -199,9 +199,9 @@ mca_coll_basic_alltoallv_inter(void *sbuf, int *scounts, int *sdisps,
for (i = 0; i < rsize; ++i) { for (i = 0; i < rsize; ++i) {
psnd = ((char *) sbuf) + (sdisps[i] * sndextent); psnd = ((char *) sbuf) + (sdisps[i] * sndextent);
if ( scounts[i] > 0 ) { if ( scounts[i] > 0 ) {
err = mca_pml.pml_isend(psnd, scounts[i], sdtype, err = MCA_PML_CALL(isend(psnd, scounts[i], sdtype,
i, MCA_COLL_BASE_TAG_ALLTOALLV, i, MCA_COLL_BASE_TAG_ALLTOALLV,
MCA_PML_BASE_SEND_STANDARD, comm, &preq[rsize+i]); MCA_PML_BASE_SEND_STANDARD, comm, &preq[rsize+i]));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
return err; return err;
} }


@ -83,8 +83,8 @@ int mca_coll_basic_alltoallw_intra(void *sbuf, int *scounts, int *sdisps,
continue; continue;
prcv = ((char *) rbuf) + rdisps[i]; prcv = ((char *) rbuf) + rdisps[i];
err = mca_pml.pml_irecv_init(prcv, rcounts[i], rdtypes[i], err = MCA_PML_CALL(irecv_init(prcv, rcounts[i], rdtypes[i],
i, MCA_COLL_BASE_TAG_ALLTOALLW, comm, preq++); i, MCA_COLL_BASE_TAG_ALLTOALLW, comm, preq++));
++nreqs; ++nreqs;
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
mca_coll_basic_free_reqs(comm->c_coll_basic_data->mccb_reqs, nreqs); mca_coll_basic_free_reqs(comm->c_coll_basic_data->mccb_reqs, nreqs);
@ -99,9 +99,9 @@ int mca_coll_basic_alltoallw_intra(void *sbuf, int *scounts, int *sdisps,
continue; continue;
psnd = ((char *) sbuf) + sdisps[i]; psnd = ((char *) sbuf) + sdisps[i];
err = mca_pml.pml_isend_init(psnd, scounts[i], sdtypes[i], err = MCA_PML_CALL(isend_init(psnd, scounts[i], sdtypes[i],
i, MCA_COLL_BASE_TAG_ALLTOALLW, i, MCA_COLL_BASE_TAG_ALLTOALLW,
MCA_PML_BASE_SEND_STANDARD, comm, preq++); MCA_PML_BASE_SEND_STANDARD, comm, preq++));
++nreqs; ++nreqs;
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
mca_coll_basic_free_reqs(comm->c_coll_basic_data->mccb_reqs, nreqs); mca_coll_basic_free_reqs(comm->c_coll_basic_data->mccb_reqs, nreqs);
@ -111,7 +111,7 @@ int mca_coll_basic_alltoallw_intra(void *sbuf, int *scounts, int *sdisps,
/* Start your engines. This will never return an error. */ /* Start your engines. This will never return an error. */
mca_pml.pml_start(nreqs, comm->c_coll_basic_data->mccb_reqs); MCA_PML_CALL(start(nreqs, comm->c_coll_basic_data->mccb_reqs));
/* Wait for them all. If there's an error, note that we don't care /* Wait for them all. If there's an error, note that we don't care
what the error was -- just that there *was* an error. The PML what the error was -- just that there *was* an error. The PML
@ -166,9 +166,9 @@ int mca_coll_basic_alltoallw_inter(void *sbuf, int *scounts, int *sdisps,
/* Post all receives first -- a simple optimization */ /* Post all receives first -- a simple optimization */
for (i = 0; i < size; ++i) { for (i = 0; i < size; ++i) {
prcv = ((char *) rbuf) + rdisps[i]; prcv = ((char *) rbuf) + rdisps[i];
err = mca_pml.pml_irecv_init(prcv, rcounts[i], rdtypes[i], err = MCA_PML_CALL(irecv_init(prcv, rcounts[i], rdtypes[i],
i, MCA_COLL_BASE_TAG_ALLTOALLW, i, MCA_COLL_BASE_TAG_ALLTOALLW,
comm, preq++); comm, preq++));
if (OMPI_SUCCESS != err) { if (OMPI_SUCCESS != err) {
mca_coll_basic_free_reqs(comm->c_coll_basic_data->mccb_reqs, nreqs); mca_coll_basic_free_reqs(comm->c_coll_basic_data->mccb_reqs, nreqs);
return err; return err;
@ -178,9 +178,9 @@ int mca_coll_basic_alltoallw_inter(void *sbuf, int *scounts, int *sdisps,
/* Now post all sends */ /* Now post all sends */
for (i = 0; i < size; ++i) { for (i = 0; i < size; ++i) {
psnd = ((char *) sbuf) + sdisps[i]; psnd = ((char *) sbuf) + sdisps[i];
err = mca_pml.pml_isend_init(psnd, scounts[i], sdtypes[i], err = MCA_PML_CALL(isend_init(psnd, scounts[i], sdtypes[i],
i, MCA_COLL_BASE_TAG_ALLTOALLW, i, MCA_COLL_BASE_TAG_ALLTOALLW,
MCA_PML_BASE_SEND_STANDARD, comm, preq++); MCA_PML_BASE_SEND_STANDARD, comm, preq++));
if (OMPI_SUCCESS != err) { if (OMPI_SUCCESS != err) {
mca_coll_basic_free_reqs(comm->c_coll_basic_data->mccb_reqs, nreqs); mca_coll_basic_free_reqs(comm->c_coll_basic_data->mccb_reqs, nreqs);
return err; return err;
@ -188,7 +188,7 @@ int mca_coll_basic_alltoallw_inter(void *sbuf, int *scounts, int *sdisps,
} }
/* Start your engines. This will never return an error. */ /* Start your engines. This will never return an error. */
mca_pml.pml_start(nreqs, comm->c_coll_basic_data->mccb_reqs); MCA_PML_CALL(start(nreqs, comm->c_coll_basic_data->mccb_reqs));
/* Wait for them all. If there's an error, note that we don't care /* Wait for them all. If there's an error, note that we don't care
what the error was -- just that there *was* an error. The PML what the error was -- just that there *was* an error. The PML


@ -44,14 +44,14 @@ int mca_coll_basic_barrier_intra_lin(struct ompi_communicator_t *comm)
/* All non-root send & receive zero-length message. */ /* All non-root send & receive zero-length message. */
if (rank > 0) { if (rank > 0) {
err = mca_pml.pml_send(NULL, 0, MPI_BYTE, 0, MCA_COLL_BASE_TAG_BARRIER, err = MCA_PML_CALL(send(NULL, 0, MPI_BYTE, 0, MCA_COLL_BASE_TAG_BARRIER,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
return err; return err;
} }
err = mca_pml.pml_recv(NULL, 0, MPI_BYTE, 0, MCA_COLL_BASE_TAG_BARRIER, err = MCA_PML_CALL(recv(NULL, 0, MPI_BYTE, 0, MCA_COLL_BASE_TAG_BARRIER,
comm, MPI_STATUS_IGNORE); comm, MPI_STATUS_IGNORE));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
return err; return err;
} }
@ -61,17 +61,17 @@ int mca_coll_basic_barrier_intra_lin(struct ompi_communicator_t *comm)
else { else {
for (i = 1; i < size; ++i) { for (i = 1; i < size; ++i) {
err = mca_pml.pml_recv(NULL, 0, MPI_BYTE, MPI_ANY_SOURCE, err = MCA_PML_CALL(recv(NULL, 0, MPI_BYTE, MPI_ANY_SOURCE,
MCA_COLL_BASE_TAG_BARRIER, MCA_COLL_BASE_TAG_BARRIER,
comm, MPI_STATUS_IGNORE); comm, MPI_STATUS_IGNORE));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
return err; return err;
} }
} }
for (i = 1; i < size; ++i) { for (i = 1; i < size; ++i) {
err = mca_pml.pml_send(NULL, 0, MPI_BYTE, i, MCA_COLL_BASE_TAG_BARRIER, err = MCA_PML_CALL(send(NULL, 0, MPI_BYTE, i, MCA_COLL_BASE_TAG_BARRIER,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
return err; return err;
} }
@ -114,9 +114,9 @@ int mca_coll_basic_barrier_intra_log(struct ompi_communicator_t *comm)
for (i = dim, mask = 1 << i; i > hibit; --i, mask >>= 1) { for (i = dim, mask = 1 << i; i > hibit; --i, mask >>= 1) {
peer = rank | mask; peer = rank | mask;
if (peer < size) { if (peer < size) {
err = mca_pml.pml_recv(NULL, 0, MPI_BYTE, peer, err = MCA_PML_CALL(recv(NULL, 0, MPI_BYTE, peer,
MCA_COLL_BASE_TAG_BARRIER, MCA_COLL_BASE_TAG_BARRIER,
comm, MPI_STATUS_IGNORE); comm, MPI_STATUS_IGNORE));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
return err; return err;
} }
@ -127,15 +127,15 @@ int mca_coll_basic_barrier_intra_log(struct ompi_communicator_t *comm)
if (rank > 0) { if (rank > 0) {
peer = rank & ~(1 << hibit); peer = rank & ~(1 << hibit);
err = mca_pml.pml_send(NULL, 0, MPI_BYTE, peer, MCA_COLL_BASE_TAG_BARRIER, err = MCA_PML_CALL(send(NULL, 0, MPI_BYTE, peer, MCA_COLL_BASE_TAG_BARRIER,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
return err; return err;
} }
err = mca_pml.pml_recv(NULL, 0, MPI_BYTE, peer, err = MCA_PML_CALL(recv(NULL, 0, MPI_BYTE, peer,
MCA_COLL_BASE_TAG_BARRIER, MCA_COLL_BASE_TAG_BARRIER,
comm, MPI_STATUS_IGNORE); comm, MPI_STATUS_IGNORE));
} }
/* Send to children. */ /* Send to children. */
@ -143,9 +143,9 @@ int mca_coll_basic_barrier_intra_log(struct ompi_communicator_t *comm)
for (i = hibit + 1, mask = 1 << i; i <= dim; ++i, mask <<= 1) { for (i = hibit + 1, mask = 1 << i; i <= dim; ++i, mask <<= 1) {
peer = rank | mask; peer = rank | mask;
if (peer < size) { if (peer < size) {
err = mca_pml.pml_send(NULL, 0, MPI_BYTE, peer, err = MCA_PML_CALL(send(NULL, 0, MPI_BYTE, peer,
MCA_COLL_BASE_TAG_BARRIER, MCA_COLL_BASE_TAG_BARRIER,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
return err; return err;
} }


@ -52,9 +52,9 @@ int mca_coll_basic_bcast_lin_intra(void *buff, int count,
/* Non-root receive the data. */ /* Non-root receive the data. */
if (rank != root) { if (rank != root) {
return mca_pml.pml_recv(buff, count, datatype, root, return MCA_PML_CALL(recv(buff, count, datatype, root,
MCA_COLL_BASE_TAG_BCAST, comm, MCA_COLL_BASE_TAG_BCAST, comm,
MPI_STATUS_IGNORE); MPI_STATUS_IGNORE));
} }
/* Root sends data to all others. */ /* Root sends data to all others. */
@ -64,10 +64,10 @@ int mca_coll_basic_bcast_lin_intra(void *buff, int count,
continue; continue;
} }
err = mca_pml.pml_isend_init(buff, count, datatype, i, err = MCA_PML_CALL(isend_init(buff, count, datatype, i,
MCA_COLL_BASE_TAG_BCAST, MCA_COLL_BASE_TAG_BCAST,
MCA_PML_BASE_SEND_STANDARD, MCA_PML_BASE_SEND_STANDARD,
comm, preq++); comm, preq++));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
return err; return err;
} }
@ -76,7 +76,7 @@ int mca_coll_basic_bcast_lin_intra(void *buff, int count,
/* Start your engines. This will never return an error. */ /* Start your engines. This will never return an error. */
mca_pml.pml_start(i, reqs); MCA_PML_CALL(start(i, reqs));
/* Wait for them all. If there's an error, note that we don't /* Wait for them all. If there's an error, note that we don't
care what the error was -- just that there *was* an error. The care what the error was -- just that there *was* an error. The
@ -134,9 +134,9 @@ int mca_coll_basic_bcast_log_intra(void *buff, int count,
if (vrank > 0) { if (vrank > 0) {
peer = ((vrank & ~(1 << hibit)) + root) % size; peer = ((vrank & ~(1 << hibit)) + root) % size;
err = mca_pml.pml_recv(buff, count, datatype, peer, err = MCA_PML_CALL(recv(buff, count, datatype, peer,
MCA_COLL_BASE_TAG_BCAST, MCA_COLL_BASE_TAG_BCAST,
comm, MPI_STATUS_IGNORE); comm, MPI_STATUS_IGNORE));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
return err; return err;
} }
@ -153,10 +153,10 @@ int mca_coll_basic_bcast_log_intra(void *buff, int count,
peer = (peer + root) % size; peer = (peer + root) % size;
++nreqs; ++nreqs;
err = mca_pml.pml_isend_init(buff, count, datatype, peer, err = MCA_PML_CALL(isend_init(buff, count, datatype, peer,
MCA_COLL_BASE_TAG_BCAST, MCA_COLL_BASE_TAG_BCAST,
MCA_PML_BASE_SEND_STANDARD, MCA_PML_BASE_SEND_STANDARD,
comm, preq++); comm, preq++));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
mca_coll_basic_free_reqs(reqs, preq - reqs); mca_coll_basic_free_reqs(reqs, preq - reqs);
return err; return err;
@ -170,7 +170,7 @@ int mca_coll_basic_bcast_log_intra(void *buff, int count,
/* Start your engines. This will never return an error. */ /* Start your engines. This will never return an error. */
mca_pml.pml_start(nreqs, reqs); MCA_PML_CALL(start(nreqs, reqs));
/* Wait for them all. If there's an error, note that we don't /* Wait for them all. If there's an error, note that we don't
care what the error was -- just that there *was* an error. care what the error was -- just that there *was* an error.
@ -218,17 +218,17 @@ int mca_coll_basic_bcast_lin_inter(void *buff, int count,
} }
else if ( MPI_ROOT != root ) { else if ( MPI_ROOT != root ) {
/* Non-root receive the data. */ /* Non-root receive the data. */
err = mca_pml.pml_recv(buff, count, datatype, root, err = MCA_PML_CALL(recv(buff, count, datatype, root,
MCA_COLL_BASE_TAG_BCAST, comm, MCA_COLL_BASE_TAG_BCAST, comm,
MPI_STATUS_IGNORE); MPI_STATUS_IGNORE));
} }
else { else {
/* root section */ /* root section */
for (i = 0; i < rsize; i++) { for (i = 0; i < rsize; i++) {
err = mca_pml.pml_isend(buff, count, datatype, i, err = MCA_PML_CALL(isend(buff, count, datatype, i,
MCA_COLL_BASE_TAG_BCAST, MCA_COLL_BASE_TAG_BCAST,
MCA_PML_BASE_SEND_STANDARD, MCA_PML_BASE_SEND_STANDARD,
comm, &(reqs[i])); comm, &(reqs[i])));
if (OMPI_SUCCESS != err) { if (OMPI_SUCCESS != err) {
return err; return err;
} }


@ -58,17 +58,17 @@ int mca_coll_basic_exscan_intra(void *sbuf, void *rbuf, int count,
/* If we're rank 0, then we send our sbuf to the next rank */ /* If we're rank 0, then we send our sbuf to the next rank */
if (0 == rank) { if (0 == rank) {
return mca_pml.pml_send(sbuf, count, dtype, rank + 1, return MCA_PML_CALL(send(sbuf, count, dtype, rank + 1,
MCA_COLL_BASE_TAG_EXSCAN, MCA_COLL_BASE_TAG_EXSCAN,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
} }
/* If we're the last rank, then just receive the result from the /* If we're the last rank, then just receive the result from the
prior rank */ prior rank */
else if ((size - 1) == rank) { else if ((size - 1) == rank) {
return mca_pml.pml_recv(rbuf, count, dtype, rank - 1, return MCA_PML_CALL(recv(rbuf, count, dtype, rank - 1,
MCA_COLL_BASE_TAG_EXSCAN, comm, MPI_STATUS_IGNORE); MCA_COLL_BASE_TAG_EXSCAN, comm, MPI_STATUS_IGNORE));
} }
/* Otherwise, get the result from the prior rank, combine it with my /* Otherwise, get the result from the prior rank, combine it with my
@ -76,8 +76,8 @@ int mca_coll_basic_exscan_intra(void *sbuf, void *rbuf, int count,
/* Start the receive for the prior rank's answer */ /* Start the receive for the prior rank's answer */
err = mca_pml.pml_irecv(rbuf, count, dtype, rank - 1, err = MCA_PML_CALL(irecv(rbuf, count, dtype, rank - 1,
MCA_COLL_BASE_TAG_EXSCAN, comm, &req); MCA_COLL_BASE_TAG_EXSCAN, comm, &req));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
goto error; goto error;
} }
@ -142,9 +142,9 @@ int mca_coll_basic_exscan_intra(void *sbuf, void *rbuf, int count,
/* Send my result off to the next rank */ /* Send my result off to the next rank */
err = mca_pml.pml_send(reduce_buffer, count, dtype, rank + 1, err = MCA_PML_CALL(send(reduce_buffer, count, dtype, rank + 1,
MCA_COLL_BASE_TAG_EXSCAN, MCA_COLL_BASE_TAG_EXSCAN,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
/* Error */ /* Error */


@ -53,9 +53,9 @@ int mca_coll_basic_gather_intra(void *sbuf, int scount,
/* Everyone but root sends data and returns. */ /* Everyone but root sends data and returns. */
if (rank != root) { if (rank != root) {
return mca_pml.pml_send(sbuf, scount, sdtype, root, return MCA_PML_CALL(send(sbuf, scount, sdtype, root,
MCA_COLL_BASE_TAG_GATHER, MCA_COLL_BASE_TAG_GATHER,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
} }
/* I am the root, loop receiving the data. */ /* I am the root, loop receiving the data. */
@ -73,9 +73,9 @@ int mca_coll_basic_gather_intra(void *sbuf, int scount,
err = ompi_ddt_sndrcv(sbuf, scount, sdtype, ptmp, err = ompi_ddt_sndrcv(sbuf, scount, sdtype, ptmp,
rcount, rdtype); rcount, rdtype);
} else { } else {
err = mca_pml.pml_recv(ptmp, rcount, rdtype, i, err = MCA_PML_CALL(recv(ptmp, rcount, rdtype, i,
MCA_COLL_BASE_TAG_GATHER, MCA_COLL_BASE_TAG_GATHER,
comm, MPI_STATUS_IGNORE); comm, MPI_STATUS_IGNORE));
} }
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
return err; return err;
@ -120,9 +120,9 @@ int mca_coll_basic_gather_inter(void *sbuf, int scount,
} }
else if ( MPI_ROOT != root ) { else if ( MPI_ROOT != root ) {
/* Everyone but root sends data and returns. */ /* Everyone but root sends data and returns. */
err = mca_pml.pml_send(sbuf, scount, sdtype, root, err = MCA_PML_CALL(send(sbuf, scount, sdtype, root,
MCA_COLL_BASE_TAG_GATHER, MCA_COLL_BASE_TAG_GATHER,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
} }
else { else {
/* I am the root, loop receiving the data. */ /* I am the root, loop receiving the data. */
@ -133,9 +133,9 @@ int mca_coll_basic_gather_inter(void *sbuf, int scount,
incr = extent * rcount; incr = extent * rcount;
for (i = 0, ptmp = (char *) rbuf; i < size; ++i, ptmp += incr) { for (i = 0, ptmp = (char *) rbuf; i < size; ++i, ptmp += incr) {
err = mca_pml.pml_recv(ptmp, rcount, rdtype, i, err = MCA_PML_CALL(recv(ptmp, rcount, rdtype, i,
MCA_COLL_BASE_TAG_GATHER, MCA_COLL_BASE_TAG_GATHER,
comm, MPI_STATUS_IGNORE); comm, MPI_STATUS_IGNORE));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
return err; return err;
} }


@ -59,9 +59,9 @@ int mca_coll_basic_gatherv_intra(void *sbuf, int scount,
get here if scount > 0 or rank == root. */ get here if scount > 0 or rank == root. */
if (rank != root) { if (rank != root) {
err = mca_pml.pml_send(sbuf, scount, sdtype, root, err = MCA_PML_CALL(send(sbuf, scount, sdtype, root,
MCA_COLL_BASE_TAG_GATHERV, MCA_COLL_BASE_TAG_GATHERV,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
return err; return err;
} }
@ -84,9 +84,9 @@ int mca_coll_basic_gatherv_intra(void *sbuf, int scount,
err = ompi_ddt_sndrcv(sbuf, scount, sdtype, err = ompi_ddt_sndrcv(sbuf, scount, sdtype,
ptmp, rcounts[i], rdtype); ptmp, rcounts[i], rdtype);
} else { } else {
err = mca_pml.pml_recv(ptmp, rcounts[i], rdtype, i, err = MCA_PML_CALL(recv(ptmp, rcounts[i], rdtype, i,
MCA_COLL_BASE_TAG_GATHERV, MCA_COLL_BASE_TAG_GATHERV,
comm, MPI_STATUS_IGNORE); comm, MPI_STATUS_IGNORE));
} }
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
@ -134,9 +134,9 @@ int mca_coll_basic_gatherv_inter(void *sbuf, int scount,
} }
else if ( MPI_ROOT != root ) { else if ( MPI_ROOT != root ) {
/* Everyone but root sends data and returns. */ /* Everyone but root sends data and returns. */
err = mca_pml.pml_send(sbuf, scount, sdtype, root, err = MCA_PML_CALL(send(sbuf, scount, sdtype, root,
MCA_COLL_BASE_TAG_GATHERV, MCA_COLL_BASE_TAG_GATHERV,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
} }
else { else {
/* I am the root, loop receiving data. */ /* I am the root, loop receiving data. */
@ -151,9 +151,9 @@ int mca_coll_basic_gatherv_inter(void *sbuf, int scount,
} }
ptmp = ((char *) rbuf) + (extent * disps[i]); ptmp = ((char *) rbuf) + (extent * disps[i]);
err = mca_pml.pml_irecv(ptmp, rcounts[i], rdtype, i, err = MCA_PML_CALL(irecv(ptmp, rcounts[i], rdtype, i,
MCA_COLL_BASE_TAG_GATHERV, MCA_COLL_BASE_TAG_GATHERV,
comm, &reqs[i]); comm, &reqs[i]));
if (OMPI_SUCCESS != err) { if (OMPI_SUCCESS != err) {
return err; return err;
} }


@ -55,9 +55,9 @@ int mca_coll_basic_reduce_lin_intra(void *sbuf, void *rbuf, int count,
/* If not root, send data to the root. */ /* If not root, send data to the root. */
if (rank != root) { if (rank != root) {
err = mca_pml.pml_send(sbuf, count, dtype, root, err = MCA_PML_CALL(send(sbuf, count, dtype, root,
MCA_COLL_BASE_TAG_REDUCE, MCA_COLL_BASE_TAG_REDUCE,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
return err; return err;
} }
@ -202,8 +202,8 @@ int mca_coll_basic_reduce_lin_intra(void *sbuf, void *rbuf, int count,
if (rank == (size - 1)) { if (rank == (size - 1)) {
err = ompi_ddt_sndrcv(sbuf, count, dtype, rbuf, count, dtype); err = ompi_ddt_sndrcv(sbuf, count, dtype, rbuf, count, dtype);
} else { } else {
err = mca_pml.pml_recv(rbuf, count, dtype, size - 1, err = MCA_PML_CALL(recv(rbuf, count, dtype, size - 1,
MCA_COLL_BASE_TAG_REDUCE, comm, MPI_STATUS_IGNORE); MCA_COLL_BASE_TAG_REDUCE, comm, MPI_STATUS_IGNORE));
} }
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
if (NULL != free_buffer) { if (NULL != free_buffer) {
@ -218,9 +218,9 @@ int mca_coll_basic_reduce_lin_intra(void *sbuf, void *rbuf, int count,
if (rank == i) { if (rank == i) {
inbuf = sbuf; inbuf = sbuf;
} else { } else {
err = mca_pml.pml_recv(pml_buffer, count, dtype, i, err = MCA_PML_CALL(recv(pml_buffer, count, dtype, i,
MCA_COLL_BASE_TAG_REDUCE, comm, MCA_COLL_BASE_TAG_REDUCE, comm,
MPI_STATUS_IGNORE); MPI_STATUS_IGNORE));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
if (NULL != free_buffer) { if (NULL != free_buffer) {
free(free_buffer); free(free_buffer);
@ -320,9 +320,9 @@ int mca_coll_basic_reduce_log_intra(void *sbuf, void *rbuf, int count,
peer = (peer + root) % size; peer = (peer + root) % size;
} }
err = mca_pml.pml_send( snd_buffer, count, err = MCA_PML_CALL(send( snd_buffer, count,
dtype, peer, MCA_COLL_BASE_TAG_REDUCE, dtype, peer, MCA_COLL_BASE_TAG_REDUCE,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
if (NULL != free_buffer) { if (NULL != free_buffer) {
free(free_buffer); free(free_buffer);
@ -354,9 +354,9 @@ int mca_coll_basic_reduce_log_intra(void *sbuf, void *rbuf, int count,
the data in the pml_buffer and then apply to operation the data in the pml_buffer and then apply to operation
between this buffer and the user provided data. */ between this buffer and the user provided data. */
err = mca_pml.pml_recv( rcv_buffer, count, dtype, peer, err = MCA_PML_CALL(recv( rcv_buffer, count, dtype, peer,
MCA_COLL_BASE_TAG_REDUCE, comm, MCA_COLL_BASE_TAG_REDUCE, comm,
MPI_STATUS_IGNORE ); MPI_STATUS_IGNORE ));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
if (NULL != free_buffer) { if (NULL != free_buffer) {
free(free_buffer); free(free_buffer);
@ -396,14 +396,14 @@ int mca_coll_basic_reduce_log_intra(void *sbuf, void *rbuf, int count,
if (root == rank) { if (root == rank) {
ompi_ddt_sndrcv( snd_buffer, count, dtype, rbuf, count, dtype); ompi_ddt_sndrcv( snd_buffer, count, dtype, rbuf, count, dtype);
} else { } else {
err = mca_pml.pml_send( snd_buffer, count, err = MCA_PML_CALL(send( snd_buffer, count,
dtype, root, MCA_COLL_BASE_TAG_REDUCE, dtype, root, MCA_COLL_BASE_TAG_REDUCE,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
} }
} else if (rank == root) { } else if (rank == root) {
err = mca_pml.pml_recv( rcv_buffer, count, dtype, 0, err = MCA_PML_CALL(recv( rcv_buffer, count, dtype, 0,
MCA_COLL_BASE_TAG_REDUCE, MCA_COLL_BASE_TAG_REDUCE,
comm, MPI_STATUS_IGNORE); comm, MPI_STATUS_IGNORE));
if( rcv_buffer != rbuf ) { if( rcv_buffer != rbuf ) {
ompi_op_reduce(op, rcv_buffer, rbuf, count, dtype); ompi_op_reduce(op, rcv_buffer, rbuf, count, dtype);
} }
@ -449,9 +449,9 @@ int mca_coll_basic_reduce_lin_inter(void *sbuf, void *rbuf, int count,
} }
else if ( MPI_ROOT != root ) { else if ( MPI_ROOT != root ) {
/* If not root, send data to the root. */ /* If not root, send data to the root. */
err = mca_pml.pml_send(sbuf, count, dtype, root, err = MCA_PML_CALL(send(sbuf, count, dtype, root,
MCA_COLL_BASE_TAG_REDUCE, MCA_COLL_BASE_TAG_REDUCE,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
} }
else { else {
/* Root receives and reduces messages */ /* Root receives and reduces messages */
@ -466,9 +466,9 @@ int mca_coll_basic_reduce_lin_inter(void *sbuf, void *rbuf, int count,
/* Initialize the receive buffer. */ /* Initialize the receive buffer. */
err = mca_pml.pml_recv(rbuf, count, dtype, 0, err = MCA_PML_CALL(recv(rbuf, count, dtype, 0,
MCA_COLL_BASE_TAG_REDUCE, comm, MCA_COLL_BASE_TAG_REDUCE, comm,
MPI_STATUS_IGNORE); MPI_STATUS_IGNORE));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
if (NULL != free_buffer) { if (NULL != free_buffer) {
free(free_buffer); free(free_buffer);
@ -478,9 +478,9 @@ int mca_coll_basic_reduce_lin_inter(void *sbuf, void *rbuf, int count,
/* Loop receiving and calling reduction function (C or Fortran). */ /* Loop receiving and calling reduction function (C or Fortran). */
for (i = 1; i < size; i++) { for (i = 1; i < size; i++) {
err = mca_pml.pml_recv(pml_buffer, count, dtype, i, err = MCA_PML_CALL(recv(pml_buffer, count, dtype, i,
MCA_COLL_BASE_TAG_REDUCE, comm, MCA_COLL_BASE_TAG_REDUCE, comm,
MPI_STATUS_IGNORE); MPI_STATUS_IGNORE));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
if (NULL != free_buffer) { if (NULL != free_buffer) {
free(free_buffer); free(free_buffer);


@ -168,17 +168,17 @@ int mca_coll_basic_reduce_scatter_inter(void *sbuf, void *rbuf, int *rcounts,
} }
/* Do a send-recv between the two root procs. to avoid deadlock */ /* Do a send-recv between the two root procs. to avoid deadlock */
err = mca_pml.pml_isend (sbuf, totalcounts, dtype, 0, err = MCA_PML_CALL(isend (sbuf, totalcounts, dtype, 0,
MCA_COLL_BASE_TAG_REDUCE_SCATTER, MCA_COLL_BASE_TAG_REDUCE_SCATTER,
MCA_PML_BASE_SEND_STANDARD, MCA_PML_BASE_SEND_STANDARD,
comm, &req ); comm, &req ));
if ( OMPI_SUCCESS != err ) { if ( OMPI_SUCCESS != err ) {
goto exit; goto exit;
} }
err = mca_pml.pml_recv(tmpbuf2, totalcounts, dtype, 0, err = MCA_PML_CALL(recv(tmpbuf2, totalcounts, dtype, 0,
MCA_COLL_BASE_TAG_REDUCE_SCATTER, comm, MCA_COLL_BASE_TAG_REDUCE_SCATTER, comm,
MPI_STATUS_IGNORE); MPI_STATUS_IGNORE));
if (OMPI_SUCCESS != err) { if (OMPI_SUCCESS != err) {
goto exit; goto exit;
} }
@ -194,9 +194,9 @@ int mca_coll_basic_reduce_scatter_inter(void *sbuf, void *rbuf, int *rcounts,
tmpbuf2. tmpbuf2.
*/ */
for (i = 1; i < rsize; i++) { for (i = 1; i < rsize; i++) {
err = mca_pml.pml_recv(tmpbuf, totalcounts, dtype, i, err = MCA_PML_CALL(recv(tmpbuf, totalcounts, dtype, i,
MCA_COLL_BASE_TAG_REDUCE_SCATTER, comm, MCA_COLL_BASE_TAG_REDUCE_SCATTER, comm,
MPI_STATUS_IGNORE); MPI_STATUS_IGNORE));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
goto exit; goto exit;
} }
@ -207,9 +207,9 @@ int mca_coll_basic_reduce_scatter_inter(void *sbuf, void *rbuf, int *rcounts,
} }
else { else {
/* If not root, send data to the root. */ /* If not root, send data to the root. */
err = mca_pml.pml_send(sbuf, totalcounts, dtype, root, err = MCA_PML_CALL(send(sbuf, totalcounts, dtype, root,
MCA_COLL_BASE_TAG_REDUCE_SCATTER, MCA_COLL_BASE_TAG_REDUCE_SCATTER,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
if ( OMPI_SUCCESS != err ) { if ( OMPI_SUCCESS != err ) {
goto exit; goto exit;
} }
@ -224,16 +224,16 @@ int mca_coll_basic_reduce_scatter_inter(void *sbuf, void *rbuf, int *rcounts,
/***************************************************************************/ /***************************************************************************/
if ( rank == root ) { if ( rank == root ) {
/* sendrecv between the two roots */ /* sendrecv between the two roots */
err = mca_pml.pml_irecv (tmpbuf, totalcounts, dtype, 0, err = MCA_PML_CALL(irecv (tmpbuf, totalcounts, dtype, 0,
MCA_COLL_BASE_TAG_REDUCE_SCATTER, MCA_COLL_BASE_TAG_REDUCE_SCATTER,
comm, &req); comm, &req));
if ( OMPI_SUCCESS != err ) { if ( OMPI_SUCCESS != err ) {
goto exit; goto exit;
} }
err = mca_pml.pml_send (tmpbuf2, totalcounts, dtype, 0, err = MCA_PML_CALL(send (tmpbuf2, totalcounts, dtype, 0,
MCA_COLL_BASE_TAG_REDUCE_SCATTER, MCA_COLL_BASE_TAG_REDUCE_SCATTER,
MCA_PML_BASE_SEND_STANDARD, comm ); MCA_PML_BASE_SEND_STANDARD, comm ));
if ( OMPI_SUCCESS != err ) { if ( OMPI_SUCCESS != err ) {
goto exit; goto exit;
} }
@ -248,17 +248,17 @@ int mca_coll_basic_reduce_scatter_inter(void *sbuf, void *rbuf, int *rcounts,
has already the correct data AND we avoid a potential has already the correct data AND we avoid a potential
deadlock here. deadlock here.
*/ */
err = mca_pml.pml_irecv (rbuf, rcounts[rank], dtype, root, err = MCA_PML_CALL(irecv (rbuf, rcounts[rank], dtype, root,
MCA_COLL_BASE_TAG_REDUCE_SCATTER, MCA_COLL_BASE_TAG_REDUCE_SCATTER,
comm, &req); comm, &req));
tcount = 0; tcount = 0;
for ( i=0; i<rsize; i++ ) { for ( i=0; i<rsize; i++ ) {
tbuf = (char *) tmpbuf + tcount *extent; tbuf = (char *) tmpbuf + tcount *extent;
err = mca_pml.pml_isend (tbuf, rcounts[i], dtype,i, err = MCA_PML_CALL(isend (tbuf, rcounts[i], dtype,i,
MCA_COLL_BASE_TAG_REDUCE_SCATTER, MCA_COLL_BASE_TAG_REDUCE_SCATTER,
MCA_PML_BASE_SEND_STANDARD, comm, MCA_PML_BASE_SEND_STANDARD, comm,
reqs++); reqs++));
if ( OMPI_SUCCESS != err ) { if ( OMPI_SUCCESS != err ) {
goto exit; goto exit;
} }
@ -277,9 +277,9 @@ int mca_coll_basic_reduce_scatter_inter(void *sbuf, void *rbuf, int *rcounts,
} }
} }
else { else {
err = mca_pml.pml_recv (rbuf, rcounts[rank], dtype, root, err = MCA_PML_CALL(recv (rbuf, rcounts[rank], dtype, root,
MCA_COLL_BASE_TAG_REDUCE_SCATTER, MCA_COLL_BASE_TAG_REDUCE_SCATTER,
comm, MPI_STATUS_IGNORE); comm, MPI_STATUS_IGNORE));
} }
exit: exit:


@ -91,9 +91,9 @@ int mca_coll_basic_scan_intra(void *sbuf, void *rbuf, int count,
/* Receive the prior answer */ /* Receive the prior answer */
err = mca_pml.pml_recv(pml_buffer, count, dtype, err = MCA_PML_CALL(recv(pml_buffer, count, dtype,
rank - 1, MCA_COLL_BASE_TAG_SCAN, comm, rank - 1, MCA_COLL_BASE_TAG_SCAN, comm,
MPI_STATUS_IGNORE); MPI_STATUS_IGNORE));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
if (NULL != free_buffer) { if (NULL != free_buffer) {
free(free_buffer); free(free_buffer);
@ -115,9 +115,9 @@ int mca_coll_basic_scan_intra(void *sbuf, void *rbuf, int count,
/* Send result to next process. */ /* Send result to next process. */
if (rank < (size - 1)) { if (rank < (size - 1)) {
return mca_pml.pml_send(rbuf, count, dtype, rank + 1, return MCA_PML_CALL(send(rbuf, count, dtype, rank + 1,
MCA_COLL_BASE_TAG_SCAN, MCA_COLL_BASE_TAG_SCAN,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
} }
/* All done */ /* All done */


@ -55,9 +55,9 @@ int mca_coll_basic_scatter_intra(void *sbuf, int scount,
/* If not root, receive data. */ /* If not root, receive data. */
if (rank != root) { if (rank != root) {
err = mca_pml.pml_recv(rbuf, rcount, rdtype, root, err = MCA_PML_CALL(recv(rbuf, rcount, rdtype, root,
MCA_COLL_BASE_TAG_SCATTER, MCA_COLL_BASE_TAG_SCATTER,
comm, MPI_STATUS_IGNORE); comm, MPI_STATUS_IGNORE));
return err; return err;
} }
@ -76,9 +76,9 @@ int mca_coll_basic_scatter_intra(void *sbuf, int scount,
if (i == rank) { if (i == rank) {
err = ompi_ddt_sndrcv(ptmp, scount, sdtype, rbuf, rcount, rdtype); err = ompi_ddt_sndrcv(ptmp, scount, sdtype, rbuf, rcount, rdtype);
} else { } else {
err = mca_pml.pml_send(ptmp, scount, sdtype, i, err = MCA_PML_CALL(send(ptmp, scount, sdtype, i,
MCA_COLL_BASE_TAG_SCATTER, MCA_COLL_BASE_TAG_SCATTER,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
} }
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
return err; return err;
@ -125,9 +125,9 @@ int mca_coll_basic_scatter_inter(void *sbuf, int scount,
} }
else if ( MPI_ROOT != root ) { else if ( MPI_ROOT != root ) {
/* If not root, receive data. */ /* If not root, receive data. */
err = mca_pml.pml_recv(rbuf, rcount, rdtype, root, err = MCA_PML_CALL(recv(rbuf, rcount, rdtype, root,
MCA_COLL_BASE_TAG_SCATTER, MCA_COLL_BASE_TAG_SCATTER,
comm, MPI_STATUS_IGNORE); comm, MPI_STATUS_IGNORE));
} }
else{ else{
/* I am the root, loop sending data. */ /* I am the root, loop sending data. */
@ -138,9 +138,9 @@ int mca_coll_basic_scatter_inter(void *sbuf, int scount,
incr *= scount; incr *= scount;
for (i = 0, ptmp = (char *) sbuf; i < size; ++i, ptmp += incr) { for (i = 0, ptmp = (char *) sbuf; i < size; ++i, ptmp += incr) {
err = mca_pml.pml_isend(ptmp, scount, sdtype, i, err = MCA_PML_CALL(isend(ptmp, scount, sdtype, i,
MCA_COLL_BASE_TAG_SCATTER, MCA_COLL_BASE_TAG_SCATTER,
MCA_PML_BASE_SEND_STANDARD, comm, reqs++); MCA_PML_BASE_SEND_STANDARD, comm, reqs++));
if (OMPI_SUCCESS != err) { if (OMPI_SUCCESS != err) {
return err; return err;
} }


@ -55,9 +55,9 @@ int mca_coll_basic_scatterv_intra(void *sbuf, int *scounts,
rcount > 0 or rank == root. */ rcount > 0 or rank == root. */
if (rank != root) { if (rank != root) {
err = mca_pml.pml_recv(rbuf, rcount, rdtype, err = MCA_PML_CALL(recv(rbuf, rcount, rdtype,
root, MCA_COLL_BASE_TAG_SCATTERV, root, MCA_COLL_BASE_TAG_SCATTERV,
comm, MPI_STATUS_IGNORE); comm, MPI_STATUS_IGNORE));
return err; return err;
} }
@ -79,9 +79,9 @@ int mca_coll_basic_scatterv_intra(void *sbuf, int *scounts,
if (i == rank) { if (i == rank) {
err = ompi_ddt_sndrcv(ptmp, scounts[i], sdtype, rbuf, rcount, rdtype); err = ompi_ddt_sndrcv(ptmp, scounts[i], sdtype, rbuf, rcount, rdtype);
} else { } else {
err = mca_pml.pml_send(ptmp, scounts[i], sdtype, i, err = MCA_PML_CALL(send(ptmp, scounts[i], sdtype, i,
MCA_COLL_BASE_TAG_SCATTERV, MCA_COLL_BASE_TAG_SCATTERV,
MCA_PML_BASE_SEND_STANDARD, comm); MCA_PML_BASE_SEND_STANDARD, comm));
} }
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
return err; return err;
@ -130,9 +130,9 @@ int mca_coll_basic_scatterv_inter(void *sbuf, int *scounts,
} }
else if ( MPI_ROOT != root ) { else if ( MPI_ROOT != root ) {
/* If not root, receive data. */ /* If not root, receive data. */
err = mca_pml.pml_recv(rbuf, rcount, rdtype, err = MCA_PML_CALL(recv(rbuf, rcount, rdtype,
root, MCA_COLL_BASE_TAG_SCATTERV, root, MCA_COLL_BASE_TAG_SCATTERV,
comm, MPI_STATUS_IGNORE); comm, MPI_STATUS_IGNORE));
} }
else { else {
/* I am the root, loop sending data. */ /* I am the root, loop sending data. */
@ -147,9 +147,9 @@ int mca_coll_basic_scatterv_inter(void *sbuf, int *scounts,
} }
ptmp = ((char *) sbuf) + (extent * disps[i]); ptmp = ((char *) sbuf) + (extent * disps[i]);
err = mca_pml.pml_isend(ptmp, scounts[i], sdtype, i, err = MCA_PML_CALL(isend(ptmp, scounts[i], sdtype, i,
MCA_COLL_BASE_TAG_SCATTERV, MCA_COLL_BASE_TAG_SCATTERV,
MCA_PML_BASE_SEND_STANDARD, comm, reqs++); MCA_PML_BASE_SEND_STANDARD, comm, reqs++));
if (MPI_SUCCESS != err) { if (MPI_SUCCESS != err) {
return err; return err;
} }


@ -80,7 +80,7 @@ int orte_errmgr_base_open(void)
if (ORTE_SUCCESS != if (ORTE_SUCCESS !=
mca_base_components_open("errmgr", 0, mca_errmgr_base_static_components, mca_base_components_open("errmgr", 0, mca_errmgr_base_static_components,
&orte_errmgr_base_components_available)) { &orte_errmgr_base_components_available, true)) {
return ORTE_ERROR; return ORTE_ERROR;
} }
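From here on, every framework's *_base_open() gains a new trailing argument to mca_base_components_open(). Only the extra boolean itself is visible in this diff; the sketch below guesses at the updated prototype (the parameter name "open_components" is an assumption) purely to show how the callers line up. Most frameworks pass true, while the PML (further down) passes !MCA_pml_DIRECT_CALL, presumably so that no competing PML components are opened when one has been compiled in directly.

    /* Assumed shape of the updated helper -- only the new trailing bool
     * is certain from this diff; its name is a guess: */
    int mca_base_components_open(const char *type_name, int output_id,
                                 const mca_base_component_t **static_components,
                                 ompi_list_t *components_available,
                                 bool open_components);

    /* Typical caller after this commit: */
    if (ORTE_SUCCESS !=
        mca_base_components_open("errmgr", 0, mca_errmgr_base_static_components,
                                 &orte_errmgr_base_components_available, true)) {
        return ORTE_ERROR;
    }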


@ -299,7 +299,7 @@ int orte_gpr_base_open(void)
if (OMPI_SUCCESS != if (OMPI_SUCCESS !=
mca_base_components_open("gpr", 0, mca_gpr_base_static_components, mca_base_components_open("gpr", 0, mca_gpr_base_static_components,
&orte_gpr_base_components_available)) { &orte_gpr_base_components_available, true)) {
return ORTE_ERROR; return ORTE_ERROR;
} }


@ -82,7 +82,7 @@ int mca_io_base_open(void)
if (OMPI_SUCCESS != if (OMPI_SUCCESS !=
mca_base_components_open("io", mca_io_base_output, mca_base_components_open("io", mca_io_base_output,
mca_io_base_static_components, mca_io_base_static_components,
&mca_io_base_components_opened)) { &mca_io_base_components_opened, true)) {
return OMPI_ERROR; return OMPI_ERROR;
} }
mca_io_base_components_opened_valid = true; mca_io_base_components_opened_valid = true;


@ -95,7 +95,8 @@ int orte_iof_base_open(void)
/* Open up all available components */ /* Open up all available components */
if (OMPI_SUCCESS != if (OMPI_SUCCESS !=
mca_base_components_open("iof", 0, mca_iof_base_static_components, mca_base_components_open("iof", 0, mca_iof_base_static_components,
&orte_iof_base.iof_components_opened)) { &orte_iof_base.iof_components_opened,
true)) {
return OMPI_ERROR; return OMPI_ERROR;
} }


@ -56,7 +56,7 @@ int mca_mpool_base_open(void)
if (OMPI_SUCCESS != if (OMPI_SUCCESS !=
mca_base_components_open("mpool", 0, mca_mpool_base_static_components, mca_base_components_open("mpool", 0, mca_mpool_base_static_components,
&mca_mpool_base_components)) { &mca_mpool_base_components, true)) {
return OMPI_ERROR; return OMPI_ERROR;
} }


@ -121,10 +121,10 @@ int orte_ns_base_open(void)
} }
/* Open up all available components */ /* Open up all available components */
if (ORTE_SUCCESS != if (ORTE_SUCCESS !=
mca_base_components_open("ns", 0, mca_ns_base_static_components, mca_base_components_open("ns", 0, mca_ns_base_static_components,
&mca_ns_base_components_available)) { &mca_ns_base_components_available, true)) {
return ORTE_ERROR; return ORTE_ERROR;
} }


@ -55,7 +55,7 @@ int mca_oob_base_open(void)
if (OMPI_SUCCESS != if (OMPI_SUCCESS !=
mca_base_components_open("oob", 0, mca_oob_base_static_components, mca_base_components_open("oob", 0, mca_oob_base_static_components,
&mca_oob_base_components)) { &mca_oob_base_components, true)) {
return OMPI_ERROR; return OMPI_ERROR;
} }


@ -63,7 +63,7 @@ int orte_pls_base_open(void)
if (ORTE_SUCCESS != if (ORTE_SUCCESS !=
mca_base_components_open("pls", 0, mca_pls_base_static_components, mca_base_components_open("pls", 0, mca_pls_base_static_components,
&orte_pls_base.pls_opened)) { &orte_pls_base.pls_opened, true)) {
return ORTE_ERROR; return ORTE_ERROR;
} }
orte_pls_base.pls_opened_valid = true; orte_pls_base.pls_opened_valid = true;


@ -22,12 +22,14 @@ DIST_SUBDIRS = base $(MCA_pml_ALL_SUBDIRS)
# Source code files # Source code files
headers = pml.h headers = pml.h
nodist_headers = pml_direct_call.h
# Conditionally install the header files # Conditionally install the header files
if WANT_INSTALL_HEADERS if WANT_INSTALL_HEADERS
ompidir = $(includedir)/openmpi/mca/pml ompidir = $(includedir)/openmpi/mca/pml
ompi_HEADERS = $(headers) dist_ompi_HEADERS = $(headers)
nodist_ompi_HEADERS = $(nodist_headers)
else else
ompidir = $(includedir) ompidir = $(includedir)
endif endif


@ -75,7 +75,8 @@ int mca_pml_base_open(void)
if (OMPI_SUCCESS != if (OMPI_SUCCESS !=
mca_base_components_open("pml", 0, mca_pml_base_static_components, mca_base_components_open("pml", 0, mca_pml_base_static_components,
&mca_pml_base_components_available)) { &mca_pml_base_components_available,
!MCA_pml_DIRECT_CALL)) {
return OMPI_ERROR; return OMPI_ERROR;
} }


@ -17,6 +17,7 @@
#include "ompi_config.h" #include "ompi_config.h"
#include "include/types.h" #include "include/types.h"
#include "mca/pml/pml.h"
#include "mca/pml/base/pml_base_recvreq.h" #include "mca/pml/base/pml_base_recvreq.h"


@ -16,6 +16,7 @@
/*%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%*/ /*%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%*/
#include "ompi_config.h" #include "ompi_config.h"
#include "mca/pml/pml.h"
#include "mca/pml/base/pml_base_request.h" #include "mca/pml/base/pml_base_request.h"
static void mca_pml_base_request_construct(mca_pml_base_request_t* req) static void mca_pml_base_request_construct(mca_pml_base_request_t* req)


@ -15,6 +15,7 @@
*/ */
#include "ompi_config.h" #include "ompi_config.h"
#include <string.h> #include <string.h>
#include "mca/pml/pml.h"
#include "mca/pml/base/pml_base_sendreq.h" #include "mca/pml/base/pml_base_sendreq.h"
static void mca_pml_base_send_request_construct(mca_pml_base_send_request_t* req); static void mca_pml_base_send_request_construct(mca_pml_base_send_request_t* req);


@ -20,6 +20,7 @@
#define MCA_PML_BASE_SEND_REQUEST_H #define MCA_PML_BASE_SEND_REQUEST_H
#include "ompi_config.h" #include "ompi_config.h"
#include "mca/pml/pml.h"
#include "datatype/datatype.h" #include "datatype/datatype.h"
#include "mca/pml/base/pml_base_request.h" #include "mca/pml/base/pml_base_request.h"


@ -488,7 +488,6 @@ struct mca_pml_base_module_1_0_0_t {
}; };
typedef struct mca_pml_base_module_1_0_0_t mca_pml_base_module_1_0_0_t; typedef struct mca_pml_base_module_1_0_0_t mca_pml_base_module_1_0_0_t;
typedef mca_pml_base_module_1_0_0_t mca_pml_base_module_t; typedef mca_pml_base_module_1_0_0_t mca_pml_base_module_t;
OMPI_DECLSPEC extern mca_pml_base_module_t mca_pml;
/* /*
* Macro for use in components that are of type pml v1.0.0 * Macro for use in components that are of type pml v1.0.0
@ -499,6 +498,22 @@ OMPI_DECLSPEC extern mca_pml_base_module_t mca_pml;
/* pml v1.0 */ \ /* pml v1.0 */ \
"pml", 1, 0, 0 "pml", 1, 0, 0
/*
* macro for doing direct call / call through struct
*/
#if MCA_pml_DIRECT_CALL
#include "mca/pml/pml_direct_call.h"
#define MCA_PML_CALL_STAMP(a, b) mca_pml_ ## a ## _ ## b
#define MCA_PML_CALL_EXPANDER(a, b) MCA_PML_CALL_STAMP(a,b)
#define MCA_PML_CALL(a) MCA_PML_CALL_EXPANDER(MCA_pml_DIRECT_CALL_COMPONENT, a)
#else
#define MCA_PML_CALL(a) mca_pml.pml_ ## a
OMPI_DECLSPEC extern mca_pml_base_module_t mca_pml;
#endif
#if defined(c_plusplus) || defined(__cplusplus) #if defined(c_plusplus) || defined(__cplusplus)
} }
#endif #endif
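The token-pasting pair above is the heart of the change. A rough trace of how one call site expands in the two build modes, assuming a direct-call build configured for the teg component (so MCA_pml_DIRECT_CALL_COMPONENT is teg); the argument list is illustrative:

    /* Source as written in the MPI layer and collectives: */
    MCA_PML_CALL(send(buf, count, dtype, dest, tag,
                      MCA_PML_BASE_SEND_STANDARD, comm));

    /* Component-struct build (MCA_pml_DIRECT_CALL == 0): */
    mca_pml.pml_send(buf, count, dtype, dest, tag,
                     MCA_PML_BASE_SEND_STANDARD, comm);

    /* Direct-call build (MCA_pml_DIRECT_CALL == 1, component teg):
     *   MCA_PML_CALL_EXPANDER(teg, send(...))
     *   -> MCA_PML_CALL_STAMP(teg, send(...))
     *   -> mca_pml_ ## teg ## _ ## send(...)            */
    mca_pml_teg_send(buf, count, dtype, dest, tag,
                     MCA_PML_BASE_SEND_STANDARD, comm);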

src/mca/pml/pml_direct_call.h.template.in (new file, 25 lines added)

@ -0,0 +1,25 @@
/*
* Copyright (c) 2004-2005 The Trustees of Indiana University.
* All rights reserved.
* Copyright (c) 2004-2005 The Trustees of the University of Tennessee.
* All rights reserved.
* Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
* University of Stuttgart. All rights reserved.
* Copyright (c) 2004-2005 The Regents of the University of California.
* All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
*
* $HEADER$
*/
#ifndef MCA_PML_DIRECT_CALL_H_
#define MCA_PML_DIRECT_CALL_H_
#if MCA_pml_DIRECT_CALL
#include @MCA_pml_DIRECT_CALL_HEADER@
#endif
#endif

src/mca/pml/teg/post_configure.sh (new file, 1 line added)

@ -0,0 +1 @@
DIRECT_CALL_HEADER="mca/pml/teg/src/pml_teg.h"
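Putting the two new files together: post_configure.sh names the header that implements the direct-call entry points, and the .template.in above is presumably instantiated with that value at configure time. A sketch of what the generated src/mca/pml/pml_direct_call.h might contain for an --enable-mca-direct=pml-teg build; the generated file is not part of this commit, so this is an illustration rather than its literal contents:

    #ifndef MCA_PML_DIRECT_CALL_H_
    #define MCA_PML_DIRECT_CALL_H_

    #if MCA_pml_DIRECT_CALL
    /* @MCA_pml_DIRECT_CALL_HEADER@ filled in from DIRECT_CALL_HEADER */
    #include "mca/pml/teg/src/pml_teg.h"
    #endif

    #endif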


@ -22,6 +22,7 @@ libmca_pml_teg_la_SOURCES = \
pml_teg.h \ pml_teg.h \
pml_teg_cancel.c \ pml_teg_cancel.c \
pml_teg_component.c \ pml_teg_component.c \
pml_teg_component.h \
pml_teg_iprobe.c \ pml_teg_iprobe.c \
pml_teg_irecv.c \ pml_teg_irecv.c \
pml_teg_isend.c \ pml_teg_isend.c \


@ -17,6 +17,8 @@
#include "ompi_config.h" #include "ompi_config.h"
#include <string.h> #include <string.h>
#include "mca/pml/pml.h"
#include "pml_ptl_array.h" #include "pml_ptl_array.h"


@ -28,6 +28,7 @@
#include "mca/ptl/base/ptl_base_recvfrag.h" #include "mca/ptl/base/ptl_base_recvfrag.h"
#include "mca/ptl/base/ptl_base_sendfrag.h" #include "mca/ptl/base/ptl_base_sendfrag.h"
#include "pml_teg.h" #include "pml_teg.h"
#include "pml_teg_component.h"
#include "pml_teg_proc.h" #include "pml_teg_proc.h"
#include "pml_teg_ptl.h" #include "pml_teg_ptl.h"
#include "pml_teg_recvreq.h" #include "pml_teg_recvreq.h"


@ -72,8 +72,6 @@ extern mca_pml_teg_t mca_pml_teg;
* PML module functions. * PML module functions.
*/ */
OMPI_COMP_EXPORT extern mca_pml_base_component_1_0_0_t mca_pml_teg_component;
extern int mca_pml_teg_component_open(void); extern int mca_pml_teg_component_open(void);
extern int mca_pml_teg_component_close(void); extern int mca_pml_teg_component_close(void);

src/mca/pml/teg/src/pml_teg_component.h (new file, 27 lines added)

@ -0,0 +1,27 @@
/*
* Copyright (c) 2004-2005 The Trustees of Indiana University.
* All rights reserved.
* Copyright (c) 2004-2005 The Trustees of the University of Tennessee.
* All rights reserved.
* Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
* University of Stuttgart. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
*
* $HEADER$
*/
/**
* @file
*/
#ifndef MCA_PML_TEG_COMPONENT_H
#define MCA_PML_TEG_COMPONENT_H
/*
* PML module functions.
*/
OMPI_COMP_EXPORT extern mca_pml_base_component_1_0_0_t mca_pml_teg_component;
#endif
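With the component declaration split out of pml_teg.h, the include chain for a teg direct-call build stays manageable. A rough picture of who includes whom, derived from the files in this commit; the arrows are a reading aid, not generated output:

    /*
     *   mca/pml/pml.h
     *     #if MCA_pml_DIRECT_CALL
     *       -> mca/pml/pml_direct_call.h
     *            -> mca/pml/teg/src/pml_teg.h      (via DIRECT_CALL_HEADER)
     *
     *   pml_teg_component.c
     *     -> pml_teg_component.h                   (declares mca_pml_teg_component)
     */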


@ -17,6 +17,7 @@
#ifndef _MCA_PML_BASE_PTL_ #ifndef _MCA_PML_BASE_PTL_
#define _MCA_PML_BASE_PTL_ #define _MCA_PML_BASE_PTL_
#include "mca/pml/pml.h"
#include "mca/ptl/ptl.h" #include "mca/ptl/ptl.h"
#include "threads/condition.h" #include "threads/condition.h"
#if defined(c_plusplus) || defined(__cplusplus) #if defined(c_plusplus) || defined(__cplusplus)


@ -20,6 +20,7 @@
#include "ompi_config.h" #include "ompi_config.h"
#include "mca/pml/pml.h"
#include "pml_teg_recvfrag.h" #include "pml_teg_recvfrag.h"
#include "pml_teg_proc.h" #include "pml_teg_proc.h"


@ -16,6 +16,7 @@
#include "ompi_config.h" #include "ompi_config.h"
#include "mca/pml/pml.h"
#include "mca/ptl/ptl.h" #include "mca/ptl/ptl.h"
#include "mca/ptl/base/ptl_base_comm.h" #include "mca/ptl/base/ptl_base_comm.h"
#include "pml_teg_recvreq.h" #include "pml_teg_recvreq.h"


@ -19,6 +19,7 @@
#include "ompi_config.h" #include "ompi_config.h"
#include "include/constants.h" #include "include/constants.h"
#include "mca/pml/pml.h"
#include "mca/ptl/ptl.h" #include "mca/ptl/ptl.h"
#include "pml_teg.h" #include "pml_teg.h"
#include "pml_teg_proc.h" #include "pml_teg_proc.h"


@ -22,6 +22,7 @@
#include "event/event.h" #include "event/event.h"
#include "mca/mca.h" #include "mca/mca.h"
#include "mca/base/base.h" #include "mca/base/base.h"
#include "mca/pml/pml.h"
#include "mca/ptl/ptl.h" #include "mca/ptl/ptl.h"
#include "mca/ptl/base/base.h" #include "mca/ptl/base/base.h"


@ -16,6 +16,8 @@
#include "ompi_config.h" #include "ompi_config.h"
#include <string.h> #include <string.h>
#include "mca/pml/pml.h"
#include "mca/ptl/base/ptl_base_comm.h" #include "mca/ptl/base/ptl_base_comm.h"
static void mca_pml_ptl_comm_construct(mca_pml_ptl_comm_t* comm); static void mca_pml_ptl_comm_construct(mca_pml_ptl_comm_t* comm);


@ -21,6 +21,7 @@
#include "mca/mca.h" #include "mca/mca.h"
#include "mca/base/base.h" #include "mca/base/base.h"
#include "mca/base/mca_base_param.h" #include "mca/base/mca_base_param.h"
#include "mca/pml/pml.h"
#include "mca/ptl/ptl.h" #include "mca/ptl/ptl.h"
#include "mca/ptl/base/base.h" #include "mca/ptl/base/base.h"
@ -54,7 +55,7 @@ int mca_ptl_base_open(void)
if (OMPI_SUCCESS != if (OMPI_SUCCESS !=
mca_base_components_open("ptl", 0, mca_ptl_base_static_components, mca_base_components_open("ptl", 0, mca_ptl_base_static_components,
&mca_ptl_base_components_opened)) { &mca_ptl_base_components_opened, true)) {
return OMPI_ERROR; return OMPI_ERROR;
} }


@ -16,6 +16,8 @@
/*%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%*/ /*%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%*/
#include "ompi_config.h" #include "ompi_config.h"
#include "mca/pml/pml.h"
#include "mca/ptl/ptl.h" #include "mca/ptl/ptl.h"
#include "mca/ptl/base/ptl_base_recvfrag.h" #include "mca/ptl/base/ptl_base_recvfrag.h"
#include "mca/ptl/base/ptl_base_match.h" #include "mca/ptl/base/ptl_base_match.h"


@ -145,7 +145,7 @@ int mca_ptl_base_select(bool enable_progress_threads,
/* Once we have some modules, tell the PML about them */ /* Once we have some modules, tell the PML about them */
mca_pml.pml_add_ptls(&mca_ptl_base_modules_initialized); MCA_PML_CALL(add_ptls(&mca_ptl_base_modules_initialized));
/* All done */ /* All done */


@ -15,6 +15,8 @@
*/ */
#include "ompi_config.h" #include "ompi_config.h"
#include "mca/pml/pml.h"
#include "mca/ptl/base/ptl_base_sendfrag.h" #include "mca/ptl/base/ptl_base_sendfrag.h"
static void mca_ptl_base_send_frag_construct(mca_ptl_base_send_frag_t* frag); static void mca_ptl_base_send_frag_construct(mca_ptl_base_send_frag_t* frag);


@ -29,6 +29,9 @@
#include "ptl_gm_priv.h" #include "ptl_gm_priv.h"
#include "mca/pml/teg/src/pml_teg_proc.h" #include "mca/pml/teg/src/pml_teg_proc.h"
static int mca_ptl_gm_send_quick_fin_message( struct mca_ptl_gm_peer_t* ptl_peer,
struct mca_ptl_base_frag_t* frag );
static void mca_ptl_gm_basic_frag_callback( struct gm_port* port, void* context, gm_status_t status ) static void mca_ptl_gm_basic_frag_callback( struct gm_port* port, void* context, gm_status_t status )
{ {
mca_ptl_gm_module_t* gm_ptl; mca_ptl_gm_module_t* gm_ptl;


@ -239,11 +239,16 @@
* *
*/ */
/* These are unprotected because if the pml is direct called, pml.h
   has a dependency on ptl.h and must have ptl.h fully included
   before pml.h is parsed.  It's weird, but there isn't a better way
   without doing some strange type forward declarations. */
#include "mca/mca.h"
#include "mca/pml/pml.h"
#ifndef MCA_PTL_H #ifndef MCA_PTL_H
#define MCA_PTL_H #define MCA_PTL_H
#include "mca/mca.h"
#include "mca/pml/pml.h"
#include "include/types.h" #include "include/types.h"
/* /*


@ -107,7 +107,7 @@ int orte_ras_base_open(void)
if (ORTE_SUCCESS != if (ORTE_SUCCESS !=
mca_base_components_open("ras", 0, mca_ras_base_static_components, mca_base_components_open("ras", 0, mca_ras_base_static_components,
&orte_ras_base.ras_opened)) { &orte_ras_base.ras_opened, true)) {
return ORTE_ERROR; return ORTE_ERROR;
} }
OBJ_CONSTRUCT(&orte_ras_base.ras_available, ompi_list_t); OBJ_CONSTRUCT(&orte_ras_base.ras_available, ompi_list_t);


@ -67,7 +67,7 @@ int orte_rds_base_open(void)
if (ORTE_SUCCESS != if (ORTE_SUCCESS !=
mca_base_components_open("rds", 0, mca_rds_base_static_components, mca_base_components_open("rds", 0, mca_rds_base_static_components,
&orte_rds_base.rds_components)) { &orte_rds_base.rds_components, true)) {
return ORTE_ERROR; return ORTE_ERROR;
} }
OBJ_CONSTRUCT(&orte_rds_base.rds_selected, ompi_list_t); OBJ_CONSTRUCT(&orte_rds_base.rds_selected, ompi_list_t);


@ -76,7 +76,7 @@ int orte_rmaps_base_open(void)
if (ORTE_SUCCESS != if (ORTE_SUCCESS !=
mca_base_components_open("rmaps", 0, mca_rmaps_base_static_components, mca_base_components_open("rmaps", 0, mca_rmaps_base_static_components,
&orte_rmaps_base.rmaps_opened)) { &orte_rmaps_base.rmaps_opened, true)) {
return ORTE_ERROR; return ORTE_ERROR;
} }


@ -161,7 +161,7 @@ int orte_rmgr_base_open(void)
if (ORTE_SUCCESS != if (ORTE_SUCCESS !=
mca_base_components_open("rmgr", 0, mca_rmgr_base_static_components, mca_base_components_open("rmgr", 0, mca_rmgr_base_static_components,
&orte_rmgr_base.rmgr_components)) { &orte_rmgr_base.rmgr_components, true)) {
return ORTE_ERROR; return ORTE_ERROR;
} }


@ -64,7 +64,7 @@ int orte_rml_base_open(void)
/* Open up all available components */ /* Open up all available components */
if ((rc = mca_base_components_open("rml", 0, mca_rml_base_static_components, if ((rc = mca_base_components_open("rml", 0, mca_rml_base_static_components,
&orte_rml_base.rml_components)) != OMPI_SUCCESS) { &orte_rml_base.rml_components, true)) != OMPI_SUCCESS) {
return rc; return rc;
} }
return OMPI_SUCCESS; return OMPI_SUCCESS;


@ -84,7 +84,7 @@ int orte_soh_base_open(void)
if (OMPI_SUCCESS != if (OMPI_SUCCESS !=
mca_base_components_open("soh", 0, mca_soh_base_static_components, mca_base_components_open("soh", 0, mca_soh_base_static_components,
&orte_soh_base.soh_components)) { &orte_soh_base.soh_components, true)) {
/* fprintf(stderr,"orte_soh_base_open:failed\n"); */ /* fprintf(stderr,"orte_soh_base_open:failed\n"); */
return OMPI_ERROR; return OMPI_ERROR;


@ -61,7 +61,7 @@ int mca_topo_base_open(void)
if (OMPI_SUCCESS != if (OMPI_SUCCESS !=
mca_base_components_open("topo", mca_topo_base_output, mca_base_components_open("topo", mca_topo_base_output,
mca_topo_base_static_components, mca_topo_base_static_components,
&mca_topo_base_components_opened)) { &mca_topo_base_components_opened, true)) {
return OMPI_ERROR; return OMPI_ERROR;
} }


@ -59,7 +59,7 @@ int MPI_Bsend(void *buf, int count, MPI_Datatype type, int dest, int tag, MPI_Co
OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME);
} }
rc = mca_pml.pml_isend_init(buf, count, type, dest, tag, MCA_PML_BASE_SEND_BUFFERED, comm, &request); rc = MCA_PML_CALL(isend_init(buf, count, type, dest, tag, MCA_PML_BASE_SEND_BUFFERED, comm, &request));
if(OMPI_SUCCESS != rc) if(OMPI_SUCCESS != rc)
goto error_return; goto error_return;
@ -69,7 +69,7 @@ int MPI_Bsend(void *buf, int count, MPI_Datatype type, int dest, int tag, MPI_Co
goto error_return; goto error_return;
} }
rc = mca_pml.pml_start(1, &request); rc = MCA_PML_CALL(start(1, &request));
if(OMPI_SUCCESS != rc) { if(OMPI_SUCCESS != rc) {
ompi_request_free(&request); ompi_request_free(&request);
goto error_return; goto error_return;


@ -60,7 +60,7 @@ int MPI_Bsend_init(void *buf, int count, MPI_Datatype type,
OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME);
} }
rc = mca_pml.pml_isend_init(buf,count,type,dest,tag,MCA_PML_BASE_SEND_BUFFERED,comm,request); rc = MCA_PML_CALL(isend_init(buf,count,type,dest,tag,MCA_PML_BASE_SEND_BUFFERED,comm,request));
if(OMPI_SUCCESS != rc) if(OMPI_SUCCESS != rc)
goto error_return; goto error_return;


@ -61,7 +61,7 @@ int MPI_Ibsend(void *buf, int count, MPI_Datatype type, int dest,
OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME);
} }
rc = mca_pml.pml_isend_init(buf, count, type, dest, tag, MCA_PML_BASE_SEND_BUFFERED, comm, request); rc = MCA_PML_CALL(isend_init(buf, count, type, dest, tag, MCA_PML_BASE_SEND_BUFFERED, comm, request));
if(OMPI_SUCCESS != rc) if(OMPI_SUCCESS != rc)
goto error_return; goto error_return;
@ -69,7 +69,7 @@ int MPI_Ibsend(void *buf, int count, MPI_Datatype type, int dest,
if(OMPI_SUCCESS != rc) if(OMPI_SUCCESS != rc)
goto error_return; goto error_return;
rc = mca_pml.pml_start(1, request); rc = MCA_PML_CALL(start(1, request));
if(OMPI_SUCCESS != rc) if(OMPI_SUCCESS != rc)
goto error_return; goto error_return;


@ -90,13 +90,13 @@ int MPI_Intercomm_create(MPI_Comm local_comm, int local_leader,
MPI_Request req; MPI_Request req;
/* local leader exchange group sizes lists */ /* local leader exchange group sizes lists */
rc =mca_pml.pml_irecv (&rsize, 1, MPI_INT, rleader, tag, bridge_comm, rc = MCA_PML_CALL(irecv(&rsize, 1, MPI_INT, rleader, tag, bridge_comm,
&req ); &req));
if ( rc != MPI_SUCCESS ) { if ( rc != MPI_SUCCESS ) {
goto err_exit; goto err_exit;
} }
rc = mca_pml.pml_send ( &local_size, 1, MPI_INT, rleader, tag, rc = MCA_PML_CALL(send (&local_size, 1, MPI_INT, rleader, tag,
MCA_PML_BASE_SEND_STANDARD, bridge_comm ); MCA_PML_BASE_SEND_STANDARD, bridge_comm));
if ( rc != MPI_SUCCESS ) { if ( rc != MPI_SUCCESS ) {
goto err_exit; goto err_exit;
} }


@ -58,7 +58,7 @@ int MPI_Iprobe(int source, int tag, MPI_Comm comm, int *flag, MPI_Status *status
OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME);
} }
rc = mca_pml.pml_iprobe(source, tag, comm, flag, status); rc = MCA_PML_CALL(iprobe(source, tag, comm, flag, status));
OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME);
} }


@ -60,7 +60,7 @@ int MPI_Irecv(void *buf, int count, MPI_Datatype type, int source,
OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME);
} }
rc = mca_pml.pml_irecv(buf,count,type,source,tag,comm,request); rc = MCA_PML_CALL(irecv(buf,count,type,source,tag,comm,request));
OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME);
} }


@ -60,7 +60,8 @@ int MPI_Irsend(void *buf, int count, MPI_Datatype type, int dest,
OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME);
} }
rc = mca_pml.pml_isend(buf,count,type,dest,tag,MCA_PML_BASE_SEND_READY,comm,request); rc = MCA_PML_CALL(isend(buf,count,type,dest,tag,
MCA_PML_BASE_SEND_READY,comm,request));
OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME);
} }


@ -60,7 +60,7 @@ int MPI_Isend(void *buf, int count, MPI_Datatype type, int dest,
OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME);
} }
rc = mca_pml.pml_isend(buf,count,type,dest,tag,MCA_PML_BASE_SEND_STANDARD,comm,request); rc = MCA_PML_CALL(isend(buf,count,type,dest,tag,MCA_PML_BASE_SEND_STANDARD,comm,request));
OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME);
} }


@ -61,7 +61,8 @@ int MPI_Issend(void *buf, int count, MPI_Datatype type, int dest,
OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME);
} }
rc = mca_pml.pml_isend(buf,count,type,dest,tag,MCA_PML_BASE_SEND_SYNCHRONOUS,comm,request); rc = MCA_PML_CALL(isend(buf,count,type,dest,tag,
MCA_PML_BASE_SEND_SYNCHRONOUS,comm,request));
OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME);
} }


@ -57,6 +57,6 @@ int MPI_Probe(int source, int tag, MPI_Comm comm, MPI_Status *status)
OMPI_ERRHANDLER_CHECK(rc, comm, rc, "MPI_Probe"); OMPI_ERRHANDLER_CHECK(rc, comm, rc, "MPI_Probe");
} }
rc = mca_pml.pml_probe(source, tag, comm, status); rc = MCA_PML_CALL(probe(source, tag, comm, status));
OMPI_ERRHANDLER_RETURN(rc, comm, rc, "MPI_Probe"); OMPI_ERRHANDLER_RETURN(rc, comm, rc, "MPI_Probe");
} }


@ -63,7 +63,7 @@ int MPI_Recv(void *buf, int count, MPI_Datatype type, int source,
OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME);
} }
rc = mca_pml.pml_recv(buf, count, type, source, tag, comm, status); rc = MCA_PML_CALL(recv(buf, count, type, source, tag, comm, status));
OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME);
} }


@ -60,7 +60,7 @@ int MPI_Recv_init(void *buf, int count, MPI_Datatype type, int source,
OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME);
} }
rc = mca_pml.pml_irecv_init(buf,count,type,source,tag,comm,request); rc = MCA_PML_CALL(irecv_init(buf,count,type,source,tag,comm,request));
OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME);
} }


@ -56,7 +56,8 @@ int MPI_Rsend(void *buf, int count, MPI_Datatype type, int dest, int tag, MPI_Co
OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME);
} }
rc = mca_pml.pml_send(buf, count, type, dest, tag, MCA_PML_BASE_SEND_READY, comm); rc = MCA_PML_CALL(send(buf, count, type, dest, tag,
MCA_PML_BASE_SEND_READY, comm));
OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME);
} }


@ -61,7 +61,8 @@ int MPI_Rsend_init(void *buf, int count, MPI_Datatype type,
OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME);
} }
rc = mca_pml.pml_isend_init(buf,count,type,dest,tag,MCA_PML_BASE_SEND_READY,comm,request); rc = MCA_PML_CALL(isend_init(buf,count,type,dest,tag,
MCA_PML_BASE_SEND_READY,comm,request));
OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME);
} }


@ -57,7 +57,7 @@ int MPI_Send(void *buf, int count, MPI_Datatype type, int dest,
OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME);
} }
rc = mca_pml.pml_send(buf, count, type, dest, tag, MCA_PML_BASE_SEND_STANDARD, comm); rc = MCA_PML_CALL(send(buf, count, type, dest, tag, MCA_PML_BASE_SEND_STANDARD, comm));
OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME);
} }


@ -61,7 +61,7 @@ int MPI_Send_init(void *buf, int count, MPI_Datatype type,
OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME);
} }
rc = mca_pml.pml_isend_init(buf,count,type,dest,tag,MCA_PML_BASE_SEND_STANDARD,comm,request); rc = MCA_PML_CALL(isend_init(buf,count,type,dest,tag,MCA_PML_BASE_SEND_STANDARD,comm,request));
OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME);
} }


@ -66,14 +66,14 @@ int MPI_Sendrecv(void *sendbuf, int sendcount, MPI_Datatype sendtype,
} }
if (source != MPI_PROC_NULL) { /* post recv */ if (source != MPI_PROC_NULL) { /* post recv */
rc = mca_pml.pml_irecv(recvbuf, recvcount, recvtype, rc = MCA_PML_CALL(irecv(recvbuf, recvcount, recvtype,
source, recvtag, comm, &req); source, recvtag, comm, &req));
OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME);
} }
if (dest != MPI_PROC_NULL) { /* send */ if (dest != MPI_PROC_NULL) { /* send */
rc = mca_pml.pml_send(sendbuf, sendcount, sendtype, dest, rc = MCA_PML_CALL(send(sendbuf, sendcount, sendtype, dest,
sendtag, MCA_PML_BASE_SEND_STANDARD, comm); sendtag, MCA_PML_BASE_SEND_STANDARD, comm));
OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME);
} }


@ -56,7 +56,8 @@ int MPI_Ssend(void *buf, int count, MPI_Datatype type, int dest, int tag, MPI_Co
OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME);
} }
rc = mca_pml.pml_send(buf, count, type, dest, tag, MCA_PML_BASE_SEND_SYNCHRONOUS, comm); rc = MCA_PML_CALL(send(buf, count, type, dest, tag,
MCA_PML_BASE_SEND_SYNCHRONOUS, comm));
OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME);
} }


@ -61,7 +61,8 @@ int MPI_Ssend_init(void *buf, int count, MPI_Datatype type,
OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, comm, rc, FUNC_NAME);
} }
rc = mca_pml.pml_isend_init(buf,count,type,dest,tag,MCA_PML_BASE_SEND_SYNCHRONOUS,comm,request); rc = MCA_PML_CALL(isend_init(buf,count,type,dest,tag,
MCA_PML_BASE_SEND_SYNCHRONOUS,comm,request));
OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME); OMPI_ERRHANDLER_RETURN(rc, comm, rc, FUNC_NAME);
} }


@ -45,7 +45,7 @@ int MPI_Start(MPI_Request *request)
switch((*request)->req_type) { switch((*request)->req_type) {
case OMPI_REQUEST_PML: case OMPI_REQUEST_PML:
return mca_pml.pml_start(1, request); return MCA_PML_CALL(start(1, request));
default: default:
return OMPI_SUCCESS; return OMPI_SUCCESS;
} }
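Taken together with the MPI_Send_init and MPI_Recv_init hunks above, a persistent-request sequence now reaches the PML exclusively through the macro. A small user-level sketch, with the PML entry point each call lands in noted alongside (entry points as shown in the hunks; the buffer, peer, and loop are illustrative):

    MPI_Request req;
    MPI_Send_init(buf, count, MPI_INT, dest, tag, comm, &req);
                                   /* -> MCA_PML_CALL(isend_init(...)) */
    for (i = 0; i < iterations; ++i) {
        MPI_Start(&req);           /* -> MCA_PML_CALL(start(1, &req))  */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }
    MPI_Request_free(&req);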


@ -44,6 +44,6 @@ int MPI_Startall(int count, MPI_Request *requests)
} }
OMPI_ERRHANDLER_CHECK(rc, MPI_COMM_WORLD, rc, FUNC_NAME); OMPI_ERRHANDLER_CHECK(rc, MPI_COMM_WORLD, rc, FUNC_NAME);
} }
return mca_pml.pml_start(count, requests); return MCA_PML_CALL(start(count, requests));
} }


@ -38,10 +38,10 @@
#include "mca/base/base.h" #include "mca/base/base.h"
#include "mca/base/mca_base_module_exchange.h" #include "mca/base/mca_base_module_exchange.h"
#include "mca/ptl/ptl.h"
#include "mca/ptl/base/base.h"
#include "mca/pml/pml.h" #include "mca/pml/pml.h"
#include "mca/pml/base/base.h" #include "mca/pml/base/base.h"
#include "mca/ptl/ptl.h"
#include "mca/ptl/base/base.h"
#include "mca/coll/coll.h" #include "mca/coll/coll.h"
#include "mca/coll/base/base.h" #include "mca/coll/base/base.h"
#include "mca/topo/topo.h" #include "mca/topo/topo.h"

Просмотреть файл

@ -43,6 +43,7 @@
#include "mca/allocator/allocator.h" #include "mca/allocator/allocator.h"
#include "mca/mpool/base/base.h" #include "mca/mpool/base/base.h"
#include "mca/mpool/mpool.h" #include "mca/mpool/mpool.h"
#include "mca/pml/pml.h"
#include "mca/ptl/ptl.h" #include "mca/ptl/ptl.h"
#include "mca/ptl/base/base.h" #include "mca/ptl/base/base.h"
#include "mca/pml/pml.h" #include "mca/pml/pml.h"


@ -260,7 +260,7 @@ int ompi_proc_get_proclist (orte_buffer_t* buf, int proclistsize, ompi_proc_t **
return rc; return rc;
plist[i] = ompi_proc_find_and_add ( &name, &isnew ); plist[i] = ompi_proc_find_and_add ( &name, &isnew );
if(isnew) { if(isnew) {
mca_pml.pml_add_procs(&plist[i], 1); MCA_PML_CALL(add_procs(&plist[i], 1));
} }
} }
*proclist = plist; *proclist = plist;