
Merge pull request #6223 from ggouaillardet/topic/pmix_refresh

pmix/pmi4x: refresh to latest PMIx
This commit is contained in:
Ralph Castain 2018-12-24 10:51:00 -08:00 committed by GitHub
parent 96f88052e9 b0a668457c
commit 529e17e0d9
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
435 changed files: 8826 additions and 2542 deletions


@ -8,8 +8,8 @@ Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
University of Stuttgart. All rights reserved.
Copyright (c) 2004-2005 The Regents of the University of California.
All rights reserved.
Copyright (c) 2008-2015 Cisco Systems, Inc. All rights reserved.
Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
Copyright (c) 2008-2018 Cisco Systems, Inc. All rights reserved
Copyright (c) 2013-2017 Intel, Inc. All rights reserved.
$COPYRIGHT$
Additional copyrights may follow
@ -205,17 +205,17 @@ NOTE: On MacOS/X, the default "libtool" program is different than the
m4, Autoconf and Automake build and install very quickly; Libtool will
take a minute or two.
5. You can now run PMIx's top-level "autogen.sh" script. This script
5. You can now run PMIx's top-level "autogen.pl" script. This script
will invoke the GNU Autoconf, Automake, and Libtool commands in the
proper order and setup to run PMIx's top-level "configure" script.
5a. You generally need to run autogen.sh only when the top-level
5a. You generally need to run autogen.pl only when the top-level
file "configure.ac" changes, or any files in the config/ or
<project>/config/ directories change (these directories are
where a lot of "include" files for PMIx's configure script
live).
5b. You do *NOT* need to re-run autogen.sh if you modify a
5b. You do *NOT* need to re-run autogen.pl if you modify a
Makefile.am.
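   For reference, here is a minimal sketch of the usual developer build
   flow once the autotools are installed (the install prefix below is
   illustrative, not mandated by PMIx):
   shell$ ./autogen.pl
   shell$ ./configure --prefix=$HOME/pmix-install
   shell$ make all install
   After editing a Makefile.am, simply re-running "make" is typically
   enough; the generated Makefiles rebuild themselves.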
Use of Flex


@ -8,8 +8,8 @@ Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
University of Stuttgart. All rights reserved.
Copyright (c) 2004-2005 The Regents of the University of California.
All rights reserved.
Copyright (c) 2008-2015 Cisco Systems, Inc. All rights reserved.
Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
Copyright (c) 2008-2018 Cisco Systems, Inc. All rights reserved
Copyright (c) 2013-2017 Intel, Inc. All rights reserved.
$COPYRIGHT$
Additional copyrights may follow
@ -37,7 +37,7 @@ build PMIx. You must then run:
shell$ ./autogen.pl
You will need very recent versions of GNU Autoconf, Automake, and
Libtool. If autogen.sh fails, read the HACKING file. If anything
Libtool. If autogen.pl fails, read the HACKING file. If anything
else fails, read the HACKING file. Finally, we suggest reading the
HACKING file.
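A quick way to verify which autotool versions are on your PATH before
running autogen.pl (these are standard GNU commands, not PMIx-specific;
the minimum required versions are listed in the HACKING file):
shell$ autoconf --version
shell$ automake --version
shell$ libtoolize --version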


@ -45,7 +45,7 @@ Copyright (c) 2010 ARM ltd. All rights reserved.
Copyright (c) 2010-2011 Alex Brick <bricka@ccs.neu.edu>. All rights reserved.
Copyright (c) 2012 The University of Wisconsin-La Crosse. All rights
reserved.
Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
Copyright (c) 2013-2014 Intel, Inc. All rights reserved.
Copyright (c) 2011-2014 NVIDIA Corporation. All rights reserved.
$COPYRIGHT$


@ -15,7 +15,7 @@ Copyright (c) 2007 Myricom, Inc. All rights reserved.
Copyright (c) 2008 IBM Corporation. All rights reserved.
Copyright (c) 2010 Oak Ridge National Labs. All rights reserved.
Copyright (c) 2011 University of Houston. All rights reserved.
Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
Copyright (c) 2013-2017 Intel, Inc. All rights reserved.
$COPYRIGHT$
Additional copyrights may follow


@ -30,7 +30,7 @@ greek=
# command, or with the date (if "git describe" fails) in the form of
# "date<date>".
repo_rev=git56f2d69a
repo_rev=git2d4c2874
# If tarball_version is not empty, it is used as the version string in
# the tarball filename, regardless of all other versions listed in
@ -44,7 +44,7 @@ tarball_version=
# The date when this release was created
date="Sep 28, 2018"
date="Dec 18, 2018"
# The shared library version of each of PMIx's public libraries.
# These versions are maintained in accordance with the "Library


@ -4,7 +4,7 @@
# Copyright (c) 2010 Oracle and/or its affiliates. All rights reserved.
# Copyright (c) 2013 Mellanox Technologies, Inc.
# All rights reserved.
# Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2013-2017 Intel, Inc. All rights reserved.
# Copyright (c) 2015 Research Organization for Information Science
# and Technology (RIST). All rights reserved.
# Copyright (c) 2015 IBM Corporation. All rights reserved.


@ -1,4 +1,4 @@
# Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2013-2016 Intel, Inc. All rights reserved
# Copyright (c) 2016 Research Organization for Information Science
# and Technology (RIST). All rights reserved.
# Copyright (c) 2006-2016 Cisco Systems, Inc. All rights reserved.


@ -11,7 +11,7 @@ dnl University of Stuttgart. All rights reserved.
dnl Copyright (c) 2004-2005 The Regents of the University of California.
dnl All rights reserved.
dnl Copyright (c) 2009 Sun Microsystems, Inc. All rights reserved.
dnl Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2014-2015 Intel, Inc. All rights reserved.
dnl Copyright (c) 2015 Research Organization for Information Science
dnl and Technology (RIST). All rights reserved.
dnl $COPYRIGHT$


@ -15,7 +15,7 @@
# and Technology (RIST). All rights reserved.
# Copyright (c) 2015 Los Alamos National Security, LLC. All rights
# reserved.
# Copyright (c) 2017-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2017 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow


@ -11,7 +11,7 @@ dnl University of Stuttgart. All rights reserved.
dnl Copyright (c) 2004-2005 The Regents of the University of California.
dnl All rights reserved.
dnl Copyright (c) 2007 Sun Microsystems, Inc. All rights reserved.
dnl Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2014-2015 Intel, Inc. All rights reserved.
dnl Copyright (c) 2015 Cisco Systems, Inc. All rights reserved.
dnl $COPYRIGHT$
dnl


@ -1,7 +1,7 @@
dnl -*- shell-script -*-
dnl
dnl Copyright (c) 2009 Oak Ridge National Labs. All rights reserved.
dnl Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2013-2017 Intel, Inc. All rights reserved.
dnl
dnl $COPYRIGHT$
dnl


@ -10,7 +10,7 @@ dnl Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
dnl University of Stuttgart. All rights reserved.
dnl Copyright (c) 2004-2005 The Regents of the University of California.
dnl All rights reserved.
dnl Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2014 Intel, Inc. All rights reserved.
dnl Copyright (c) 2016 Research Organization for Information Science
dnl and Technology (RIST). All rights reserved.
dnl $COPYRIGHT$


@ -1,7 +1,7 @@
dnl -*- shell-script -*-
dnl
dnl Copyright (c) 2007 Sun Microsystems, Inc. All rights reserved.
dnl Copyright (c) 2015-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2015 Intel, Inc. All rights reserved
dnl $COPYRIGHT$
dnl
dnl Additional copyrights may follow


@ -5,7 +5,7 @@ dnl All rights reserved.
dnl Copyright (c) 2017 IBM Corporation. All rights reserved.
dnl Copyright (c) 2017 Research Organization for Information Science
dnl and Technology (RIST). All rights reserved.
dnl Copyright (c) 2017-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2017 Intel, Inc. All rights reserved.
dnl $COPYRIGHT$
dnl
dnl Additional copyrights may follow


@ -1,7 +1,7 @@
dnl -*- shell-script -*-
dnl
dnl Copyright (c) 2010 Cisco Systems, Inc. All rights reserved.
dnl Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2014-2016 Intel, Inc. All rights reserved.
dnl Copyright (c) 2014 Research Organization for Information Science
dnl and Technology (RIST). All rights reserved.
dnl


@ -12,7 +12,7 @@
# All rights reserved.
# Copyright (c) 2006 QLogic Corp. All rights reserved.
# Copyright (c) 2009-2016 Cisco Systems, Inc. All rights reserved.
# Copyright (c) 2016-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2016-2017 Intel, Inc. All rights reserved.
# Copyright (c) 2015 Research Organization for Information Science
# and Technology (RIST). All rights reserved.
# Copyright (c) 2016 Los Alamos National Security, LLC. All rights


@ -10,7 +10,7 @@ dnl University of Stuttgart. All rights reserved.
dnl Copyright (c) 2004-2005 The Regents of the University of California.
dnl All rights reserved.
dnl Copyright (c) 2008-2013 Cisco Systems, Inc. All rights reserved.
dnl Copyright (c) 2017-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2017 Intel, Inc. All rights reserved.
dnl $COPYRIGHT$
dnl
dnl Additional copyrights may follow


@ -11,7 +11,7 @@ dnl University of Stuttgart. All rights reserved.
dnl Copyright (c) 2004-2005 The Regents of the University of California.
dnl All rights reserved.
dnl Copyright (c) 2012 Oracle and/or its affiliates. All rights reserved.
dnl Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2013 Intel, Inc. All rights reserved
dnl Copyright (c) 2015 Cisco Systems, Inc. All rights reserved.
dnl Copyright (c) 2015 Research Organization for Information Science
dnl and Technology (RIST). All rights reserved.


@ -12,7 +12,7 @@
# All rights reserved.
# Copyright (c) 2006-2015 Cisco Systems, Inc. All rights reserved.
# Copyright (c) 2009-2011 Oracle and/or its affiliates. All rights reserved.
# Copyright (c) 2017-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2017 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow


@ -8,7 +8,7 @@ dnl reserved.
dnl Copyright (c) 2008-2009 Cisco Systems, Inc. All rights reserved.
dnl Copyright (c) 2015 Research Organization for Information Science
dnl and Technology (RIST). All rights reserved.
dnl Copyright (c) 2016-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2016 Intel, Inc. All rights reserved.
dnl $COPYRIGHT$
dnl
dnl Additional copyrights may follow


@ -2,22 +2,22 @@ dnl
dnl Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
dnl University Research and Technology
dnl Corporation. All rights reserved.
dnl Copyright (c) 2004-2015 The University of Tennessee and The University
dnl Copyright (c) 2004-2018 The University of Tennessee and The University
dnl of Tennessee Research Foundation. All rights
dnl reserved.
dnl Copyright (c) 2004-2006 High Performance Computing Center Stuttgart,
dnl University of Stuttgart. All rights reserved.
dnl Copyright (c) 2004-2005 The Regents of the University of California.
dnl All rights reserved.
dnl Copyright (c) 2008-2015 Cisco Systems, Inc. All rights reserved.
dnl Copyright (c) 2008-2018 Cisco Systems, Inc. All rights reserved.
dnl Copyright (c) 2010 Oracle and/or its affiliates. All rights reserved.
dnl Copyright (c) 2015-2017 Research Organization for Information Science
dnl and Technology (RIST). All rights reserved.
dnl Copyright (c) 2014-2017 Los Alamos National Security, LLC. All rights
dnl Copyright (c) 2014-2018 Los Alamos National Security, LLC. All rights
dnl reserved.
dnl Copyright (c) 2017 Amazon.com, Inc. or its affiliates. All Rights
dnl reserved.
dnl Copyright (c) 2017-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2018 Intel, Inc. All rights reserved.
dnl $COPYRIGHT$
dnl
dnl Additional copyrights may follow
@ -25,65 +25,262 @@ dnl
dnl $HEADER$
dnl
dnl This is a C test to see if 128-bit __atomic_compare_exchange_n()
dnl actually works (e.g., it compiles and links successfully on
dnl ARM64+clang, but returns incorrect answers as of August 2018).
AC_DEFUN([PMIX_ATOMIC_COMPARE_EXCHANGE_N_TEST_SOURCE],[[
#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>
typedef union {
uint64_t fake@<:@2@:>@;
__int128 real;
} pmix128;
static void test1(void)
{
// As of Aug 2018, we could not figure out a way to assign 128-bit
// constants -- the compilers would not accept it. So use a fake
// union to assign 2 uint64_t's to make a single __int128.
pmix128 ptr = { .fake = { 0xFFEEDDCCBBAA0099, 0x8877665544332211 }};
pmix128 expected = { .fake = { 0x11EEDDCCBBAA0099, 0x88776655443322FF }};
pmix128 desired = { .fake = { 0x1122DDCCBBAA0099, 0x887766554433EEFF }};
bool r = __atomic_compare_exchange_n(&ptr.real, &expected.real,
desired.real, true,
__ATOMIC_RELAXED, __ATOMIC_RELAXED);
if ( !(r == false && ptr.real == expected.real)) {
exit(1);
}
}
static void test2(void)
{
pmix128 ptr = { .fake = { 0xFFEEDDCCBBAA0099, 0x8877665544332211 }};
pmix128 expected = ptr;
pmix128 desired = { .fake = { 0x1122DDCCBBAA0099, 0x887766554433EEFF }};
bool r = __atomic_compare_exchange_n(&ptr.real, &expected.real,
desired.real, true,
__ATOMIC_RELAXED, __ATOMIC_RELAXED);
if (!(r == true && ptr.real == desired.real)) {
exit(2);
}
}
int main(int argc, char** argv)
{
test1();
test2();
return 0;
}
]])
dnl ------------------------------------------------------------------
dnl This is a C test to see if 128-bit __sync_bool_compare_and_swap()
dnl actually works (e.g., it compiles and links successfully on
dnl ARM64+clang, but returns incorrect answers as of August 2018).
AC_DEFUN([PMIX_SYNC_BOOL_COMPARE_AND_SWAP_TEST_SOURCE],[[
#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>
typedef union {
uint64_t fake@<:@2@:>@;
__int128 real;
} pmix128;
static void test1(void)
{
// As of Aug 2018, we could not figure out a way to assign 128-bit
// constants -- the compilers would not accept it. So use a fake
// union to assign 2 uint64_t's to make a single __int128.
pmix128 ptr = { .fake = { 0xFFEEDDCCBBAA0099, 0x8877665544332211 }};
pmix128 oldval = { .fake = { 0x11EEDDCCBBAA0099, 0x88776655443322FF }};
pmix128 newval = { .fake = { 0x1122DDCCBBAA0099, 0x887766554433EEFF }};
bool r = __sync_bool_compare_and_swap(&ptr.real, oldval.real, newval.real);
if (!(r == false && ptr.real != newval.real)) {
exit(1);
}
}
static void test2(void)
{
pmix128 ptr = { .fake = { 0xFFEEDDCCBBAA0099, 0x8877665544332211 }};
pmix128 oldval = ptr;
pmix128 newval = { .fake = { 0x1122DDCCBBAA0099, 0x887766554433EEFF }};
bool r = __sync_bool_compare_and_swap(&ptr.real, oldval.real, newval.real);
if (!(r == true && ptr.real == newval.real)) {
exit(2);
}
}
int main(int argc, char** argv)
{
test1();
test2();
return 0;
}
]])
dnl This is a C test to see if 128-bit __atomic_compare_exchange_n()
dnl actually works (e.g., it compiles and links successfully on
dnl ARM64+clang, but returns incorrect answers as of August 2018).
AC_DEFUN([PMIX_ATOMIC_COMPARE_EXCHANGE_STRONG_TEST_SOURCE],[[
#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>
#include <stdatomic.h>
typedef union {
uint64_t fake@<:@2@:>@;
_Atomic __int128 real;
} pmix128;
static void test1(void)
{
// As of Aug 2018, we could not figure out a way to assign 128-bit
// constants -- the compilers would not accept it. So use a fake
// union to assign 2 uint64_t's to make a single __int128.
pmix128 ptr = { .fake = { 0xFFEEDDCCBBAA0099, 0x8877665544332211 }};
pmix128 expected = { .fake = { 0x11EEDDCCBBAA0099, 0x88776655443322FF }};
pmix128 desired = { .fake = { 0x1122DDCCBBAA0099, 0x887766554433EEFF }};
bool r = atomic_compare_exchange_strong (&ptr.real, &expected.real,
desired.real, true,
atomic_relaxed, atomic_relaxed);
if ( !(r == false && ptr.real == expected.real)) {
exit(1);
}
}
static void test2(void)
{
pmix128 ptr = { .fake = { 0xFFEEDDCCBBAA0099, 0x8877665544332211 }};
pmix128 expected = ptr;
pmix128 desired = { .fake = { 0x1122DDCCBBAA0099, 0x887766554433EEFF }};
bool r = atomic_compare_exchange_strong (&ptr.real, &expected.real,
desired.real, true,
atomic_relaxed, atomic_relaxed);
if (!(r == true && ptr.real == desired.real)) {
exit(2);
}
}
int main(int argc, char** argv)
{
test1();
test2();
return 0;
}
]])
dnl ------------------------------------------------------------------
dnl
dnl Check to see if a specific function is linkable.
dnl
dnl Check with:
dnl 1. No compiler/linker flags.
dnl 2. CFLAGS += -mcx16
dnl 3. LIBS += -latomic
dnl 4. Finally, if it links ok with any of #1, #2, or #3, actually try
dnl to run the test code (if we're not cross-compiling) and verify
dnl that it actually gives us the correct result.
dnl
dnl Note that we unfortunately can't use AC SEARCH_LIBS because its
dnl check incorrectly fails (because these functions are special compiler
dnl intrinsics -- SEARCH_LIBS tries with "check FUNC()", which the
dnl compiler complains doesn't match the internal prototype). So we have
dnl to use our own LINK_IFELSE tests. Indeed, since these functions are
dnl so special, we actually need a valid source code that calls the
dnl functions with correct arguments, etc. It's not enough, for example,
dnl to do the usual "try to set a function pointer to the symbol" trick to
dnl determine if these functions are available, because the compiler may
dnl not implement these as actual symbols. So just try to link a real
dnl test code.
dnl
dnl $1: function name to print
dnl $2: program to test
dnl $3: action if any of 1, 2, or 3 succeeds
dnl #4: action if all of 1, 2, and 3 fail
dnl
AC_DEFUN([PMIX_ASM_CHECK_ATOMIC_FUNC],[
PMIX_VAR_SCOPE_PUSH([pmix_asm_check_func_happy pmix_asm_check_func_CFLAGS_save pmix_asm_check_func_LIBS_save])
pmix_asm_check_func_CFLAGS_save=$CFLAGS
pmix_asm_check_func_LIBS_save=$LIBS
dnl Check with no compiler/linker flags
AC_MSG_CHECKING([for $1])
AC_LINK_IFELSE([$2],
[pmix_asm_check_func_happy=1
AC_MSG_RESULT([yes])],
[pmix_asm_check_func_happy=0
AC_MSG_RESULT([no])])
dnl If that didn't work, try again with CFLAGS+=mcx16
AS_IF([test $pmix_asm_check_func_happy -eq 0],
[AC_MSG_CHECKING([for $1 with -mcx16])
CFLAGS="$CFLAGS -mcx16"
AC_LINK_IFELSE([$2],
[pmix_asm_check_func_happy=1
AC_MSG_RESULT([yes])],
[pmix_asm_check_func_happy=0
CFLAGS=$pmix_asm_check_func_CFLAGS_save
AC_MSG_RESULT([no])])
])
dnl If that didn't work, try again with LIBS+=-latomic
AS_IF([test $pmix_asm_check_func_happy -eq 0],
[AC_MSG_CHECKING([for $1 with -latomic])
LIBS="$LIBS -latomic"
AC_LINK_IFELSE([$2],
[pmix_asm_check_func_happy=1
AC_MSG_RESULT([yes])],
[pmix_asm_check_func_happy=0
LIBS=$pmix_asm_check_func_LIBS_save
AC_MSG_RESULT([no])])
])
dnl If we have it, try it and make sure it gives a correct result.
dnl As of Aug 2018, we know that it links but does *not* work on clang
dnl 6 on ARM64.
AS_IF([test $pmix_asm_check_func_happy -eq 1],
[AC_MSG_CHECKING([if $1() gives correct results])
AC_RUN_IFELSE([$2],
[AC_MSG_RESULT([yes])],
[pmix_asm_check_func_happy=0
AC_MSG_RESULT([no])],
[AC_MSG_RESULT([cannot test -- assume yes (cross compiling)])])
])
dnl If we were unsuccessful, restore CFLAGS/LIBS
AS_IF([test $pmix_asm_check_func_happy -eq 0],
[CFLAGS=$pmix_asm_check_func_CFLAGS_save
LIBS=$pmix_asm_check_func_LIBS_save])
dnl Run the user actions
AS_IF([test $pmix_asm_check_func_happy -eq 1], [$3], [$4])
PMIX_VAR_SCOPE_POP
])
dnl ------------------------------------------------------------------
AC_DEFUN([PMIX_CHECK_SYNC_BUILTIN_CSWAP_INT128], [
PMIX_VAR_SCOPE_PUSH([sync_bool_compare_and_swap_128_result])
PMIX_VAR_SCOPE_PUSH([sync_bool_compare_and_swap_128_result CFLAGS_save])
# Do we have __sync_bool_compare_and_swap?
# Use a special macro because we need to check with a few different
# CFLAGS/LIBS.
PMIX_ASM_CHECK_ATOMIC_FUNC([__sync_bool_compare_and_swap],
[AC_LANG_SOURCE(PMIX_SYNC_BOOL_COMPARE_AND_SWAP_TEST_SOURCE)],
[sync_bool_compare_and_swap_128_result=1],
[sync_bool_compare_and_swap_128_result=0])
AC_ARG_ENABLE([cross-cmpset128],[AC_HELP_STRING([--enable-cross-cmpset128],
[enable the use of the __sync builtin atomic compare-and-swap 128 when cross compiling])])
sync_bool_compare_and_swap_128_result=0
if test ! "$enable_cross_cmpset128" = "yes" ; then
AC_MSG_CHECKING([for processor support of __sync builtin atomic compare-and-swap on 128-bit values])
AC_RUN_IFELSE([AC_LANG_PROGRAM([], [__int128 x = 0; __sync_bool_compare_and_swap (&x, 0, 1);])],
[AC_MSG_RESULT([yes])
sync_bool_compare_and_swap_128_result=1],
[AC_MSG_RESULT([no])],
[AC_MSG_RESULT([no (cross compiling)])])
if test $sync_bool_compare_and_swap_128_result = 0 ; then
CFLAGS_save=$CFLAGS
CFLAGS="$CFLAGS -mcx16"
AC_MSG_CHECKING([for __sync builtin atomic compare-and-swap on 128-bit values with -mcx16 flag])
AC_RUN_IFELSE([AC_LANG_PROGRAM([], [__int128 x = 0; __sync_bool_compare_and_swap (&x, 0, 1);])],
[AC_MSG_RESULT([yes])
sync_bool_compare_and_swap_128_result=1
CFLAGS_save="$CFLAGS"],
[AC_MSG_RESULT([no])],
[AC_MSG_RESULT([no (cross compiling)])])
CFLAGS=$CFLAGS_save
fi
else
AC_MSG_CHECKING([for compiler support of __sync builtin atomic compare-and-swap on 128-bit values])
# Check if the compiler supports the __sync builtin
AC_TRY_LINK([], [__int128 x = 0; __sync_bool_compare_and_swap (&x, 0, 1);],
[AC_MSG_RESULT([yes])
sync_bool_compare_and_swap_128_result=1],
[AC_MSG_RESULT([no])])
if test $sync_bool_compare_and_swap_128_result = 0 ; then
CFLAGS_save=$CFLAGS
CFLAGS="$CFLAGS -mcx16"
AC_MSG_CHECKING([for __sync builtin atomic compare-and-swap on 128-bit values with -mcx16 flag])
AC_TRY_LINK([], [__int128 x = 0; __sync_bool_compare_and_swap (&x, 0, 1);],
[AC_MSG_RESULT([yes])
sync_bool_compare_and_swap_128_result=1
CFLAGS_save="$CFLAGS"],
[AC_MSG_RESULT([no])])
CFLAGS=$CFLAGS_save
fi
fi
AC_DEFINE_UNQUOTED([PMIX_HAVE_SYNC_BUILTIN_CSWAP_INT128], [$sync_bool_compare_and_swap_128_result],
[Whether the __sync builtin atomic compare and swap supports 128-bit values])
AC_DEFINE_UNQUOTED([PMIX_HAVE_SYNC_BUILTIN_CSWAP_INT128],
[$sync_bool_compare_and_swap_128_result],
[Whether the __sync builtin atomic compare and swap supports 128-bit values])
PMIX_VAR_SCOPE_POP
])
@ -112,7 +309,7 @@ __sync_add_and_fetch(&tmp, 1);],
pmix_asm_sync_have_64bit=0])
AC_DEFINE_UNQUOTED([PMIX_ASM_SYNC_HAVE_64BIT],[$pmix_asm_sync_have_64bit],
[Whether 64-bit is supported by the __sync builtin atomics])
[Whether 64-bit is supported by the __sync builtin atomics])
# Check for 128-bit support
PMIX_CHECK_SYNC_BUILTIN_CSWAP_INT128
@ -120,73 +317,110 @@ __sync_add_and_fetch(&tmp, 1);],
AC_DEFUN([PMIX_CHECK_GCC_BUILTIN_CSWAP_INT128], [
PMIX_VAR_SCOPE_PUSH([atomic_compare_exchange_n_128_result atomic_compare_exchange_n_128_CFLAGS_save atomic_compare_exchange_n_128_LIBS_save])
PMIX_VAR_SCOPE_PUSH([atomic_compare_exchange_n_128_result CFLAGS_save])
atomic_compare_exchange_n_128_CFLAGS_save=$CFLAGS
atomic_compare_exchange_n_128_LIBS_save=$LIBS
AC_ARG_ENABLE([cross-cmpset128],[AC_HELP_STRING([--enable-cross-cmpset128],
[enable the use of the __sync builtin atomic compare-and-swap 128 when cross compiling])])
# Do we have __sync_bool_compare_and_swap?
# Use a special macro because we need to check with a few different
# CFLAGS/LIBS.
PMIX_ASM_CHECK_ATOMIC_FUNC([__atomic_compare_exchange_n],
[AC_LANG_SOURCE(PMIX_ATOMIC_COMPARE_EXCHANGE_N_TEST_SOURCE)],
[atomic_compare_exchange_n_128_result=1],
[atomic_compare_exchange_n_128_result=0])
atomic_compare_exchange_n_128_result=0
if test ! "$enable_cross_cmpset128" = "yes" ; then
AC_MSG_CHECKING([for processor support of __atomic builtin atomic compare-and-swap on 128-bit values])
AC_RUN_IFELSE([AC_LANG_PROGRAM([], [__int128 x = 0, y = 0; __atomic_compare_exchange_n (&x, &y, 1, 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED);])],
[AC_MSG_RESULT([yes])
atomic_compare_exchange_n_128_result=1],
[AC_MSG_RESULT([no])],
[AC_MSG_RESULT([no (cross compiling)])])
if test $atomic_compare_exchange_n_128_result = 0 ; then
CFLAGS_save=$CFLAGS
CFLAGS="$CFLAGS -mcx16"
AC_MSG_CHECKING([for __atomic builtin atomic compare-and-swap on 128-bit values with -mcx16 flag])
AC_RUN_IFELSE([AC_LANG_PROGRAM([], [__int128 x = 0, y = 0; __atomic_compare_exchange_n (&x, &y, 1, 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED);])],
[AC_MSG_RESULT([yes])
atomic_compare_exchange_n_128_result=1
CFLAGS_save="$CFLAGS"],
[AC_MSG_RESULT([no])],
[AC_MSG_RESULT([no (cross compiling)])])
CFLAGS=$CFLAGS_save
fi
if test $atomic_compare_exchange_n_128_result = 1 ; then
AC_MSG_CHECKING([if __int128 atomic compare-and-swap is always lock-free])
AC_RUN_IFELSE([AC_LANG_PROGRAM([], [if (!__atomic_always_lock_free(16, 0)) { return 1; }])],
# If we have it and it works, check to make sure it is always lock
# free.
AS_IF([test $atomic_compare_exchange_n_128_result -eq 1],
[AC_MSG_CHECKING([if __int128 atomic compare-and-swap is always lock-free])
AC_RUN_IFELSE([AC_LANG_PROGRAM([], [if (!__atomic_always_lock_free(16, 0)) { return 1; }])],
[AC_MSG_RESULT([yes])],
[AC_MSG_RESULT([no])
PMIX_CHECK_SYNC_BUILTIN_CSWAP_INT128
atomic_compare_exchange_n_128_result=0],
[AC_MSG_RESULT([no (cross compiling)])])
fi
else
AC_MSG_CHECKING([for compiler support of __atomic builtin atomic compare-and-swap on 128-bit values])
[atomic_compare_exchange_n_128_result=0
# If this test fails, need to reset CFLAGS/LIBS (the
# above tests atomically set CFLAGS/LIBS or not; this
# test is running after the fact, so we have to undo
# the side-effects of setting CFLAGS/LIBS if the above
# tests passed).
CFLAGS=$atomic_compare_exchange_n_128_CFLAGS_save
LIBS=$atomic_compare_exchange_n_128_LIBS_save
AC_MSG_RESULT([no])],
[AC_MSG_RESULT([cannot test -- assume yes (cross compiling)])])
])
# Check if the compiler supports the __atomic builtin
AC_TRY_LINK([], [__int128 x = 0, y = 0; __atomic_compare_exchange_n (&x, &y, 1, 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED);],
[AC_MSG_RESULT([yes])
atomic_compare_exchange_n_128_result=1],
[AC_MSG_RESULT([no])])
AC_DEFINE_UNQUOTED([PMIX_HAVE_GCC_BUILTIN_CSWAP_INT128],
[$atomic_compare_exchange_n_128_result],
[Whether the __atomic builtin atomic compare swap is both supported and lock-free on 128-bit values])
if test $atomic_compare_exchange_n_128_result = 0 ; then
CFLAGS_save=$CFLAGS
CFLAGS="$CFLAGS -mcx16"
dnl If we could not find decent support for 128-bits __atomic let's
dnl try the GCC _sync
AS_IF([test $atomic_compare_exchange_n_128_result -eq 0],
[PMIX_CHECK_SYNC_BUILTIN_CSWAP_INT128])
AC_MSG_CHECKING([for __atomic builtin atomic compare-and-swap on 128-bit values with -mcx16 flag])
AC_TRY_LINK([], [__int128 x = 0, y = 0; __atomic_compare_exchange_n (&x, &y, 1, 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED);],
[AC_MSG_RESULT([yes])
atomic_compare_exchange_n_128_result=1
CFLAGS_save="$CFLAGS"],
[AC_MSG_RESULT([no])])
PMIX_VAR_SCOPE_POP
])
CFLAGS=$CFLAGS_save
fi
fi
AC_DEFUN([PMIX_CHECK_GCC_ATOMIC_BUILTINS], [
AC_MSG_CHECKING([for __atomic builtin atomics])
AC_DEFINE_UNQUOTED([PMIX_HAVE_GCC_BUILTIN_CSWAP_INT128], [$atomic_compare_exchange_n_128_result],
[Whether the __atomic builtin atomic compare and swap is lock-free on 128-bit values])
AC_TRY_LINK([
#include <stdint.h>
uint32_t tmp, old = 0;
uint64_t tmp64, old64 = 0;], [
__atomic_thread_fence(__ATOMIC_SEQ_CST);
__atomic_compare_exchange_n(&tmp, &old, 1, 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED);
__atomic_add_fetch(&tmp, 1, __ATOMIC_RELAXED);
__atomic_compare_exchange_n(&tmp64, &old64, 1, 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED);
__atomic_add_fetch(&tmp64, 1, __ATOMIC_RELAXED);],
[AC_MSG_RESULT([yes])
$1],
[AC_MSG_RESULT([no])
$2])
# Check for 128-bit support
PMIX_CHECK_GCC_BUILTIN_CSWAP_INT128
])
AC_DEFUN([PMIX_CHECK_C11_CSWAP_INT128], [
PMIX_VAR_SCOPE_PUSH([atomic_compare_exchange_result atomic_compare_exchange_CFLAGS_save atomic_compare_exchange_LIBS_save])
atomic_compare_exchange_CFLAGS_save=$CFLAGS
atomic_compare_exchange_LIBS_save=$LIBS
# Do we have C11 atomics on 128-bit integers?
# Use a special macro because we need to check with a few different
# CFLAGS/LIBS.
PMIX_ASM_CHECK_ATOMIC_FUNC([atomic_compare_exchange_strong_16],
[AC_LANG_SOURCE(PMIX_ATOMIC_COMPARE_EXCHANGE_STRONG_TEST_SOURCE)],
[atomic_compare_exchange_result=1],
[atomic_compare_exchange_result=0])
# If we have it and it works, check to make sure it is always lock
# free.
AS_IF([test $atomic_compare_exchange_result -eq 1],
[AC_MSG_CHECKING([if C11 __int128 atomic compare-and-swap is always lock-free])
AC_RUN_IFELSE([AC_LANG_PROGRAM([#include <stdatomic.h>], [_Atomic __int128_t x; if (!atomic_is_lock_free(&x)) { return 1; }])],
[AC_MSG_RESULT([yes])],
[atomic_compare_exchange_result=0
# If this test fails, need to reset CFLAGS/LIBS (the
# above tests atomically set CFLAGS/LIBS or not; this
# test is running after the fact, so we have to undo
# the side-effects of setting CFLAGS/LIBS if the above
# tests passed).
CFLAGS=$atomic_compare_exchange_CFLAGS_save
LIBS=$atomic_compare_exchange_LIBS_save
AC_MSG_RESULT([no])],
[AC_MSG_RESULT([cannot test -- assume yes (cross compiling)])])
])
AC_DEFINE_UNQUOTED([PMIX_HAVE_C11_CSWAP_INT128],
[$atomic_compare_exchange_result],
[Whether C11 atomic compare swap is both supported and lock-free on 128-bit values])
dnl If we could not find decent support for 128-bits atomic let's
dnl try the GCC _sync
AS_IF([test $atomic_compare_exchange_result -eq 0],
[PMIX_CHECK_SYNC_BUILTIN_CSWAP_INT128])
PMIX_VAR_SCOPE_POP
])
@ -533,7 +767,7 @@ dnl PMIX_CHECK_ASM_TYPE
dnl
dnl Sets PMIX_ASM_TYPE to the prefix for the function type to
dnl set a symbol's type as function (needed on ELF for shared
dnl libaries). If no .type directive is needed, sets PMIX_ASM_TYPE
dnl libraries). If no .type directive is needed, sets PMIX_ASM_TYPE
dnl to an empty string
dnl
dnl We look for @ \# %
@ -727,7 +961,7 @@ AC_DEFUN([PMIX_CHECK_SPARCV8PLUS],[
AC_MSG_CHECKING([if have Sparc v8+/v9 support])
sparc_result=0
PMIX_TRY_ASSEMBLE([$pmix_cv_asm_text
casa [%o0] 0x80, %o1, %o2],
casa [%o0] 0x80, %o1, %o2],
[sparc_result=1],
[sparc_result=0])
if test "$sparc_result" = "1" ; then
@ -746,35 +980,8 @@ dnl
dnl PMIX_CHECK_CMPXCHG16B
dnl
dnl #################################################################
AC_DEFUN([PMIX_CHECK_CMPXCHG16B],[
PMIX_VAR_SCOPE_PUSH([cmpxchg16b_result])
AC_ARG_ENABLE([cross-cmpxchg16b],[AC_HELP_STRING([--enable-cross-cmpxchg16b],
[enable the use of the cmpxchg16b instruction when cross compiling])])
if test ! "$enable_cross_cmpxchg16b" = "yes" ; then
AC_MSG_CHECKING([if processor supports x86_64 16-byte compare-and-exchange])
AC_RUN_IFELSE([AC_LANG_PROGRAM([[unsigned char tmp[16];]],[[
__asm__ __volatile__ ("lock cmpxchg16b (%%rsi)" : : "S" (tmp) : "memory", "cc");]])],
[AC_MSG_RESULT([yes])
cmpxchg16b_result=1],
[AC_MSG_RESULT([no])
cmpxchg16b_result=0],
[AC_MSG_RESULT([no (cross-compiling)])
cmpxchg16b_result=0])
else
AC_MSG_CHECKING([if assembler supports x86_64 16-byte compare-and-exchange])
PMIX_TRY_ASSEMBLE([$pmix_cv_asm_text
cmpxchg16b 0],
[AC_MSG_RESULT([yes])
cmpxchg16b_result=1],
[AC_MSG_RESULT([no])
cmpxchg16b_result=0])
fi
if test "$cmpxchg16b_result" = 1; then
AC_MSG_CHECKING([if compiler correctly handles volatile 128bits])
AC_RUN_IFELSE([AC_LANG_PROGRAM([#include <stdint.h>
AC_DEFUN([PMIX_CMPXCHG16B_TEST_SOURCE],[[
#include <stdint.h>
#include <assert.h>
union pmix_counted_pointer_t {
@ -788,8 +995,10 @@ union pmix_counted_pointer_t {
int128_t value;
#endif
};
typedef union pmix_counted_pointer_t pmix_counted_pointer_t;],
[volatile pmix_counted_pointer_t a;
typedef union pmix_counted_pointer_t pmix_counted_pointer_t;
int main(int argc, char** argv) {
volatile pmix_counted_pointer_t a;
pmix_counted_pointer_t b;
a.data.counter = 0;
@ -814,12 +1023,28 @@ typedef union pmix_counted_pointer_t pmix_counted_pointer_t;],
return (a.value != b.value);
#else
return 0;
#endif])],
[AC_MSG_RESULT([yes])],
[AC_MSG_RESULT([no])
cmpxchg16b_result=0],
[AC_MSG_RESULT([untested, assuming ok])])
fi
#endif
}
]])
AC_DEFUN([PMIX_CHECK_CMPXCHG16B],[
PMIX_VAR_SCOPE_PUSH([cmpxchg16b_result])
PMIX_ASM_CHECK_ATOMIC_FUNC([cmpxchg16b],
[AC_LANG_PROGRAM([[unsigned char tmp[16];]],
[[__asm__ __volatile__ ("lock cmpxchg16b (%%rsi)" : : "S" (tmp) : "memory", "cc");]])],
[cmpxchg16b_result=1],
[cmpxchg16b_result=0])
# If we have it, make sure it works.
AS_IF([test $cmpxchg16b_result -eq 1],
[AC_MSG_CHECKING([if cmpxchg16b_result works])
AC_RUN_IFELSE([AC_LANG_SOURCE(PMIX_CMPXCHG16B_TEST_SOURCE)],
[AC_MSG_RESULT([yes])],
[cmpxchg16b_result=0
AC_MSG_RESULT([no])],
[AC_MSG_RESULT([cannot test -- assume yes (cross compiling)])])
])
AC_DEFINE_UNQUOTED([PMIX_HAVE_CMPXCHG16B], [$cmpxchg16b_result],
[Whether the processor supports the cmpxchg16b instruction])
PMIX_VAR_SCOPE_POP
@ -832,7 +1057,7 @@ dnl
dnl Check if the compiler is capable of doing GCC-style inline
dnl assembly. Some compilers emit a warning and ignore the inline
dnl assembly (xlc on OS X) and compile without error. Therefore,
dnl the test attempts to run the emited code to check that the
dnl the test attempts to run the emitted code to check that the
dnl assembly is actually run. To run this test, one argument to
dnl the macro must be an assembly instruction in gcc format to move
dnl the value 0 into the register containing the variable ret.
@ -885,7 +1110,7 @@ return ret;
if test "$asm_result" = "yes" ; then
PMIX_C_GCC_INLINE_ASSEMBLY=1
pmix_cv_asm_inline_supported="yes"
pmix_cv_asm_inline_supported="yes"
else
PMIX_C_GCC_INLINE_ASSEMBLY=0
fi
@ -912,18 +1137,30 @@ AC_DEFUN([PMIX_CONFIG_ASM],[
AC_REQUIRE([PMIX_SETUP_CC])
AC_REQUIRE([AM_PROG_AS])
AC_ARG_ENABLE([c11-atomics],[AC_HELP_STRING([--enable-c11-atomics],
[Enable use of C11 atomics if available (default: enabled)])])
AC_ARG_ENABLE([builtin-atomics],
[AC_HELP_STRING([--enable-builtin-atomics],
[Enable use of __sync builtin atomics (default: enabled)])],
[], [enable_builtin_atomics="yes"])
[Enable use of __sync builtin atomics (default: disabled)])])
pmix_cv_asm_builtin="BUILTIN_NO"
AS_IF([test "$pmix_cv_asm_builtin" = "BUILTIN_NO" && test "$enable_builtin_atomics" != "no"],
[PMIX_CHECK_GCC_ATOMIC_BUILTINS([pmix_cv_asm_builtin="BUILTIN_GCC"], [])])
AS_IF([test "$pmix_cv_asm_builtin" = "BUILTIN_NO" && test "$enable_builtin_atomics" != "no"],
[PMIX_CHECK_SYNC_BUILTINS([pmix_cv_asm_builtin="BUILTIN_SYNC"], [])])
AS_IF([test "$pmix_cv_asm_builtin" = "BUILTIN_NO" && test "$enable_builtin_atomics" = "yes"],
[AC_MSG_WARN([__sync builtin atomics requested but not found - proceeding with inline atomics])])
PMIX_CHECK_C11_CSWAP_INT128
if test "x$enable_c11_atomics" != "xno" && test "$pmix_cv_c11_supported" = "yes" ; then
pmix_cv_asm_builtin="BUILTIN_C11"
PMIX_CHECK_C11_CSWAP_INT128
elif test "x$enable_c11_atomics" = "xyes"; then
AC_MSG_WARN([C11 atomics were requested but are not supported])
AC_MSG_ERROR([Cannot continue])
else
pmix_cv_asm_builtin="BUILTIN_NO"
AS_IF([test "$pmix_cv_asm_builtin" = "BUILTIN_NO" && test "$enable_builtin_atomics" = "yes"],
[PMIX_CHECK_GCC_ATOMIC_BUILTINS([pmix_cv_asm_builtin="BUILTIN_GCC"], [])])
AS_IF([test "$pmix_cv_asm_builtin" = "BUILTIN_NO" && test "$enable_builtin_atomics" = "yes"],
[PMIX_CHECK_SYNC_BUILTINS([pmix_cv_asm_builtin="BUILTIN_SYNC"], [])])
AS_IF([test "$pmix_cv_asm_builtin" = "BUILTIN_NO" && test "$enable_builtin_atomics" = "yes"],
[AC_MSG_ERROR([__sync builtin atomics requested but not found.])])
fi
PMIX_CHECK_ASM_PROC
PMIX_CHECK_ASM_TEXT
@ -960,9 +1197,9 @@ AC_DEFUN([PMIX_CONFIG_ASM],[
ia64-*)
pmix_cv_asm_arch="IA64"
PMIX_CHECK_SYNC_BUILTINS([pmix_cv_asm_builtin="BUILTIN_SYNC"],
[AC_MSG_ERROR([No atomic primitives available for $host])])
[AC_MSG_ERROR([No atomic primitives available for $host])])
;;
aarch64*)
aarch64*)
pmix_cv_asm_arch="ARM64"
PMIX_ASM_SUPPORT_64BIT=1
PMIX_ASM_ARM_VERSION=8
@ -994,7 +1231,7 @@ AC_DEFUN([PMIX_CONFIG_ASM],[
# uses Linux kernel helpers for some atomic operations
pmix_cv_asm_arch="ARM"
PMIX_CHECK_SYNC_BUILTINS([pmix_cv_asm_builtin="BUILTIN_SYNC"],
[AC_MSG_ERROR([No atomic primitives available for $host])])
[AC_MSG_ERROR([No atomic primitives available for $host])])
;;
mips-*|mips64*)
@ -1002,7 +1239,7 @@ AC_DEFUN([PMIX_CONFIG_ASM],[
# a MIPS III machine (r4000 and later)
pmix_cv_asm_arch="MIPS"
PMIX_CHECK_SYNC_BUILTINS([pmix_cv_asm_builtin="BUILTIN_SYNC"],
[AC_MSG_ERROR([No atomic primitives available for $host])])
[AC_MSG_ERROR([No atomic primitives available for $host])])
;;
powerpc-*|powerpc64-*|powerpcle-*|powerpc64le-*|rs6000-*|ppc-*)
@ -1070,11 +1307,11 @@ AC_MSG_ERROR([Can not continue.])
;;
esac
if test "x$PMIX_ASM_SUPPORT_64BIT" = "x1" && test "$pmix_cv_asm_builtin" = "BUILTIN_SYNC" &&
test "$pmix_asm_sync_have_64bit" = "0" ; then
# __sync builtins exist but do not implement 64-bit support. Fall back on inline asm.
pmix_cv_asm_builtin="BUILTIN_NO"
fi
if test "x$PMIX_ASM_SUPPORT_64BIT" = "x1" && test "$pmix_cv_asm_builtin" = "BUILTIN_SYNC" &&
test "$pmix_asm_sync_have_64bit" = "0" ; then
# __sync builtins exist but do not implement 64-bit support. Fall back on inline asm.
pmix_cv_asm_builtin="BUILTIN_NO"
fi
if test "$pmix_cv_asm_builtin" = "BUILTIN_SYNC" || test "$pmix_cv_asm_builtin" = "BUILTIN_GCC" ; then
AC_DEFINE([PMIX_C_GCC_INLINE_ASSEMBLY], [1],
@ -1097,7 +1334,7 @@ AC_MSG_ERROR([Can not continue.])
;;
esac
pmix_cv_asm_inline_supported="no"
pmix_cv_asm_inline_supported="no"
# now that we know our architecture, try to inline assemble
PMIX_CHECK_INLINE_C_GCC([$PMIX_GCC_INLINE_ASSIGN])


@ -10,7 +10,7 @@ dnl University of Stuttgart. All rights reserved.
dnl Copyright (c) 2004-2005 The Regents of the University of California.
dnl All rights reserved.
dnl Copyright (c) 2012 Cisco Systems, Inc. All rights reserved.
dnl Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2014-2017 Intel, Inc. All rights reserved.
dnl Copyright (c) 2014-2016 Research Organization for Information Science
dnl and Technology (RIST). All rights reserved.
dnl $COPYRIGHT$


@ -11,7 +11,7 @@ dnl University of Stuttgart. All rights reserved.
dnl Copyright (c) 2004-2005 The Regents of the University of California.
dnl All rights reserved.
dnl Copyright (c) 2012-2016 Cisco Systems, Inc. All rights reserved.
dnl Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2014 Intel, Inc. All rights reserved.
dnl Copyright (c) 2015 Research Organization for Information Science
dnl and Technology (RIST). All rights reserved.
dnl $COPYRIGHT$


@ -10,7 +10,7 @@ dnl Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
dnl University of Stuttgart. All rights reserved.
dnl Copyright (c) 2004-2005 The Regents of the University of California.
dnl All rights reserved.
dnl Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2014-2016 Intel, Inc. All rights reserved.
dnl Copyright (c) 2015 Cisco Systems, Inc. All rights reserved.
dnl $COPYRIGHT$
dnl


@ -11,7 +11,7 @@ dnl Copyright (c) 2004-2005 The Regents of the University of California.
dnl All rights reserved.
dnl Copyright (c) 2010 Cisco Systems, Inc. All rights reserved.
dnl Copyright (c) 2009-2011 Oak Ridge National Labs. All rights reserved.
dnl Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2014-2017 Intel, Inc. All rights reserved.
dnl Copyright (c) 2015 Research Organization for Information Science
dnl and Technology (RIST). All rights reserved.
dnl $COPYRIGHT$


@ -12,7 +12,7 @@ dnl Copyright (c) 2004-2005 The Regents of the University of California.
dnl All rights reserved.
dnl Copyright (c) 2007-2009 Sun Microsystems, Inc. All rights reserved.
dnl Copyright (c) 2008-2015 Cisco Systems, Inc. All rights reserved.
dnl Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2013 Intel, Inc. All rights reserved
dnl $COPYRIGHT$
dnl
dnl Additional copyrights may follow


@ -13,7 +13,7 @@ dnl All rights reserved.
dnl Copyright (c) 2007 Sun Microsystems, Inc. All rights reserved.
dnl Copyright (c) 2009 Oak Ridge National Labs. All rights reserved.
dnl Copyright (c) 2009-2016 Cisco Systems, Inc. All rights reserved.
dnl Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2013-2017 Intel, Inc. All rights reserved.
dnl Copyright (c) 2017 Research Organization for Information Science
dnl and Technology (RIST). All rights reserved.
dnl


@ -11,7 +11,7 @@
# Copyright (c) 2004-2005 The Regents of the University of California.
# All rights reserved.
# Copyright (c) 2008-2015 Cisco Systems, Inc. All rights reserved.
# Copyright (c) 2015-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2015 Intel, Inc. All rights reserved
# $COPYRIGHT$
#
# Additional copyrights may follow


@ -10,7 +10,7 @@ dnl Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
dnl University of Stuttgart. All rights reserved.
dnl Copyright (c) 2004-2005 The Regents of the University of California.
dnl All rights reserved.
dnl Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2014-2015 Intel, Inc. All rights reserved.
dnl Copyright (c) 2015 Research Organization for Information Science
dnl and Technology (RIST). All rights reserved.
dnl $COPYRIGHT$


@ -2,7 +2,6 @@
#
# Copyright (c) 2010 Sandia National Laboratories. All rights reserved.
#
# Copyright (c) 2018 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow


@ -1,7 +1,7 @@
dnl -*- shell-script -*-
dnl
dnl Copyright (c) 2013-2014 Cisco Systems, Inc. All rights reserved.
dnl Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2014 Intel, Inc. All rights reserved.
dnl $COPYRIGHT$
dnl
dnl Additional copyrights may follow


@ -1,7 +1,7 @@
# -*- shell-script -*-
#
# Copyright (c) 2014 Cisco Systems, Inc. All rights reserved.
# Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2014-2017 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow


@ -2,7 +2,7 @@
#
# Copyright (c) 2009-2015 Cisco Systems, Inc. All rights reserved.
# Copyright (c) 2013 Los Alamos National Security, LLC. All rights reserved.
# Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2013-2017 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow


@ -9,7 +9,7 @@ dnl Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
dnl University of Stuttgart. All rights reserved.
dnl Copyright (c) 2004-2005 The Regents of the University of California.
dnl All rights reserved.
dnl Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
dnl Copyright (c) 2014-2017 Intel, Inc. All rights reserved.
dnl $COPYRIGHT$
dnl
dnl Additional copyrights may follow


@ -14,7 +14,7 @@
# Copyright (c) 2010-2011 Oak Ridge National Labs. All rights reserved.
# Copyright (c) 2013-2016 Los Alamos National Security, Inc. All rights
# reserved.
# Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2013-2016 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow


@ -10,8 +10,8 @@
# University of Stuttgart. All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
# All rights reserved.
# Copyright (c) 2008-2013 Cisco Systems, Inc. All rights reserved.
# Copyright (c) 2015-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2008-2018 Cisco Systems, Inc. All rights reserved
# Copyright (c) 2015 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow
@ -270,7 +270,7 @@ make_tarball() {
#
echo "*** Running autogen $autogen_args..."
rm -f success
(./autogen.sh $autogen_args 2>&1 && touch success) | tee auto.out
(./autogen.pl $autogen_args 2>&1 && touch success) | tee auto.out
if test ! -f success; then
echo "Autogen failed. Aborting"
exit 1


@ -10,7 +10,7 @@
# University of Stuttgart. All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
# All rights reserved.
# Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2013-2016 Intel, Inc. All rights reserved
# Copyright (c) 2016 Cisco Systems, Inc. All rights reserved.
# $COPYRIGHT$
#


@ -1,7 +1,6 @@
/*
* Copyright (c) 2016 Mellanox Technologies, Inc.
* All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow


@ -1,7 +1,6 @@
/*
* Copyright (c) 2016 Mellanox Technologies, Inc.
* All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow


@ -3,7 +3,7 @@
*
* Copyright (c) 2013 Mellanox Technologies, Inc.
* All rights reserved.
* Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2014-2016 Intel, Inc. All rights reserved.
* $COPYRIGHT$
* Additional copyrights may follow
*


@ -3,7 +3,7 @@
*
* Copyright (c) 2013 Mellanox Technologies, Inc.
* All rights reserved.
* Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2014-2016 Intel, Inc. All rights reserved.
* $COPYRIGHT$
* Additional copyrights may follow
*


@ -2,7 +2,7 @@
/*
* Copyright (c) 2012-2015 Los Alamos National Security, LLC. All rights
* reserved.
* Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2014-2016 Intel, Inc. All rights reserved.
* Copyright (c) 2014-2016 Research Organization for Information Science
* and Technology (RIST). All rights reserved.
* Copyright (c) 2016 Mellanox Technologies, Inc.


@ -1,6 +1,6 @@
/*
* Copyright (c) 2015 Mellanox Technologies, Inc. All rights reserved.
* Copyright (c) 2016-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2016 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow


@ -1,6 +1,6 @@
/*
* Copyright (c) 2015 Mellanox Technologies, Inc. All rights reserved.
* Copyright (c) 2016-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2016 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow

14
opal/mca/pmix/pmix4x/pmix/contrib/pmix_jenkins.sh Executable file → Regular file

@ -195,19 +195,13 @@ if [ "$jenkins_test_build" = "yes" ]; then
tar zxf libevent-2.0.22-stable.tar.gz
cd libevent-2.0.22-stable
libevent_dir=$PWD/install
./autogen.sh && ./configure --prefix=$libevent_dir && make && make install
./autogen.pl && ./configure --prefix=$libevent_dir && make && make install
cd $WORKSPACE
if [ -x "autogen.sh" ]; then
autogen_script=./autogen.sh
else
autogen_script=./autogen.pl
fi
configure_args="--with-libevent=$libevent_dir"
# build pmix
$autogen_script
./autogen.pl
echo ./configure --prefix=$pmix_dir $configure_args | bash -xeE
make $make_opt install
jenkins_build_passed=1
@ -270,7 +264,7 @@ if [ "$jenkins_test_src_rpm" = "yes" ]; then
# check distclean
make $make_opt distclean
$autogen_script
./autogen.pl
echo ./configure --prefix=$pmix_dir $configure_args | bash -xeE || exit 11
if [ -x /usr/bin/dpkg-buildpackage ]; then
@ -316,7 +310,7 @@ if [ -n "$JENKINS_RUN_TESTS" -a "$JENKINS_RUN_TESTS" -ne "0" ]; then
rm -rf $run_tap
# build pmix
$autogen_script
./autogen.pl
echo ./configure --prefix=$pmix_dir $configure_args --disable-visibility | bash -xeE
make $make_opt install

2
opal/mca/pmix/pmix4x/pmix/contrib/update-my-copyright.pl Regular file → Executable file

@ -1,7 +1,7 @@
#!/usr/bin/env perl
#
# Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
# Copyright (c) 2016-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2016 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#

2
opal/mca/pmix/pmix4x/pmix/contrib/whitespace-purge.sh Regular file → Executable file

@ -1,6 +1,6 @@
#!/bin/bash
#
# Copyright (c) 2015-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2015 Intel, Inc. All rights reserved.
# Copyright (c) 2015 Los Alamos National Security, LLC. All rights
# reserved
# Copyright (c) 2015 Cisco Systems, Inc.


@ -10,7 +10,7 @@
# Copyright (c) 2004-2005 The Regents of the University of California.
# All rights reserved.
# Copyright (c) 2008 Cisco Systems, Inc. All rights reserved.
# Copyright (c) 2017-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2017 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow


@ -21,7 +21,7 @@
AM_CPPFLAGS = -I$(top_builddir)/src -I$(top_builddir)/src/include -I$(top_builddir)/include -I$(top_builddir)/include/pmix
noinst_PROGRAMS = client client2 dmodex dynamic fault pub pubi tool debugger debuggerd alloc jctrl
noinst_PROGRAMS = client client2 dmodex dynamic fault pub pubi tool debugger debuggerd alloc jctrl group asyncgroup
if !WANT_HIDDEN
# these examples use internal symbols
# use --disable-visibility
@ -76,6 +76,14 @@ tool_SOURCES = tool.c
tool_LDFLAGS = $(PMIX_PKG_CONFIG_LDFLAGS)
tool_LDADD = $(top_builddir)/src/libpmix.la
group_SOURCES = group.c
group_LDFLAGS = $(PMIX_PKG_CONFIG_LDFLAGS)
group_LDADD = $(top_builddir)/src/libpmix.la
asyncgroup_SOURCES = asyncgroup.c
asyncgroup_LDFLAGS = $(PMIX_PKG_CONFIG_LDFLAGS)
asyncgroup_LDADD = $(top_builddir)/src/libpmix.la
if !WANT_HIDDEN
server_SOURCES = server.c
server_LDFLAGS = $(PMIX_PKG_CONFIG_LDFLAGS)


@ -13,7 +13,7 @@
* All rights reserved.
* Copyright (c) 2009-2012 Cisco Systems, Inc. All rights reserved.
* Copyright (c) 2011 Oak Ridge National Labs. All rights reserved.
* Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2013-2017 Intel, Inc. All rights reserved.
* Copyright (c) 2015 Mellanox Technologies, Inc. All rights reserved.
* $COPYRIGHT$
*


@ -0,0 +1,310 @@
/*
* Copyright (c) 2004-2010 The Trustees of Indiana University and Indiana
* University Research and Technology
* Corporation. All rights reserved.
* Copyright (c) 2004-2011 The University of Tennessee and The University
* of Tennessee Research Foundation. All rights
* reserved.
* Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
* University of Stuttgart. All rights reserved.
* Copyright (c) 2004-2005 The Regents of the University of California.
* All rights reserved.
* Copyright (c) 2006-2013 Los Alamos National Security, LLC.
* All rights reserved.
* Copyright (c) 2009-2012 Cisco Systems, Inc. All rights reserved.
* Copyright (c) 2011 Oak Ridge National Labs. All rights reserved.
* Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2015 Mellanox Technologies, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
*
* $HEADER$
*
*/
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>
#include <pthread.h>
#include <pmix.h>
typedef struct {
pthread_mutex_t mutex;
pthread_cond_t cond;
volatile bool active;
pmix_status_t status;
} mylock_t;
#define DEBUG_CONSTRUCT_LOCK(l) \
do { \
pthread_mutex_init(&(l)->mutex, NULL); \
pthread_cond_init(&(l)->cond, NULL); \
(l)->active = true; \
(l)->status = PMIX_SUCCESS; \
} while(0)
#define DEBUG_DESTRUCT_LOCK(l) \
do { \
pthread_mutex_destroy(&(l)->mutex); \
pthread_cond_destroy(&(l)->cond); \
} while(0)
#define DEBUG_WAIT_THREAD(lck) \
do { \
pthread_mutex_lock(&(lck)->mutex); \
while ((lck)->active) { \
pthread_cond_wait(&(lck)->cond, &(lck)->mutex); \
} \
pthread_mutex_unlock(&(lck)->mutex); \
} while(0)
#define DEBUG_WAKEUP_THREAD(lck) \
do { \
pthread_mutex_lock(&(lck)->mutex); \
(lck)->active = false; \
pthread_cond_broadcast(&(lck)->cond); \
pthread_mutex_unlock(&(lck)->mutex); \
} while(0)
static pmix_proc_t myproc;
static mylock_t invitedlock;
static void notification_fn(size_t evhdlr_registration_id,
pmix_status_t status,
const pmix_proc_t *source,
pmix_info_t info[], size_t ninfo,
pmix_info_t results[], size_t nresults,
pmix_event_notification_cbfunc_fn_t cbfunc,
void *cbdata)
{
fprintf(stderr, "Client %s:%d NOTIFIED with status %d\n", myproc.nspace, myproc.rank, status);
if (NULL != cbfunc) {
cbfunc(PMIX_EVENT_ACTION_COMPLETE, NULL, 0, NULL, NULL, cbdata);
}
}
static void op_callbk(pmix_status_t status,
void *cbdata)
{
mylock_t *lock = (mylock_t*)cbdata;
lock->status = status;
DEBUG_WAKEUP_THREAD(lock);
}
static void errhandler_reg_callbk(pmix_status_t status,
size_t errhandler_ref,
void *cbdata)
{
mylock_t *lock = (mylock_t*)cbdata;
lock->status = status;
DEBUG_WAKEUP_THREAD(lock);
}
static void grpcomplete(pmix_status_t status,
pmix_info_t *info, size_t ninfo,
void *cbdata,
pmix_release_cbfunc_t release_fn,
void *release_cbdata)
{
fprintf(stderr, "%s:%d GRPCOMPLETE\n", myproc.nspace, myproc.rank);
DEBUG_WAKEUP_THREAD(&invitedlock);
}
static void invitefn(size_t evhdlr_registration_id,
pmix_status_t status,
const pmix_proc_t *source,
pmix_info_t info[], size_t ninfo,
pmix_info_t results[], size_t nresults,
pmix_event_notification_cbfunc_fn_t cbfunc,
void *cbdata)
{
size_t n;
char *grp;
pmix_status_t rc;
/* if I am the leader, I can ignore this event */
if (PMIX_CHECK_PROCID(source, &myproc)) {
fprintf(stderr, "%s:%d INVITED, BUT LEADER\n", myproc.nspace, myproc.rank);
/* mark the event chain as complete */
if (NULL != cbfunc) {
cbfunc(PMIX_EVENT_ACTION_COMPLETE, NULL, 0, NULL, NULL, cbdata);
}
return;
}
/* search for grp id */
for (n=0; n < ninfo; n++) {
if (PMIX_CHECK_KEY(&info[n], PMIX_GROUP_ID)) {
grp = info[n].value.data.string;
break;
}
}
fprintf(stderr, "Client %s:%d INVITED by source %s:%d\n",
myproc.nspace, myproc.rank,
source->nspace, source->rank);
invitedlock.status = status;
fprintf(stderr, "%s:%d ACCEPTING INVITE\n", myproc.nspace, myproc.rank);
rc = PMIx_Group_join_nb(grp, source, PMIX_GROUP_ACCEPT, NULL, 0, grpcomplete, NULL);
if (PMIX_SUCCESS != rc) {
fprintf(stderr, "%s:%d Error in Group_join_nb: %sn", myproc.nspace, myproc.rank, PMIx_Error_string(rc));
}
/* mark the event chain as complete */
if (NULL != cbfunc) {
cbfunc(PMIX_EVENT_ACTION_COMPLETE, NULL, 0, NULL, NULL, cbdata);
}
}
int main(int argc, char **argv)
{
int rc;
pmix_value_t value;
pmix_value_t *val = &value;
pmix_proc_t proc, *procs;
uint32_t nprocs;
mylock_t lock;
pmix_status_t code;
pmix_info_t *results;
size_t nresults;
/* init us */
if (PMIX_SUCCESS != (rc = PMIx_Init(&myproc, NULL, 0))) {
fprintf(stderr, "Client ns %s rank %d: PMIx_Init failed: %s\n", myproc.nspace, myproc.rank, PMIx_Error_string(rc));
exit(0);
}
fprintf(stderr, "[%d] Client ns %s rank %d: Running\n", (int)getpid(), myproc.nspace, myproc.rank);
DEBUG_CONSTRUCT_LOCK(&invitedlock);
PMIX_PROC_CONSTRUCT(&proc);
(void)strncpy(proc.nspace, myproc.nspace, PMIX_MAX_NSLEN);
proc.rank = PMIX_RANK_WILDCARD;
/* get our universe size */
if (PMIX_SUCCESS != (rc = PMIx_Get(&proc, PMIX_UNIV_SIZE, NULL, 0, &val))) {
fprintf(stderr, "Client ns %s rank %d: PMIx_Get universe size failed: %s\n", myproc.nspace, myproc.rank, PMIx_Error_string(rc));
goto done;
}
nprocs = val->data.uint32;
PMIX_VALUE_RELEASE(val);
if (nprocs < 4) {
if (0 == myproc.rank) {
fprintf(stderr, "This example requires a minimum of 4 processes\n");
}
goto done;
}
fprintf(stderr, "Client %s:%d universe size %d\n", myproc.nspace, myproc.rank, nprocs);
/* register our default errhandler */
DEBUG_CONSTRUCT_LOCK(&lock);
PMIx_Register_event_handler(NULL, 0, NULL, 0,
notification_fn, errhandler_reg_callbk, (void*)&lock);
DEBUG_WAIT_THREAD(&lock);
rc = lock.status;
DEBUG_DESTRUCT_LOCK(&lock);
if (PMIX_SUCCESS != rc) {
goto done;
}
/* we need to register handlers for invitations */
DEBUG_CONSTRUCT_LOCK(&lock);
code = PMIX_GROUP_INVITED;
PMIx_Register_event_handler(&code, 1, NULL, 0,
invitefn, errhandler_reg_callbk, (void*)&lock);
DEBUG_WAIT_THREAD(&lock);
rc = lock.status;
DEBUG_DESTRUCT_LOCK(&lock);
if (PMIX_SUCCESS != rc) {
goto done;
}
/* call fence to sync */
PMIX_PROC_CONSTRUCT(&proc);
(void)strncpy(proc.nspace, myproc.nspace, PMIX_MAX_NSLEN);
proc.rank = PMIX_RANK_WILDCARD;
if (PMIX_SUCCESS != (rc = PMIx_Fence(&proc, 1, NULL, 0))) {
fprintf(stderr, "Client ns %s rank %d: PMIx_Fence failed: %d\n", myproc.nspace, myproc.rank, rc);
goto done;
}
/* rank=0 constructs a new group */
if (0 == myproc.rank) {
fprintf(stderr, "%d executing Group_invite\n", myproc.rank);
nprocs = 3;
PMIX_PROC_CREATE(procs, nprocs);
PMIX_PROC_LOAD(&procs[0], myproc.nspace, 0);
PMIX_PROC_LOAD(&procs[1], myproc.nspace, 2);
PMIX_PROC_LOAD(&procs[2], myproc.nspace, 3);
rc = PMIx_Group_invite("ourgroup", procs, nprocs, NULL, 0, &results, &nresults);
if (PMIX_SUCCESS != rc) {
fprintf(stderr, "Client ns %s rank %d: PMIx_Group_invite failed: %s\n", myproc.nspace, myproc.rank, PMIx_Error_string(rc));
goto done;
}
PMIX_PROC_FREE(procs, nprocs);
fprintf(stderr, "%s:%d Execute fence across group\n", myproc.nspace, myproc.rank);
PMIX_PROC_LOAD(&proc, "ourgroup", PMIX_RANK_WILDCARD);
rc = PMIx_Fence(&proc, 1, NULL, 0);
if (PMIX_SUCCESS != rc) {
fprintf(stderr, "Client ns %s rank %d: PMIx_Fence across group failed: %d\n", myproc.nspace, myproc.rank, rc);
goto done;
}
fprintf(stderr, "%d executing Group_destruct\n", myproc.rank);
rc = PMIx_Group_destruct("ourgroup", NULL, 0);
if (PMIX_SUCCESS != rc) {
fprintf(stderr, "Client ns %s rank %d: PMIx_Group_destruct failed: %s\n", myproc.nspace, myproc.rank, PMIx_Error_string(rc));
goto done;
}
} else if (2 == myproc.rank || 3 == myproc.rank) {
/* wait to be invited */
fprintf(stderr, "%s:%d waiting to be invited\n", myproc.nspace, myproc.rank);
DEBUG_WAIT_THREAD(&invitedlock);
DEBUG_DESTRUCT_LOCK(&invitedlock);
fprintf(stderr, "%s:%d Execute fence across group\n", myproc.nspace, myproc.rank);
PMIX_PROC_LOAD(&proc, "ourgroup", PMIX_RANK_WILDCARD);
rc = PMIx_Fence(&proc, 1, NULL, 0);
if (PMIX_SUCCESS != rc) {
fprintf(stderr, "Client ns %s rank %d: PMIx_Fence across group failed: %d\n", myproc.nspace, myproc.rank, rc);
goto done;
}
fprintf(stderr, "%d executing Group_destruct\n", myproc.rank);
rc = PMIx_Group_destruct("ourgroup", NULL, 0);
if (PMIX_SUCCESS != rc) {
fprintf(stderr, "Client ns %s rank %d: PMIx_Group_destruct failed: %s\n", myproc.nspace, myproc.rank, PMIx_Error_string(rc));
goto done;
}
}
/* call fence to sync */
PMIX_PROC_CONSTRUCT(&proc);
(void)strncpy(proc.nspace, myproc.nspace, PMIX_MAX_NSLEN);
proc.rank = PMIX_RANK_WILDCARD;
if (PMIX_SUCCESS != (rc = PMIx_Fence(&proc, 1, NULL, 0))) {
fprintf(stderr, "Client ns %s rank %d: PMIx_Fence failed: %s\n", myproc.nspace, myproc.rank, PMIx_Error_string(rc));
goto done;
}
done:
/* finalize us */
DEBUG_CONSTRUCT_LOCK(&lock);
PMIx_Deregister_event_handler(1, op_callbk, &lock);
DEBUG_WAIT_THREAD(&lock);
DEBUG_DESTRUCT_LOCK(&lock);
fprintf(stderr, "Client ns %s rank %d: Finalizing\n", myproc.nspace, myproc.rank);
if (PMIX_SUCCESS != (rc = PMIx_Finalize(NULL, 0))) {
fprintf(stderr, "Client ns %s rank %d:PMIx_Finalize failed: %s\n", myproc.nspace, myproc.rank, PMIx_Error_string(rc));
} else {
fprintf(stderr, "Client ns %s rank %d:PMIx_Finalize successfully completed\n", myproc.nspace, myproc.rank);
}
fprintf(stderr, "%s:%d COMPLETE\n", myproc.nspace, myproc.rank);
fflush(stderr);
return(0);
}


@ -13,7 +13,7 @@
* All rights reserved.
* Copyright (c) 2009-2012 Cisco Systems, Inc. All rights reserved.
* Copyright (c) 2011 Oak Ridge National Labs. All rights reserved.
* Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2013-2017 Intel, Inc. All rights reserved.
* Copyright (c) 2015 Mellanox Technologies, Inc. All rights reserved.
* $COPYRIGHT$
*


@ -13,7 +13,7 @@
* All rights reserved.
* Copyright (c) 2009-2012 Cisco Systems, Inc. All rights reserved.
* Copyright (c) 2011 Oak Ridge National Labs. All rights reserved.
* Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2013-2017 Intel, Inc. All rights reserved.
* Copyright (c) 2015 Mellanox Technologies, Inc. All rights reserved.
* $COPYRIGHT$
*


@ -41,7 +41,6 @@ typedef struct {
} myquery_data_t;
static volatile bool waiting_for_debugger = true;
static pmix_proc_t myproc;
/* this is a callback function for the PMIx_Query


@ -13,7 +13,7 @@
* All rights reserved.
* Copyright (c) 2009-2012 Cisco Systems, Inc. All rights reserved.
* Copyright (c) 2011 Oak Ridge National Labs. All rights reserved.
* Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2013-2016 Intel, Inc. All rights reserved.
* Copyright (c) 2015 Mellanox Technologies, Inc. All rights reserved.
* $COPYRIGHT$
*


@ -13,7 +13,7 @@
* All rights reserved.
* Copyright (c) 2009-2012 Cisco Systems, Inc. All rights reserved.
* Copyright (c) 2011 Oak Ridge National Labs. All rights reserved.
* Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2013-2016 Intel, Inc. All rights reserved.
* Copyright (c) 2015 Mellanox Technologies, Inc. All rights reserved.
* $COPYRIGHT$
*

opal/mca/pmix/pmix4x/pmix/examples/group.c (new file, 214 lines)

@ -0,0 +1,214 @@
/*
* Copyright (c) 2004-2010 The Trustees of Indiana University and Indiana
* University Research and Technology
* Corporation. All rights reserved.
* Copyright (c) 2004-2011 The University of Tennessee and The University
* of Tennessee Research Foundation. All rights
* reserved.
* Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
* University of Stuttgart. All rights reserved.
* Copyright (c) 2004-2005 The Regents of the University of California.
* All rights reserved.
* Copyright (c) 2006-2013 Los Alamos National Security, LLC.
* All rights reserved.
* Copyright (c) 2009-2012 Cisco Systems, Inc. All rights reserved.
* Copyright (c) 2011 Oak Ridge National Labs. All rights reserved.
* Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2015 Mellanox Technologies, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
*
* $HEADER$
*
*/
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>
#include <pthread.h>
#include <pmix.h>
typedef struct {
pthread_mutex_t mutex;
pthread_cond_t cond;
volatile bool active;
pmix_status_t status;
} mylock_t;
#define DEBUG_CONSTRUCT_LOCK(l) \
do { \
pthread_mutex_init(&(l)->mutex, NULL); \
pthread_cond_init(&(l)->cond, NULL); \
(l)->active = true; \
(l)->status = PMIX_SUCCESS; \
} while(0)
#define DEBUG_DESTRUCT_LOCK(l) \
do { \
pthread_mutex_destroy(&(l)->mutex); \
pthread_cond_destroy(&(l)->cond); \
} while(0)
#define DEBUG_WAIT_THREAD(lck) \
do { \
pthread_mutex_lock(&(lck)->mutex); \
while ((lck)->active) { \
pthread_cond_wait(&(lck)->cond, &(lck)->mutex); \
} \
pthread_mutex_unlock(&(lck)->mutex); \
} while(0)
#define DEBUG_WAKEUP_THREAD(lck) \
do { \
pthread_mutex_lock(&(lck)->mutex); \
(lck)->active = false; \
pthread_cond_broadcast(&(lck)->cond); \
pthread_mutex_unlock(&(lck)->mutex); \
} while(0)
static pmix_proc_t myproc;
static void notification_fn(size_t evhdlr_registration_id,
pmix_status_t status,
const pmix_proc_t *source,
pmix_info_t info[], size_t ninfo,
pmix_info_t results[], size_t nresults,
pmix_event_notification_cbfunc_fn_t cbfunc,
void *cbdata)
{
fprintf(stderr, "Client %s:%d NOTIFIED with status %d\n", myproc.nspace, myproc.rank, status);
}
static void op_callbk(pmix_status_t status,
void *cbdata)
{
mylock_t *lock = (mylock_t*)cbdata;
fprintf(stderr, "Client %s:%d OP CALLBACK CALLED WITH STATUS %d\n", myproc.nspace, myproc.rank, status);
lock->status = status;
DEBUG_WAKEUP_THREAD(lock);
}
static void errhandler_reg_callbk(pmix_status_t status,
size_t errhandler_ref,
void *cbdata)
{
mylock_t *lock = (mylock_t*)cbdata;
fprintf(stderr, "Client %s:%d ERRHANDLER REGISTRATION CALLBACK CALLED WITH STATUS %d, ref=%lu\n",
myproc.nspace, myproc.rank, status, (unsigned long)errhandler_ref);
lock->status = status;
DEBUG_WAKEUP_THREAD(lock);
}
int main(int argc, char **argv)
{
int rc;
pmix_value_t value;
pmix_value_t *val = &value;
pmix_proc_t proc, *procs;
uint32_t nprocs;
mylock_t lock;
pmix_info_t *results, info;
size_t nresults, cid;
/* init us */
if (PMIX_SUCCESS != (rc = PMIx_Init(&myproc, NULL, 0))) {
fprintf(stderr, "Client ns %s rank %d: PMIx_Init failed: %s\n", myproc.nspace, myproc.rank, PMIx_Error_string(rc));
exit(0);
}
fprintf(stderr, "Client ns %s rank %d: Running\n", myproc.nspace, myproc.rank);
PMIX_PROC_CONSTRUCT(&proc);
(void)strncpy(proc.nspace, myproc.nspace, PMIX_MAX_NSLEN);
proc.rank = PMIX_RANK_WILDCARD;
/* get our universe size */
if (PMIX_SUCCESS != (rc = PMIx_Get(&proc, PMIX_UNIV_SIZE, NULL, 0, &val))) {
fprintf(stderr, "Client ns %s rank %d: PMIx_Get universe size failed: %s\n", myproc.nspace, myproc.rank, PMIx_Error_string(rc));
goto done;
}
nprocs = val->data.uint32;
PMIX_VALUE_RELEASE(val);
if (nprocs < 4) {
if (0 == myproc.rank) {
fprintf(stderr, "This example requires a minimum of 4 processes\n");
}
goto done;
}
fprintf(stderr, "Client %s:%d universe size %d\n", myproc.nspace, myproc.rank, nprocs);
/* register our default errhandler */
DEBUG_CONSTRUCT_LOCK(&lock);
PMIx_Register_event_handler(NULL, 0, NULL, 0,
notification_fn, errhandler_reg_callbk, (void*)&lock);
DEBUG_WAIT_THREAD(&lock);
rc = lock.status;
DEBUG_DESTRUCT_LOCK(&lock);
if (PMIX_SUCCESS != rc) {
goto done;
}
/* call fence to sync */
PMIX_PROC_CONSTRUCT(&proc);
(void)strncpy(proc.nspace, myproc.nspace, PMIX_MAX_NSLEN);
proc.rank = PMIX_RANK_WILDCARD;
if (PMIX_SUCCESS != (rc = PMIx_Fence(&proc, 1, NULL, 0))) {
fprintf(stderr, "Client ns %s rank %d: PMIx_Fence failed: %d\n", myproc.nspace, myproc.rank, rc);
goto done;
}
/* rank=0,2,3 construct a new group */
if (0 == myproc.rank || 2 == myproc.rank || 3 == myproc.rank) {
fprintf(stderr, "%d executing Group_construct\n", myproc.rank);
nprocs = 3;
PMIX_PROC_CREATE(procs, nprocs);
PMIX_PROC_LOAD(&procs[0], myproc.nspace, 0);
PMIX_PROC_LOAD(&procs[1], myproc.nspace, 2);
PMIX_PROC_LOAD(&procs[2], myproc.nspace, 3);
PMIX_INFO_LOAD(&info, PMIX_GROUP_ASSIGN_CONTEXT_ID, NULL, PMIX_BOOL);
rc = PMIx_Group_construct("ourgroup", procs, nprocs, &info, 1, &results, &nresults);
if (PMIX_SUCCESS != rc) {
fprintf(stderr, "Client ns %s rank %d: PMIx_Group_construct failed: %s\n", myproc.nspace, myproc.rank, PMIx_Error_string(rc));
goto done;
}
/* we should have a single results object */
if (NULL != results) {
cid = 0;
PMIX_VALUE_GET_NUMBER(rc, &results[0].value, cid, size_t);
fprintf(stderr, "%d Group construct complete with status %s KEY %s CID %d\n",
myproc.rank, PMIx_Error_string(rc), results[0].key, (int)cid);
} else {
fprintf(stderr, "%d Group construct complete, but no CID returned\n", myproc.rank);
}
PMIX_PROC_FREE(procs, nprocs);
fprintf(stderr, "%d executing Group_destruct\n", myproc.rank);
rc = PMIx_Group_destruct("ourgroup", NULL, 0);
if (PMIX_SUCCESS != rc) {
fprintf(stderr, "Client ns %s rank %d: PMIx_Group_destruct failed: %s\n", myproc.nspace, myproc.rank, PMIx_Error_string(rc));
goto done;
}
}
done:
/* finalize us */
DEBUG_CONSTRUCT_LOCK(&lock);
PMIx_Deregister_event_handler(1, op_callbk, &lock);
DEBUG_WAIT_THREAD(&lock);
DEBUG_DESTRUCT_LOCK(&lock);
fprintf(stderr, "Client ns %s rank %d: Finalizing\n", myproc.nspace, myproc.rank);
if (PMIX_SUCCESS != (rc = PMIx_Finalize(NULL, 0))) {
fprintf(stderr, "Client ns %s rank %d:PMIx_Finalize failed: %s\n", myproc.nspace, myproc.rank, PMIx_Error_string(rc));
} else {
fprintf(stderr, "Client ns %s rank %d:PMIx_Finalize successfully completed\n", myproc.nspace, myproc.rank);
}
fprintf(stderr, "%s:%d COMPLETE\n", myproc.nspace, myproc.rank);
fflush(stderr);
return(0);
}


@ -13,7 +13,7 @@
* All rights reserved.
* Copyright (c) 2009-2012 Cisco Systems, Inc. All rights reserved.
* Copyright (c) 2011 Oak Ridge National Labs. All rights reserved.
* Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2013-2017 Intel, Inc. All rights reserved.
* Copyright (c) 2015 Mellanox Technologies, Inc. All rights reserved.
* $COPYRIGHT$
*


@ -13,7 +13,7 @@
* All rights reserved.
* Copyright (c) 2009-2012 Cisco Systems, Inc. All rights reserved.
* Copyright (c) 2011 Oak Ridge National Labs. All rights reserved.
* Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2013-2016 Intel, Inc. All rights reserved.
* Copyright (c) 2015 Mellanox Technologies, Inc. All rights reserved.
* $COPYRIGHT$
*


@ -721,6 +721,244 @@ PMIX_EXPORT pmix_status_t PMIx_IOF_push(const pmix_proc_t targets[], size_t ntar
const pmix_info_t directives[], size_t ndirs,
pmix_op_cbfunc_t cbfunc, void *cbdata);
/* Construct a new group composed of the specified processes and identified with
* the provided group identifier. Both blocking and non-blocking versions
* are provided (the callback function for the non-blocking form will be called
* once all specified processes have joined the group). The group identifier is
* a user-defined, NULL-terminated character array of length less than or equal
* to PMIX_MAX_NSLEN. Only characters accepted by standard string comparison
* functions (e.g., strncmp) are supported.
*
* Processes may engage in multiple simultaneous group construct operations as
* desired so long as each is provided with a unique group ID. The info array
* can be used to pass user-level directives regarding timeout constraints and
* other options available from the PMIx server.
*
* The construct leader (if PMIX_GROUP_LEADER is provided) or all participants
* will receive events (if registered for the PMIX_GROUP_MEMBER_FAILED event)
* whenever a process fails or terminates prior to calling
* PMIx_Group_construct(_nb). The events will contain the identifier of the
* process that failed to join plus any other information that the resource
* manager provided. This provides an opportunity for the leader to react to
* the event, e.g., to invite an alternative member to the group or to decide
* to proceed with a smaller group. The decision to proceed with a smaller group
* is communicated to the PMIx library in the results array at the end of the
* event handler. This allows PMIx to properly adjust accounting for procedure
* completion. When construct is complete, the participating PMIx servers will
* be alerted to any change in participants and each group member will (if
* registered) receive a PMIX_GROUP_MEMBERSHIP_UPDATE event updating the group
* membership.
*
* Processes in a group under construction are not allowed to leave the group
* until group construction is complete. Upon completion of the construct
* procedure, each group member will have access to the job-level information
* of all nspaces represented in the group and the contact information for
* every group member.
*
* Failure of the leader at any time will cause a PMIX_GROUP_LEADER_FAILED event
* to be delivered to all participants so they can optionally declare a new leader.
* A new leader is identified by providing the PMIX_GROUP_LEADER attribute in
* the results array in the return of the event handler. Only one process is
* allowed to return that attribute, declaring itself as the new leader. Results
* of the leader selection will be communicated to all participants via a
* PMIX_GROUP_LEADER_SELECTED event identifying the new leader. If no leader
* was selected, then the status code provided in the event handler will provide
* an error value so the participants can take appropriate action.
*
* Any participant that returns PMIX_GROUP_CONSTRUCT_ABORT from the leader failed
* event handler will cause the construct process to abort. Those processes
* engaged in the blocking construct will return from the call with the
* PMIX_GROUP_CONSTRUCT_ABORT status. Non-blocking participants will have
* their callback function executed with that status.
*
* Some relevant attributes for this operation:
* PMIX_GROUP_LEADER - declare this process to be the leader of the construction
* procedure. If a process provides this attribute, then
* failure notification for any participating process will
* go only to that one process. In the absence of a
* declared leader, failure events go to all participants.
* PMIX_GROUP_OPTIONAL - participation is optional - do not return an error if
* any of the specified processes terminate
* without having joined (default=false)
* PMIX_GROUP_NOTIFY_TERMINATION - notify remaining members when another member
* terminates without first leaving the
* group (default=false)
* PMIX_GROUP_ASSIGN_CONTEXT_ID - requests that the RM assign a unique context
* ID (size_t) to the group. The value is returned
* in the PMIX_GROUP_CONSTRUCT_COMPLETE event
* PMIX_TIMEOUT - return an error if the group doesn't assemble within the
* specified number of seconds. Targets the scenario where a
* process fails to call PMIx_Group_connect due to hanging
*
* Recognizing
*/
PMIX_EXPORT pmix_status_t PMIx_Group_construct(const char grp[],
const pmix_proc_t procs[], size_t nprocs,
const pmix_info_t directives[], size_t ndirs,
pmix_info_t **results, size_t *nresults);
PMIX_EXPORT pmix_status_t PMIx_Group_construct_nb(const char grp[],
const pmix_proc_t procs[], size_t nprocs,
const pmix_info_t info[], size_t ninfo,
pmix_info_cbfunc_t cbfunc, void *cbdata);
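As an illustration of the blocking form described above, a minimal client-side sketch might look like the following. The group name, member ranks, helper-function name, and error handling are assumptions for illustration only and are not part of the PMIx headers or of this change:

#include <stdbool.h>
#include <stdio.h>
#include <pmix.h>

/* Sketch: construct a group of ranks 0-2 from the caller's own nspace,
 * declaring the caller the leader. Assumes PMIx_Init() has already
 * succeeded and "me" is the identifier it returned. */
static pmix_status_t construct_demo_group(const pmix_proc_t *me)
{
    pmix_proc_t members[3];
    pmix_info_t directive;
    pmix_info_t *results = NULL;
    size_t nresults = 0;
    bool leader = true;
    pmix_status_t rc;

    PMIX_PROC_LOAD(&members[0], me->nspace, 0);
    PMIX_PROC_LOAD(&members[1], me->nspace, 1);
    PMIX_PROC_LOAD(&members[2], me->nspace, 2);
    PMIX_INFO_LOAD(&directive, PMIX_GROUP_LEADER, &leader, PMIX_BOOL);

    rc = PMIx_Group_construct("demogrp", members, 3, &directive, 1, &results, &nresults);
    if (PMIX_SUCCESS != rc) {
        fprintf(stderr, "Group construct failed: %s\n", PMIx_Error_string(rc));
    }
    /* release any results (e.g., an assigned context ID) returned to us */
    if (NULL != results) {
        PMIX_INFO_FREE(results, nresults);
    }
    return rc;
}

The examples/group.c file added by this change exercises the same call, but drives it with PMIX_GROUP_ASSIGN_CONTEXT_ID instead of PMIX_GROUP_LEADER.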
/* Explicitly invite specified processes to join a group.
*
* Each invited process will be notified of the invitation via the PMIX_GROUP_INVITED
* event. The processes being invited must have registered for the PMIX_GROUP_INVITED
* event in order to be notified of the invitation. When ready to respond, each invited
* process provides a response using the appropriate form of PMIx_Group_join. This will
* notify the inviting process that the invitation was either accepted (via the
* PMIX_GROUP_INVITE_ACCEPTED event) or declined (via the PMIX_GROUP_INVITE_DECLINED event).
* The inviting process will also receive PMIX_GROUP_MEMBER_FAILED events whenever a
* process fails or terminates prior to responding to the invitation.
*
* Upon accepting the invitation, both the inviting and invited process will receive
* access to the job-level information of each other's nspaces and the contact
* information of the other process.
*
* Some relevant attributes for this operation:
* PMIX_GROUP_ASSIGN_CONTEXT_ID - requests that the RM assign a unique context
* ID (size_t) to the group. The value is returned
* in the PMIX_GROUP_CONSTRUCT_COMPLETE event
* PMIX_TIMEOUT (int): return an error if the group doesn't assemble within the
* specified number of seconds. Targets the scenario where a
* process fails to call PMIx_Group_connect due to hanging
*
* The inviting process is automatically considered the leader of the asynchronous
* group construction procedure and will receive all failure or termination events
* for invited members prior to completion. The inviting process is required to
* provide a PMIX_GROUP_CONSTRUCT_COMPLETE event once the group has been fully
* assembled; this event will be distributed to all participants along with the
* final membership.
*
* Failure of the leader at any time will cause a PMIX_GROUP_LEADER_FAILED event
* to be delivered to all participants so they can optionally declare a new leader.
* A new leader is identified by providing the PMIX_GROUP_LEADER attribute in
* the results array in the return of the event handler. Only one process is
* allowed to return that attribute, declaring itself as the new leader. Results
* of the leader selection will be communicated to all participants via a
* PMIX_GROUP_LEADER_SELECTED event identifying the new leader. If no leader
* was selected, then the status code provided in the event handler will provide
* an error value so the participants can take appropriate action.
*
* Any participant that returns PMIX_GROUP_CONSTRUCT_ABORT from the event
* handler will cause all participants to receive an event notifying them
* of that status.
*/
PMIX_EXPORT pmix_status_t PMIx_Group_invite(const char grp[],
const pmix_proc_t procs[], size_t nprocs,
const pmix_info_t info[], size_t ninfo,
pmix_info_t **results, size_t *nresult);
PMIX_EXPORT pmix_status_t PMIx_Group_invite_nb(const char grp[],
const pmix_proc_t procs[], size_t nprocs,
const pmix_info_t info[], size_t ninfo,
pmix_info_cbfunc_t cbfunc, void *cbdata);
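A corresponding sketch of the blocking invite, bounding the wait with PMIX_TIMEOUT, is shown below; the group name, invited ranks, timeout value, and helper name are illustrative assumptions, not part of this change:

#include <stdio.h>
#include <pmix.h>

/* Sketch: invite ranks 1 and 2 of the caller's nspace to "demogrp".
 * Assumes "me" is the caller's identifier returned by PMIx_Init(). */
static pmix_status_t invite_demo(const pmix_proc_t *me)
{
    pmix_proc_t invitees[2];
    pmix_info_t tmo;
    pmix_info_t *results = NULL;
    size_t nresults = 0;
    int seconds = 30;
    pmix_status_t rc;

    PMIX_PROC_LOAD(&invitees[0], me->nspace, 1);
    PMIX_PROC_LOAD(&invitees[1], me->nspace, 2);
    PMIX_INFO_LOAD(&tmo, PMIX_TIMEOUT, &seconds, PMIX_INT);

    rc = PMIx_Group_invite("demogrp", invitees, 2, &tmo, 1, &results, &nresults);
    if (PMIX_SUCCESS != rc) {
        fprintf(stderr, "Group invite failed: %s\n", PMIx_Error_string(rc));
    }
    return rc;
}

The invited processes must already have registered a handler for the PMIX_GROUP_INVITED event, as the example code earlier in this diff does.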
/* Respond to an invitation to join a group that is being asynchronously constructed.
*
* The process must have registered for the PMIX_GROUP_INVITED event in order to be
* notified of the invitation. When ready to respond, the process provides a response
* using the appropriate form of PMIx_Group_join.
*
* Critical Note: Since the process is alerted to the invitation in a PMIx event handler,
* the process must not use the blocking form of this call unless it first thread shifts
* out of the handler and into its own thread context. Likewise, while it is safe to call
* the non-blocking form of the API from the event handler, the process must not block
* in the handler while waiting for the callback function to be called.
*
* Calling this function causes the group leader to be notified that the process has
* either accepted or declined the request. The blocking form of the API will return
* once the group has been completely constructed or the group's construction has failed
* (as determined by the leader); likewise, the callback function of the non-blocking
* form will be executed upon the same conditions.
*
* Failure of the leader at any time will cause a PMIX_GROUP_LEADER_FAILED event
* to be delivered to all participants so they can optionally declare a new leader.
* A new leader is identified by providing the PMIX_GROUP_LEADER attribute in
* the results array in the return of the event handler. Only one process is
* allowed to return that attribute, declaring itself as the new leader. Results
* of the leader selection will be communicated to all participants via a
* PMIX_GROUP_LEADER_SELECTED event identifying the new leader. If no leader
* was selected, then the status code provided in the event handler will provide
* an error value so the participants can take appropriate action.
*
* Any participant that returns PMIX_GROUP_CONSTRUCT_ABORT from the leader failed
* event handler will cause all participants to receive an event notifying them
* of that status. Similarly, the leader may elect to abort the procedure
* by either returning PMIX_GROUP_CONSTRUCT_ABORT from the handler assigned
* to the PMIX_GROUP_INVITE_ACCEPTED or PMIX_GROUP_INVITE_DECLINED codes, or
* by generating an event for the abort code. Abort events will be sent to
* all invited participants.
*/
PMIX_EXPORT pmix_status_t PMIx_Group_join(const char grp[],
const pmix_proc_t *leader,
pmix_group_opt_t opt,
const pmix_info_t info[], size_t ninfo,
pmix_info_t **results, size_t *nresult);
PMIX_EXPORT pmix_status_t PMIx_Group_join_nb(const char grp[],
const pmix_proc_t *leader,
pmix_group_opt_t opt,
const pmix_info_t info[], size_t ninfo,
pmix_info_cbfunc_t cbfunc, void *cbdata);
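Given the threading caveat above, a minimal sketch of the blocking join, issued from the process's own thread context rather than from inside the event handler, could look like this (the group name and helper name are illustrative assumptions):

#include <stdio.h>
#include <pmix.h>

/* Sketch: accept an invitation to "demogrp". "leader" is the inviting
 * process's identifier, taken from the source of the PMIX_GROUP_INVITED
 * event and handed to this routine outside of the handler itself. */
static pmix_status_t accept_invitation(const pmix_proc_t *leader)
{
    pmix_info_t *results = NULL;
    size_t nresults = 0;
    pmix_status_t rc;

    rc = PMIx_Group_join("demogrp", leader, PMIX_GROUP_ACCEPT, NULL, 0, &results, &nresults);
    if (PMIX_SUCCESS != rc) {
        fprintf(stderr, "Group join failed: %s\n", PMIx_Error_string(rc));
    }
    return rc;
}

The invitefn handler at the top of this diff takes the other permitted route and calls PMIx_Group_join_nb directly from within the handler.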
/* Leave a PMIx Group. Calls to PMIx_Group_leave (or its non-blocking form) will cause
* a PMIX_GROUP_LEFT event to be generated notifying all members of the group of the
* caller's departure. The function will return (or the non-blocking function will
* execute the specified callback function) once the event has been locally generated
* and is not indicative of remote receipt. All PMIx-based collectives such as
* PMIx_Fence in action across the group will automatically be adjusted if the
* collective was called with the PMIX_GROUP_FT_COLLECTIVE attribute (default is
* false); otherwise, the standard error return behavior will be provided.
*
* Critical Note: The PMIx_Group_leave API is intended solely for asynchronous
* departures of individual processes from a group as it is not a scalable
* operation, i.e., when a process determines it should no longer be a part of a
* defined group, but the remainder of the group retains a valid reason to continue
* in existence. Developers are advised to use PMIx_Group_destruct (or its
* non-blocking form) for all other scenarios as it represents a more scalable
* operation.
*/
PMIX_EXPORT pmix_status_t PMIx_Group_leave(const char grp[],
const pmix_info_t info[], size_t ninfo);
PMIX_EXPORT pmix_status_t PMIx_Group_leave_nb(const char grp[],
const pmix_info_t info[], size_t ninfo,
pmix_op_cbfunc_t cbfunc, void *cbdata);
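A departure is a single call; a hedged sketch (group name assumed, includes as in the earlier sketches) follows:

/* Sketch: depart from "demogrp"; only the group name is required. */
pmix_status_t rc = PMIx_Group_leave("demogrp", NULL, 0);
if (PMIX_SUCCESS != rc) {
    fprintf(stderr, "Group leave failed: %s\n", PMIx_Error_string(rc));
}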
/* Destruct a group identified by the provided group identifier. Both blocking and
* non-blocking versions are provided (the callback function for the non-blocking
* form will be called once all members of the group have called destruct).
* Processes may engage in multiple simultaneous group destruct operations as
* desired so long as each involves a unique group ID. The info array can be used
* to pass user-level directives regarding timeout constraints and other options
* available from the PMIx server.
*
* Some relevant attributes for this operation:
*
* PMIX_TIMEOUT (int): return an error if the group doesn't destruct within the
* specified number of seconds. Targets the scenario where
* a process fails to call PMIx_Group_destruct due to hanging
*
* The destruct API will return an error if any group process fails or terminates
* prior to calling PMIx_Group_destruct or its non-blocking version unless the
* PMIX_GROUP_NOTIFY_TERMINATION attribute was provided (with a value of true) at
* time of group construction. If notification was requested, then an event will
* be delivered (using PMIX_GROUP_MEMBER_FAILED) for each process that fails to
* call destruct and the destruct tracker updated to account for the lack of
* participation. The PMIx_Group_destruct operation will subsequently return
* PMIX_SUCCESS when the remaining processes have all called destruct, i.e., the
* event will serve in place of return of an error.
*/
PMIX_EXPORT pmix_status_t PMIx_Group_destruct(const char grp[],
const pmix_info_t info[], size_t ninfo);
PMIX_EXPORT pmix_status_t PMIx_Group_destruct_nb(const char grp[],
const pmix_info_t info[], size_t ninfo,
pmix_op_cbfunc_t cbfunc, void *cbdata);
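For the collective destruct, a minimal sketch that bounds the wait with PMIX_TIMEOUT (group name and timeout value are illustrative, includes as in the earlier sketches) might be:

/* Sketch: collectively destruct "demogrp", failing after 10 seconds if
 * some member never calls destruct. */
pmix_info_t tmo;
int seconds = 10;
pmix_status_t rc;

PMIX_INFO_LOAD(&tmo, PMIX_TIMEOUT, &seconds, PMIX_INT);
rc = PMIx_Group_destruct("demogrp", &tmo, 1);
if (PMIX_SUCCESS != rc) {
    fprintf(stderr, "Group destruct failed: %s\n", PMIx_Error_string(rc));
}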
#if defined(c_plusplus) || defined(__cplusplus)
}


@ -4,7 +4,7 @@
* Copyright (c) 2016-2018 Research Organization for Information Science
* and Technology (RIST). All rights reserved.
* Copyright (c) 2016 IBM Corporation. All rights reserved.
* Copyright (c) 2016-2017 Mellanox Technologies, Inc.
* Copyright (c) 2016-2018 Mellanox Technologies, Inc.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -81,8 +81,8 @@ extern "C" {
/**** PMIX CONSTANTS ****/
/* define maximum value and key sizes */
#define PMIX_MAX_NSLEN 63
#define PMIX_MAX_KEYLEN 63
#define PMIX_MAX_NSLEN 255
#define PMIX_MAX_KEYLEN 511
/* define abstract types for namespaces and keys */
typedef char pmix_nspace_t[PMIX_MAX_NSLEN+1];
@ -106,6 +106,7 @@ typedef uint32_t pmix_rank_t;
/* other special rank values will be used to define
* groups of ranks for use in collectives */
#define PMIX_RANK_LOCAL_NODE UINT32_MAX-2 // all ranks on local node
#define PMIX_RANK_LOCAL_PEERS UINT32_MAX-4 // all peers (i.e., all procs within the same nspace) on local node
/* define an invalid value */
#define PMIX_RANK_INVALID UINT32_MAX-3
@ -179,6 +180,8 @@ typedef uint32_t pmix_rank_t;
#define PMIX_VERSION_INFO "pmix.version" // (char*) PMIx version of contactor
#define PMIX_REQUESTOR_IS_TOOL "pmix.req.tool" // (bool) requesting process is a tool
#define PMIX_REQUESTOR_IS_CLIENT "pmix.req.client" // (bool) requesting process is a client process
#define PMIX_PSET_NAME "pmix.pset.nm" // (char*) user-assigned name for the process
// set containing the given process
/* model attributes */
#define PMIX_PROGRAMMING_MODEL "pmix.pgm.model" // (char*) programming model being initialized (e.g., "MPI" or "OpenMP")
@ -354,6 +357,8 @@ typedef uint32_t pmix_rank_t;
#define PMIX_EVENT_DO_NOT_CACHE "pmix.evnocache" // (bool) instruct the PMIx server not to cache the event
#define PMIX_EVENT_SILENT_TERMINATION "pmix.evsilentterm" // (bool) do not generate an event when this job normally terminates
#define PMIX_EVENT_PROXY "pmix.evproxy" // (pmix_proc_t*) PMIx server that sourced the event
#define PMIX_EVENT_TEXT_MESSAGE "pmix.evtext" // (char*) text message suitable for output by recipient - e.g., describing
// the cause of the event
/* fault tolerance-related events */
#define PMIX_EVENT_TERMINATE_SESSION "pmix.evterm.sess" // (bool) RM intends to terminate session
@ -410,8 +415,11 @@ typedef uint32_t pmix_rank_t;
#define PMIX_FWD_STDERR "pmix.fwd.stderr" // (bool) forward stderr from the spawned processes to this process (typically used by a tool)
#define PMIX_FWD_STDDIAG "pmix.fwd.stddiag" // (bool) if a diagnostic channel exists, forward any output on it
// from the spawned processes to this process (typically used by a tool)
#define PMIX_SPAWN_TOOL "pmix.spwn.tool" // (bool) job being spawned is a tool
/* query attributes */
#define PMIX_QUERY_REFRESH_CACHE "pmix.qry.rfsh" // (bool) retrieve updated information from server
// to update local cache
#define PMIX_QUERY_NAMESPACES "pmix.qry.ns" // (char*) request a comma-delimited list of active nspaces
#define PMIX_QUERY_JOB_STATUS "pmix.qry.jst" // (pmix_status_t) status of a specified currently executing job
#define PMIX_QUERY_QUEUE_LIST "pmix.qry.qlst" // (char*) request a comma-delimited list of scheduler queues
@ -432,6 +440,10 @@ typedef uint32_t pmix_rank_t;
// is being requested
#define PMIX_TIME_REMAINING "pmix.time.remaining" // (char*) query number of seconds (uint32_t) remaining in allocation
// for the specified nspace
#define PMIX_QUERY_NUM_PSETS "pmix.qry.psetnum" // (size_t) return the number of psets defined
// in the specified range (defaults to session)
#define PMIX_QUERY_PSET_NAMES "pmix.qry.psets" // (char*) return a comma-delimited list of the names of the
// psets defined in the specified range (defaults to session)
/* log attributes */
#define PMIX_LOG_SOURCE "pmix.log.source" // (pmix_proc_t*) ID of source of the log request
@ -627,6 +639,23 @@ typedef uint32_t pmix_rank_t;
#define PMIX_SETUP_APP_NONENVARS "pmix.setup.nenv" // (bool) include all non-envar data
#define PMIX_SETUP_APP_ALL "pmix.setup.all" // (bool) include all relevant data
/* Attributes supporting the PMIx Groups APIs */
#define PMIX_GROUP_ID "pmix.grp.id" // (char*) user-provided group identifier
#define PMIX_GROUP_LEADER "pmix.grp.ldr" // (bool) this process is the leader of the group
#define PMIX_GROUP_OPTIONAL "pmix.grp.opt" // (bool) participation is optional - do not return an error if any of the
// specified processes terminate without having joined. The default
// is false
#define PMIX_GROUP_NOTIFY_TERMINATION "pmix.grp.notterm" // (bool) notify remaining members when another member terminates without first
// leaving the group. The default is false
#define PMIX_GROUP_INVITE_DECLINE "pmix.grp.decline" // (bool) notify the inviting process that this process does not wish to
// participate in the proposed group. The default is true
#define PMIX_GROUP_FT_COLLECTIVE "pmix.grp.ftcoll" // (bool) adjust internal tracking for terminated processes. Default is false
#define PMIX_GROUP_MEMBERSHIP "pmix.grp.mbrs" // (pmix_data_array_t*) array of group member ID's
#define PMIX_GROUP_ASSIGN_CONTEXT_ID "pmix.grp.actxid" // (bool) request that the RM assign a unique numerical (size_t) ID to this group
#define PMIX_GROUP_CONTEXT_ID "pmix.grp.ctxid" // (size_t) context ID assigned to group
#define PMIX_GROUP_LOCAL_ONLY "pmix.grp.lcl" // (bool) group operation only involves local procs
#define PMIX_GROUP_ENDPT_DATA "pmix.grp.endpt" // (pmix_byte_object_t) data collected to be shared during construction
/**** PROCESS STATE DEFINITIONS ****/
typedef uint8_t pmix_proc_state_t;
@ -738,75 +767,71 @@ typedef int pmix_status_t;
#define PMIX_ERR_V2X_BASE -100
/* v2.x communication errors */
#define PMIX_ERR_LOST_CONNECTION_TO_SERVER (PMIX_ERR_V2X_BASE - 1)
#define PMIX_ERR_LOST_PEER_CONNECTION (PMIX_ERR_V2X_BASE - 2)
#define PMIX_ERR_LOST_CONNECTION_TO_CLIENT (PMIX_ERR_V2X_BASE - 3)
#define PMIX_ERR_LOST_CONNECTION_TO_SERVER -101
#define PMIX_ERR_LOST_PEER_CONNECTION -102
#define PMIX_ERR_LOST_CONNECTION_TO_CLIENT -103
/* used by the query system */
#define PMIX_QUERY_PARTIAL_SUCCESS (PMIX_ERR_V2X_BASE - 4)
#define PMIX_QUERY_PARTIAL_SUCCESS -104
/* request responses */
#define PMIX_NOTIFY_ALLOC_COMPLETE (PMIX_ERR_V2X_BASE - 5)
#define PMIX_NOTIFY_ALLOC_COMPLETE -105
/* job control */
#define PMIX_JCTRL_CHECKPOINT (PMIX_ERR_V2X_BASE - 6) // monitored by client to trigger checkpoint operation
#define PMIX_JCTRL_CHECKPOINT_COMPLETE (PMIX_ERR_V2X_BASE - 7) // sent by client and monitored by server to notify that requested
#define PMIX_JCTRL_CHECKPOINT -106 // monitored by client to trigger checkpoint operation
#define PMIX_JCTRL_CHECKPOINT_COMPLETE -107 // sent by client and monitored by server to notify that requested
// checkpoint operation has completed
#define PMIX_JCTRL_PREEMPT_ALERT (PMIX_ERR_V2X_BASE - 8) // monitored by client to detect RM intends to preempt
/* monitoring */
#define PMIX_MONITOR_HEARTBEAT_ALERT (PMIX_ERR_V2X_BASE - 9)
#define PMIX_MONITOR_FILE_ALERT (PMIX_ERR_V2X_BASE - 10)
#define PMIX_PROC_TERMINATED (PMIX_ERR_V2X_BASE - 11)
#define PMIX_ERR_INVALID_TERMINATION (PMIX_ERR_V2X_BASE - 12)
#define PMIX_JCTRL_PREEMPT_ALERT -108 // monitored by client to detect RM intends to preempt
/* define a starting point for operational error constants so
* we avoid renumbering when making additions */
#define PMIX_ERR_OP_BASE PMIX_ERR_V2X_BASE-100
/* monitoring */
#define PMIX_MONITOR_HEARTBEAT_ALERT -109
#define PMIX_MONITOR_FILE_ALERT -110
#define PMIX_PROC_TERMINATED -111
#define PMIX_ERR_INVALID_TERMINATION -112
/* operational */
#define PMIX_ERR_EVENT_REGISTRATION (PMIX_ERR_OP_BASE - 14)
#define PMIX_ERR_JOB_TERMINATED (PMIX_ERR_OP_BASE - 15)
#define PMIX_ERR_UPDATE_ENDPOINTS (PMIX_ERR_OP_BASE - 16)
#define PMIX_MODEL_DECLARED (PMIX_ERR_OP_BASE - 17)
#define PMIX_GDS_ACTION_COMPLETE (PMIX_ERR_OP_BASE - 18)
#define PMIX_PROC_HAS_CONNECTED (PMIX_ERR_OP_BASE - 19)
#define PMIX_CONNECT_REQUESTED (PMIX_ERR_OP_BASE - 20)
#define PMIX_MODEL_RESOURCES (PMIX_ERR_OP_BASE - 21) // model resource usage has changed
#define PMIX_OPENMP_PARALLEL_ENTERED (PMIX_ERR_OP_BASE - 22) // an OpenMP parallel region has been entered
#define PMIX_OPENMP_PARALLEL_EXITED (PMIX_ERR_OP_BASE - 23) // an OpenMP parallel region has completed
#define PMIX_LAUNCH_DIRECTIVE (PMIX_ERR_OP_BASE - 24)
#define PMIX_LAUNCHER_READY (PMIX_ERR_OP_BASE - 25)
#define PMIX_OPERATION_IN_PROGRESS (PMIX_ERR_OP_BASE - 26)
#define PMIX_OPERATION_SUCCEEDED (PMIX_ERR_OP_BASE - 27)
#define PMIX_ERR_INVALID_OPERATION (PMIX_ERR_OP_BASE - 28)
/* define a starting point for system error constants so
* we avoid renumbering when making additions - host environments
* are responsible for translating their own event codes into
* the closest PMIx equivalent value */
#define PMIX_ERR_SYS_BASE PMIX_ERR_OP_BASE-200
#define PMIX_ERR_EVENT_REGISTRATION -144
#define PMIX_ERR_JOB_TERMINATED -145
#define PMIX_ERR_UPDATE_ENDPOINTS -146
#define PMIX_MODEL_DECLARED -147
#define PMIX_GDS_ACTION_COMPLETE -148
#define PMIX_PROC_HAS_CONNECTED -149
#define PMIX_CONNECT_REQUESTED -150
#define PMIX_MODEL_RESOURCES -151 // model resource usage has changed
#define PMIX_OPENMP_PARALLEL_ENTERED -152 // an OpenMP parallel region has been entered
#define PMIX_OPENMP_PARALLEL_EXITED -153 // an OpenMP parallel region has completed
#define PMIX_LAUNCH_DIRECTIVE -154
#define PMIX_LAUNCHER_READY -155
#define PMIX_OPERATION_IN_PROGRESS -156
#define PMIX_OPERATION_SUCCEEDED -157
#define PMIX_ERR_INVALID_OPERATION -158
#define PMIX_GROUP_INVITED -159
#define PMIX_GROUP_LEFT -160
#define PMIX_GROUP_INVITE_ACCEPTED -161
#define PMIX_GROUP_INVITE_DECLINED -162
#define PMIX_GROUP_INVITE_FAILED -163
#define PMIX_GROUP_MEMBERSHIP_UPDATE -164
#define PMIX_GROUP_CONSTRUCT_ABORT -165
#define PMIX_GROUP_CONSTRUCT_COMPLETE -166
#define PMIX_GROUP_LEADER_SELECTED -167
#define PMIX_GROUP_LEADER_FAILED -168
#define PMIX_GROUP_CONTEXT_ID_ASSIGNED -169
/* system failures */
#define PMIX_ERR_NODE_DOWN (PMIX_ERR_SYS_BASE - 0)
#define PMIX_ERR_NODE_OFFLINE (PMIX_ERR_SYS_BASE - 1)
#define PMIX_ERR_SYS_OTHER (PMIX_ERR_EVHDLR_BASE + 1)
#define PMIX_ERR_NODE_DOWN -231
#define PMIX_ERR_NODE_OFFLINE -232
#define PMIX_ERR_SYS_OTHER -330
/* define a macro for identifying system event values */
#define PMIX_SYSTEM_EVENT(a) \
(PMIX_ERR_SYS_BASE >= (a) && PMIX_ERR_EVHDLR_BASE < (a))
/* define a starting point for event handler error constants so
* we avoid renumbering when making additions */
#define PMIX_ERR_EVHDLR_BASE PMIX_ERR_SYS_BASE-500
(230 > (a) && -331 < (a))
/* used by event handlers */
#define PMIX_EVENT_NO_ACTION_TAKEN (PMIX_ERR_EVHDLR_BASE - 1)
#define PMIX_EVENT_PARTIAL_ACTION_TAKEN (PMIX_ERR_EVHDLR_BASE - 2)
#define PMIX_EVENT_ACTION_DEFERRED (PMIX_ERR_EVHDLR_BASE - 3)
#define PMIX_EVENT_ACTION_COMPLETE (PMIX_ERR_EVHDLR_BASE - 4)
#define PMIX_EVENT_NO_ACTION_TAKEN -331
#define PMIX_EVENT_PARTIAL_ACTION_TAKEN -332
#define PMIX_EVENT_ACTION_DEFERRED -333
#define PMIX_EVENT_ACTION_COMPLETE -334
/* define a starting point for PMIx internal error codes
* that are never exposed outside the library */
#define PMIX_INTERNAL_ERR_BASE PMIX_ERR_EVHDLR_BASE-1000
#define PMIX_INTERNAL_ERR_BASE -1330
/* define a starting point for user-level defined error
* constants - negative values larger than this are guaranteed
@ -948,16 +973,35 @@ typedef uint16_t pmix_iof_channel_t;
#define PMIX_FWD_STDDIAG_CHANNEL 0x0008
#define PMIX_FWD_ALL_CHANNELS 0x00ff
/* define values associated with PMIx_Group_join
* to indicate accept and decline - this is
* done for readability of user code */
typedef enum {
PMIX_GROUP_DECLINE,
PMIX_GROUP_ACCEPT
} pmix_group_opt_t;
typedef enum {
PMIX_GROUP_CONSTRUCT,
PMIX_GROUP_DESTRUCT
} pmix_group_operation_t;
/* declare a convenience macro for checking keys */
#define PMIX_CHECK_KEY(a, b) \
(0 == strncmp((a)->key, (b), PMIX_MAX_KEYLEN))
#define PMIX_LOAD_KEY(a, b) \
do { \
memset((a), 0, PMIX_MAX_KEYLEN+1); \
pmix_strncpy((a), (b), PMIX_MAX_KEYLEN); \
}while(0)
/* define a convenience macro for loading nspaces */
#define PMIX_LOAD_NSPACE(a, b) \
do { \
memset((a), 0, PMIX_MAX_NSLEN+1); \
(void)strncpy((a), (b), PMIX_MAX_NSLEN); \
pmix_strncpy((a), (b), PMIX_MAX_NSLEN); \
}while(0)
/* define a convenience macro for checking nspaces */
@ -1012,6 +1056,7 @@ typedef struct pmix_byte_object {
} \
} \
free((m)); \
(m) = NULL; \
} while(0)
#define PMIX_BYTE_OBJECT_LOAD(b, d, s) \
@ -1406,11 +1451,33 @@ typedef struct pmix_value {
(n) = (t)((m)->data.fval); \
} else if (PMIX_DOUBLE == (m)->type) { \
(n) = (t)((m)->data.dval); \
} else if (PMIX_PID == (m)->type) { \
(n) = (t)((m)->data.pid); \
} else { \
(s) = PMIX_ERR_BAD_PARAM; \
} \
} while(0)
#define PMIX_VALUE_COMPRESSED_STRING_UNPACK(s) \
do { \
char *tmp; \
/* if this is a compressed string, then uncompress it */ \
if (PMIX_COMPRESSED_STRING == (s)->type) { \
pmix_util_uncompress_string(&tmp, (uint8_t*)(s)->data.bo.bytes, \
(s)->data.bo.size); \
if (NULL == tmp) { \
PMIX_ERROR_LOG(PMIX_ERR_NOMEM); \
rc = PMIX_ERR_NOMEM; \
PMIX_VALUE_RELEASE(s); \
val = NULL; \
} else { \
PMIX_VALUE_DESTRUCT(s); \
(s)->data.string = tmp; \
(s)->type = PMIX_STRING; \
} \
} \
} while(0)
/**** PMIX INFO STRUCT ****/
typedef struct pmix_info {
pmix_key_t key;
@ -1450,7 +1517,7 @@ typedef struct pmix_info {
#define PMIX_INFO_LOAD(m, k, v, t) \
do { \
if (NULL != (k)) { \
(void)strncpy((m)->key, (k), PMIX_MAX_KEYLEN); \
pmix_strncpy((m)->key, (k), PMIX_MAX_KEYLEN); \
} \
(m)->flags = 0; \
pmix_value_load(&((m)->value), (v), (t)); \
@ -1458,7 +1525,7 @@ typedef struct pmix_info {
#define PMIX_INFO_XFER(d, s) \
do { \
if (NULL != (s)->key) { \
(void)strncpy((d)->key, (s)->key, PMIX_MAX_KEYLEN); \
pmix_strncpy((d)->key, (s)->key, PMIX_MAX_KEYLEN); \
} \
(d)->flags = (s)->flags; \
pmix_value_xfer(&(d)->value, (pmix_value_t*)&(s)->value); \
@ -1531,9 +1598,9 @@ typedef struct pmix_pdata {
do { \
if (NULL != (m)) { \
memset((m), 0, sizeof(pmix_pdata_t)); \
(void)strncpy((m)->proc.nspace, (p)->nspace, PMIX_MAX_NSLEN); \
pmix_strncpy((m)->proc.nspace, (p)->nspace, PMIX_MAX_NSLEN); \
(m)->proc.rank = (p)->rank; \
(void)strncpy((m)->key, (k), PMIX_MAX_KEYLEN); \
pmix_strncpy((m)->key, (k), PMIX_MAX_KEYLEN); \
pmix_value_load(&((m)->value), (v), (t)); \
} \
} while (0)
@ -1542,9 +1609,9 @@ typedef struct pmix_pdata {
do { \
if (NULL != (d)) { \
memset((d), 0, sizeof(pmix_pdata_t)); \
(void)strncpy((d)->proc.nspace, (s)->proc.nspace, PMIX_MAX_NSLEN); \
pmix_strncpy((d)->proc.nspace, (s)->proc.nspace, PMIX_MAX_NSLEN); \
(d)->proc.rank = (s)->proc.rank; \
(void)strncpy((d)->key, (s)->key, PMIX_MAX_KEYLEN); \
pmix_strncpy((d)->key, (s)->key, PMIX_MAX_KEYLEN); \
pmix_value_xfer(&((d)->value), &((s)->value)); \
} \
} while (0)
@ -2141,7 +2208,7 @@ PMIX_EXPORT void PMIx_Deregister_event_handler(size_t evhdlr_ref,
PMIX_EXPORT pmix_status_t PMIx_Notify_event(pmix_status_t status,
const pmix_proc_t *source,
pmix_data_range_t range,
pmix_info_t info[], size_t ninfo,
const pmix_info_t info[], size_t ninfo,
pmix_op_cbfunc_t cbfunc, void *cbdata);
/* Provide a string representation for several types of value. Note
@ -2459,6 +2526,8 @@ static inline void pmix_value_destruct(pmix_value_t * m) {
}
} else if (PMIX_ENVAR == (m)->type) {
PMIX_ENVAR_DESTRUCT(&(m)->data.envar);
} else if (PMIX_PROC == (m)->type) {
PMIX_PROC_RELEASE((m)->data.proc);
}
}


@ -63,6 +63,18 @@
extern "C" {
#endif
/* declare a convenience macro for checking keys */
#define PMIX_CHECK_KEY(a, b) \
(0 == strncmp((a)->key, (b), PMIX_MAX_KEYLEN))
/* define a convenience macro for checking nspaces */
#define PMIX_CHECK_NSPACE(a, b) \
(0 == strncmp((a), (b), PMIX_MAX_NSLEN))
/* define a convenience macro for checking names */
#define PMIX_CHECK_PROCID(a, b) \
(PMIX_CHECK_NSPACE((a)->nspace, (b)->nspace) && ((a)->rank == (b)->rank || (PMIX_RANK_WILDCARD == (a)->rank || PMIX_RANK_WILDCARD == (b)->rank)))
/* expose some functions that are resolved in the
* PMIx library, but part of a header that
* includes internal functions - we don't
@ -73,7 +85,7 @@ void pmix_value_load(pmix_value_t *v, const void *data, pmix_data_type_t type);
pmix_status_t pmix_value_unload(pmix_value_t *kv, void **data, size_t *sz);
pmix_status_t pmix_value_xfer(pmix_value_t *kv, pmix_value_t *src);
pmix_status_t pmix_value_xfer(pmix_value_t *kv, const pmix_value_t *src);
pmix_status_t pmix_argv_append_nosize(char ***argv, const char *arg);


@ -489,6 +489,35 @@ typedef pmix_status_t (*pmix_server_stdin_fn_t)(const pmix_proc_t *source,
pmix_op_cbfunc_t cbfunc, void *cbdata);
/* Perform a "fence" operation across the specified procs, plus any special
* actions included in the directives. Return the result of any special action
* requests in the info cbfunc when the fence is completed. Actions may include:
*
* PMIX_GROUP_ASSIGN_CONTEXT_ID - request that the RM assign a unique
* numerical (size_t) ID to this group
*
* grp - user-assigned string ID of this group
*
* op - pmix_group_operation_t value indicating the operation to perform
* Current values support construct and destruct of the group
*
* procs - pointer to array of pmix_proc_t ID's of group members
*
* nprocs - number of group members
*
* directives - array of key-value attributes specifying special actions.
*
* ndirs - size of the directives array
*
* cbfunc - callback function when the operation is completed
*
* cbdata - object to be returned in cbfunc
*/
typedef pmix_status_t (*pmix_server_grp_fn_t)(pmix_group_operation_t op, char grp[],
const pmix_proc_t procs[], size_t nprocs,
const pmix_info_t directives[], size_t ndirs,
pmix_info_cbfunc_t cbfunc, void *cbdata);
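On the host side, a minimal stub of the new callback might look like the following. This is a sketch under the assumption that returning PMIX_OPERATION_SUCCEEDED signals immediate, synchronous completion; a real resource manager would instead coordinate the operation across all participating servers before completing it. The function and module names are illustrative:

#include <pmix_server.h>

/* Sketch: accept every group construct/destruct request and declare it
 * complete immediately. */
static pmix_status_t my_group_fn(pmix_group_operation_t op, char grp[],
                                 const pmix_proc_t procs[], size_t nprocs,
                                 const pmix_info_t directives[], size_t ndirs,
                                 pmix_info_cbfunc_t cbfunc, void *cbdata)
{
    (void)op; (void)grp; (void)procs; (void)nprocs;
    (void)directives; (void)ndirs; (void)cbfunc; (void)cbdata;
    return PMIX_OPERATION_SUCCEEDED;   /* assumed to mean: no callback will follow */
}

/* wire it into the server module passed to PMIx_server_init() */
static pmix_server_module_t mymodule = {
    .group = my_group_fn
};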
typedef struct pmix_server_module_2_0_0_t {
/* v1x interfaces */
pmix_server_client_connected_fn_t client_connected;
@ -518,6 +547,8 @@ typedef struct pmix_server_module_2_0_0_t {
pmix_server_validate_cred_fn_t validate_credential;
pmix_server_iof_fn_t iof_pull;
pmix_server_stdin_fn_t push_stdin;
/* v4x interfaces */
pmix_server_grp_fn_t group;
} pmix_server_module_t;
/**** HOST RM FUNCTIONS FOR INTERFACE TO PMIX SERVER ****/


@ -11,7 +11,7 @@
# All rights reserved.
# Copyright (c) 2006-2016 Cisco Systems, Inc. All rights reserved.
# Copyright (c) 2012-2013 Los Alamos National Security, Inc. All rights reserved.
# Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2013-2016 Intel, Inc. All rights reserved
# $COPYRIGHT$
#
# Additional copyrights may follow


@ -15,7 +15,7 @@
# reserved.
# Copyright (c) 2017 Research Organization for Information Science
# and Technology (RIST). All rights reserved.
# Copyright (c) 2017-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2017-2018 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow
@ -30,7 +30,8 @@ headers += \
atomics/sys/atomic.h \
atomics/sys/atomic_impl.h \
atomics/sys/timer.h \
atomics/sys/cma.h
atomics/sys/cma.h \
atomics/sys/atomic_stdc.h
include atomics/sys/x86_64/Makefile.include
include atomics/sys/arm/Makefile.include


@ -10,7 +10,7 @@
* Copyright (c) 2004-2005 The Regents of the University of California.
* All rights reserved.
* Copyright (c) 2011 Sandia National Laboratories. All rights reserved.
* Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2016 Los Alamos National Security, LLC. All rights
* reserved.
* Copyright (c) 2017 Research Organization for Information Science
@ -47,6 +47,7 @@
#define PMIX_BUILTIN_SYNC 0200
#define PMIX_BUILTIN_GCC 0202
#define PMIX_BUILTIN_NO 0203
#define PMIX_BUILTIN_C11 0204
/* Formats */
#define PMIX_DEFAULT 1000 /* standard for given architecture */


@ -9,7 +9,7 @@
# University of Stuttgart. All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
# All rights reserved.
# Copyright (c) 2017-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2017 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow


@ -12,9 +12,9 @@
* All rights reserved.
* Copyright (c) 2010 IBM Corporation. All rights reserved.
* Copyright (c) 2010 ARM ltd. All rights reserved.
* Copyright (c) 2017 Los Alamos National Security, LLC. All rights
* Copyright (c) 2017-2018 Los Alamos National Security, LLC. All rights
* reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
@ -110,7 +110,7 @@ void pmix_atomic_isync(void)
#define PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_32 1
#define PMIX_HAVE_ATOMIC_MATH_32 1
static inline bool pmix_atomic_compare_exchange_strong_32 (volatile int32_t *addr, int32_t *oldval, int32_t newval)
static inline bool pmix_atomic_compare_exchange_strong_32 (pmix_atomic_int32_t *addr, int32_t *oldval, int32_t newval)
{
int32_t prev, tmp;
bool ret;
@ -138,7 +138,7 @@ static inline bool pmix_atomic_compare_exchange_strong_32 (volatile int32_t *add
atomic_?mb can be inlined). Instead, we "inline" them by hand in
the assembly, meaning there is one function call overhead instead
of two */
static inline bool pmix_atomic_compare_exchange_strong_acq_32 (volatile int32_t *addr, int32_t *oldval, int32_t newval)
static inline bool pmix_atomic_compare_exchange_strong_acq_32 (pmix_atomic_int32_t *addr, int32_t *oldval, int32_t newval)
{
bool rc;
@ -149,7 +149,7 @@ static inline bool pmix_atomic_compare_exchange_strong_acq_32 (volatile int32_t
}
static inline bool pmix_atomic_compare_exchange_strong_rel_32 (volatile int32_t *addr, int32_t *oldval, int32_t newval)
static inline bool pmix_atomic_compare_exchange_strong_rel_32 (pmix_atomic_int32_t *addr, int32_t *oldval, int32_t newval)
{
pmix_atomic_wmb();
return pmix_atomic_compare_exchange_strong_32 (addr, oldval, newval);
@ -158,7 +158,7 @@ static inline bool pmix_atomic_compare_exchange_strong_rel_32 (volatile int32_t
#if (PMIX_ASM_SUPPORT_64BIT == 1)
#define PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_64 1
static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
int64_t prev;
int tmp;
@ -189,7 +189,7 @@ static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *add
atomic_?mb can be inlined). Instead, we "inline" them by hand in
the assembly, meaning there is one function call overhead instead
of two */
static inline bool pmix_atomic_compare_exchange_strong_acq_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_acq_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
bool rc;
@ -200,7 +200,7 @@ static inline bool pmix_atomic_compare_exchange_strong_acq_64 (volatile int64_t
}
static inline bool pmix_atomic_compare_exchange_strong_rel_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_rel_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
pmix_atomic_wmb();
return pmix_atomic_compare_exchange_strong_64 (addr, oldval, newval);
@ -210,7 +210,7 @@ static inline bool pmix_atomic_compare_exchange_strong_rel_64 (volatile int64_t
#define PMIX_HAVE_ATOMIC_ADD_32 1
static inline int32_t pmix_atomic_fetch_add_32(volatile int32_t* v, int inc)
static inline int32_t pmix_atomic_fetch_add_32(pmix_atomic_int32_t* v, int inc)
{
int32_t t, old;
int tmp;
@ -231,7 +231,7 @@ static inline int32_t pmix_atomic_fetch_add_32(volatile int32_t* v, int inc)
}
#define PMIX_HAVE_ATOMIC_SUB_32 1
static inline int32_t pmix_atomic_fetch_sub_32(volatile int32_t* v, int dec)
static inline int32_t pmix_atomic_fetch_sub_32(pmix_atomic_int32_t* v, int dec)
{
int32_t t, old;
int tmp;


@ -2,7 +2,6 @@
* Copyright (c) 2008 The University of Tennessee and The University
* of Tennessee Research Foundation. All rights
* reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow


@ -9,7 +9,7 @@
# University of Stuttgart. All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
# All rights reserved.
# Copyright (c) 2017-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2017 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow


@ -12,9 +12,9 @@
* All rights reserved.
* Copyright (c) 2010 IBM Corporation. All rights reserved.
* Copyright (c) 2010 ARM ltd. All rights reserved.
* Copyright (c) 2016-2017 Los Alamos National Security, LLC. All rights
* Copyright (c) 2016-2018 Los Alamos National Security, LLC. All rights
* reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
@ -83,7 +83,7 @@ static inline void pmix_atomic_isync (void)
*
*********************************************************************/
static inline bool pmix_atomic_compare_exchange_strong_32 (volatile int32_t *addr, int32_t *oldval, int32_t newval)
static inline bool pmix_atomic_compare_exchange_strong_32 (pmix_atomic_int32_t *addr, int32_t *oldval, int32_t newval)
{
int32_t prev, tmp;
bool ret;
@ -103,7 +103,7 @@ static inline bool pmix_atomic_compare_exchange_strong_32 (volatile int32_t *add
return ret;
}
static inline int32_t pmix_atomic_swap_32(volatile int32_t *addr, int32_t newval)
static inline int32_t pmix_atomic_swap_32(pmix_atomic_int32_t *addr, int32_t newval)
{
int32_t ret, tmp;
@ -122,7 +122,7 @@ static inline int32_t pmix_atomic_swap_32(volatile int32_t *addr, int32_t newval
atomic_?mb can be inlined). Instead, we "inline" them by hand in
the assembly, meaning there is one function call overhead instead
of two */
static inline bool pmix_atomic_compare_exchange_strong_acq_32 (volatile int32_t *addr, int32_t *oldval, int32_t newval)
static inline bool pmix_atomic_compare_exchange_strong_acq_32 (pmix_atomic_int32_t *addr, int32_t *oldval, int32_t newval)
{
int32_t prev, tmp;
bool ret;
@ -143,7 +143,7 @@ static inline bool pmix_atomic_compare_exchange_strong_acq_32 (volatile int32_t
}
static inline bool pmix_atomic_compare_exchange_strong_rel_32 (volatile int32_t *addr, int32_t *oldval, int32_t newval)
static inline bool pmix_atomic_compare_exchange_strong_rel_32 (pmix_atomic_int32_t *addr, int32_t *oldval, int32_t newval)
{
int32_t prev, tmp;
bool ret;
@ -165,7 +165,7 @@ static inline bool pmix_atomic_compare_exchange_strong_rel_32 (volatile int32_t
#define pmix_atomic_ll_32(addr, ret) \
do { \
volatile int32_t *_addr = (addr); \
pmix_atomic_int32_t *_addr = (addr); \
int32_t _ret; \
\
__asm__ __volatile__ ("ldaxr %w0, [%1] \n" \
@ -177,7 +177,7 @@ static inline bool pmix_atomic_compare_exchange_strong_rel_32 (volatile int32_t
#define pmix_atomic_sc_32(addr, newval, ret) \
do { \
volatile int32_t *_addr = (addr); \
pmix_atomic_int32_t *_addr = (addr); \
int32_t _newval = (int32_t) newval; \
int _ret; \
\
@ -189,7 +189,7 @@ static inline bool pmix_atomic_compare_exchange_strong_rel_32 (volatile int32_t
ret = (_ret == 0); \
} while (0)
static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
int64_t prev;
int tmp;
@ -210,7 +210,7 @@ static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *add
return ret;
}
static inline int64_t pmix_atomic_swap_64 (volatile int64_t *addr, int64_t newval)
static inline int64_t pmix_atomic_swap_64 (pmix_atomic_int64_t *addr, int64_t newval)
{
int64_t ret;
int tmp;
@ -230,7 +230,7 @@ static inline int64_t pmix_atomic_swap_64 (volatile int64_t *addr, int64_t newva
atomic_?mb can be inlined). Instead, we "inline" them by hand in
the assembly, meaning there is one function call overhead instead
of two */
static inline bool pmix_atomic_compare_exchange_strong_acq_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_acq_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
int64_t prev;
int tmp;
@ -252,7 +252,7 @@ static inline bool pmix_atomic_compare_exchange_strong_acq_64 (volatile int64_t
}
static inline bool pmix_atomic_compare_exchange_strong_rel_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_rel_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
int64_t prev;
int tmp;
@ -275,7 +275,7 @@ static inline bool pmix_atomic_compare_exchange_strong_rel_64 (volatile int64_t
#define pmix_atomic_ll_64(addr, ret) \
do { \
volatile int64_t *_addr = (addr); \
pmix_atomic_int64_t *_addr = (addr); \
int64_t _ret; \
\
__asm__ __volatile__ ("ldaxr %0, [%1] \n" \
@ -287,7 +287,7 @@ static inline bool pmix_atomic_compare_exchange_strong_rel_64 (volatile int64_t
#define pmix_atomic_sc_64(addr, newval, ret) \
do { \
volatile int64_t *_addr = (addr); \
pmix_atomic_int64_t *_addr = (addr); \
int64_t _newval = (int64_t) newval; \
int _ret; \
\
@ -300,7 +300,7 @@ static inline bool pmix_atomic_compare_exchange_strong_rel_64 (volatile int64_t
} while (0)
#define PMIX_ASM_MAKE_ATOMIC(type, bits, name, inst, reg) \
static inline type pmix_atomic_fetch_ ## name ## _ ## bits (volatile type *addr, type value) \
static inline type pmix_atomic_fetch_ ## name ## _ ## bits (pmix_atomic_ ## type *addr, type value) \
{ \
type newval, old; \
int32_t tmp; \


@ -6,7 +6,6 @@
* Copyright (c) 2016 Broadcom Limited. All rights reserved.
* Copyright (c) 2016 Los Alamos National Security, LLC. All rights
* reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow


@ -16,7 +16,7 @@
* reserved.
* Copyright (c) 2017 Research Organization for Information Science
* and Technology (RIST). All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
@ -57,7 +57,13 @@
#include <stdbool.h>
#include "src/atomics/sys/architecture.h"
#include "src/include/pmix_stdint.h"
#include "src/include/pmix_stdatomic.h"
#if PMIX_ASSEMBLY_BUILTIN == PMIX_BUILTIN_C11
#include "atomic_stdc.h"
#else /* !PMIX_C_HAVE__ATOMIC */
/* do some quick #define cleanup in cases where we are doing
testing... */
@ -93,7 +99,7 @@ BEGIN_C_DECLS
*/
struct pmix_atomic_lock_t {
union {
volatile int32_t lock; /**< The lock address (an integer) */
pmix_atomic_int32_t lock; /**< The lock address (an integer) */
volatile unsigned char sparc_lock; /**< The lock address on sparc */
char padding[sizeof(int)]; /**< Array for optional padding */
} u;
@ -148,6 +154,8 @@ enum {
PMIX_ATOMIC_LOCK_LOCKED = 1
};
#define PMIX_ATOMIC_LOCK_INIT {.u = {.lock = PMIX_ATOMIC_LOCK_UNLOCKED}}
/**********************************************************************
*
* Load the appropriate architecture files and set some reasonable
@ -351,19 +359,19 @@ void pmix_atomic_unlock(pmix_atomic_lock_t *lock);
#if PMIX_HAVE_INLINE_ATOMIC_COMPARE_EXCHANGE_32
static inline
#endif
bool pmix_atomic_compare_exchange_strong_32 (volatile int32_t *addr, int32_t *oldval,
bool pmix_atomic_compare_exchange_strong_32 (pmix_atomic_int32_t *addr, int32_t *oldval,
int32_t newval);
#if PMIX_HAVE_INLINE_ATOMIC_COMPARE_EXCHANGE_32
static inline
#endif
bool pmix_atomic_compare_exchange_strong_acq_32 (volatile int32_t *addr, int32_t *oldval,
bool pmix_atomic_compare_exchange_strong_acq_32 (pmix_atomic_int32_t *addr, int32_t *oldval,
int32_t newval);
#if PMIX_HAVE_INLINE_ATOMIC_COMPARE_EXCHANGE_32
static inline
#endif
bool pmix_atomic_compare_exchange_strong_rel_32 (volatile int32_t *addr, int32_t *oldval,
bool pmix_atomic_compare_exchange_strong_rel_32 (pmix_atomic_int32_t *addr, int32_t *oldval,
int32_t newval);
#endif
@ -376,19 +384,19 @@ bool pmix_atomic_compare_exchange_strong_rel_32 (volatile int32_t *addr, int32_t
#if PMIX_HAVE_INLINE_ATOMIC_COMPARE_EXCHANGE_64
static inline
#endif
bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *addr, int64_t *oldval,
bool pmix_atomic_compare_exchange_strong_64 (pmix_atomic_int64_t *addr, int64_t *oldval,
int64_t newval);
#if PMIX_HAVE_INLINE_ATOMIC_COMPARE_EXCHANGE_64
static inline
#endif
bool pmix_atomic_compare_exchange_strong_acq_64 (volatile int64_t *addr, int64_t *oldval,
bool pmix_atomic_compare_exchange_strong_acq_64 (pmix_atomic_int64_t *addr, int64_t *oldval,
int64_t newval);
#if PMIX_HAVE_INLINE_ATOMIC_COMPARE_EXCHANGE_64
static inline
#endif
bool pmix_atomic_compare_exchange_strong_rel_64 (volatile int64_t *addr, int64_t *oldval,
bool pmix_atomic_compare_exchange_strong_rel_64 (pmix_atomic_int64_t *addr, int64_t *oldval,
int64_t newval);
#endif
@ -400,20 +408,20 @@ bool pmix_atomic_compare_exchange_strong_rel_64 (volatile int64_t *addr, int64_t
#if defined(DOXYGEN) || PMIX_HAVE_ATOMIC_MATH_32 || PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_32
static inline int32_t pmix_atomic_add_fetch_32(volatile int32_t *addr, int delta);
static inline int32_t pmix_atomic_fetch_add_32(volatile int32_t *addr, int delta);
static inline int32_t pmix_atomic_and_fetch_32(volatile int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_fetch_and_32(volatile int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_or_fetch_32(volatile int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_fetch_or_32(volatile int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_xor_fetch_32(volatile int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_fetch_xor_32(volatile int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_sub_fetch_32(volatile int32_t *addr, int delta);
static inline int32_t pmix_atomic_fetch_sub_32(volatile int32_t *addr, int delta);
static inline int32_t pmix_atomic_min_fetch_32 (volatile int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_fetch_min_32 (volatile int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_max_fetch_32 (volatile int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_fetch_max_32 (volatile int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_add_fetch_32(pmix_atomic_int32_t *addr, int delta);
static inline int32_t pmix_atomic_fetch_add_32(pmix_atomic_int32_t *addr, int delta);
static inline int32_t pmix_atomic_and_fetch_32(pmix_atomic_int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_fetch_and_32(pmix_atomic_int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_or_fetch_32(pmix_atomic_int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_fetch_or_32(pmix_atomic_int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_xor_fetch_32(pmix_atomic_int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_fetch_xor_32(pmix_atomic_int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_sub_fetch_32(pmix_atomic_int32_t *addr, int delta);
static inline int32_t pmix_atomic_fetch_sub_32(pmix_atomic_int32_t *addr, int delta);
static inline int32_t pmix_atomic_min_fetch_32 (pmix_atomic_int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_fetch_min_32 (pmix_atomic_int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_max_fetch_32 (pmix_atomic_int32_t *addr, int32_t value);
static inline int32_t pmix_atomic_fetch_max_32 (pmix_atomic_int32_t *addr, int32_t value);
#endif /* PMIX_HAVE_ATOMIC_MATH_32 */
@ -430,19 +438,19 @@ static inline int32_t pmix_atomic_fetch_max_32 (volatile int32_t *addr, int32_t
#if defined(DOXYGEN) || PMIX_HAVE_ATOMIC_MATH_64 || PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_64
static inline int64_t pmix_atomic_add_fetch_64(volatile int64_t *addr, int64_t delta);
static inline int64_t pmix_atomic_fetch_add_64(volatile int64_t *addr, int64_t delta);
static inline int64_t pmix_atomic_and_fetch_64(volatile int64_t *addr, int64_t value);
static inline int64_t pmix_atomic_fetch_and_64(volatile int64_t *addr, int64_t value);
static inline int64_t pmix_atomic_or_fetch_64(volatile int64_t *addr, int64_t value);
static inline int64_t pmix_atomic_fetch_or_64(volatile int64_t *addr, int64_t value);
static inline int64_t pmix_atomic_fetch_xor_64(volatile int64_t *addr, int64_t value);
static inline int64_t pmix_atomic_sub_fetch_64(volatile int64_t *addr, int64_t delta);
static inline int64_t pmix_atomic_fetch_sub_64(volatile int64_t *addr, int64_t delta);
static inline int64_t pmix_atomic_min_fetch_64 (volatile int64_t *addr, int64_t value);
static inline int64_t pmix_atomic_fetch_min_64 (volatile int64_t *addr, int64_t value);
static inline int64_t pmix_atomic_max_fetch_64 (volatile int64_t *addr, int64_t value);
static inline int64_t pmix_atomic_fetch_max_64 (volatile int64_t *addr, int64_t value);
static inline int64_t pmix_atomic_add_fetch_64(pmix_atomic_int64_t *addr, int64_t delta);
static inline int64_t pmix_atomic_fetch_add_64(pmix_atomic_int64_t *addr, int64_t delta);
static inline int64_t pmix_atomic_and_fetch_64(pmix_atomic_int64_t *addr, int64_t value);
static inline int64_t pmix_atomic_fetch_and_64(pmix_atomic_int64_t *addr, int64_t value);
static inline int64_t pmix_atomic_or_fetch_64(pmix_atomic_int64_t *addr, int64_t value);
static inline int64_t pmix_atomic_fetch_or_64(pmix_atomic_int64_t *addr, int64_t value);
static inline int64_t pmix_atomic_fetch_xor_64(pmix_atomic_int64_t *addr, int64_t value);
static inline int64_t pmix_atomic_sub_fetch_64(pmix_atomic_int64_t *addr, int64_t delta);
static inline int64_t pmix_atomic_fetch_sub_64(pmix_atomic_int64_t *addr, int64_t delta);
static inline int64_t pmix_atomic_min_fetch_64 (pmix_atomic_int64_t *addr, int64_t value);
static inline int64_t pmix_atomic_fetch_min_64 (pmix_atomic_int64_t *addr, int64_t value);
static inline int64_t pmix_atomic_max_fetch_64 (pmix_atomic_int64_t *addr, int64_t value);
static inline int64_t pmix_atomic_fetch_max_64 (pmix_atomic_int64_t *addr, int64_t value);
#endif /* PMIX_HAVE_ATOMIC_MATH_64 */
@ -459,7 +467,7 @@ static inline int64_t pmix_atomic_fetch_max_64 (volatile int64_t *addr, int64_t
*/
#if defined(DOXYGEN) || PMIX_ENABLE_DEBUG
static inline size_t
pmix_atomic_add_fetch_size_t(volatile size_t *addr, size_t delta)
pmix_atomic_add_fetch_size_t(pmix_atomic_size_t *addr, size_t delta)
{
#if SIZEOF_SIZE_T == 4
return (size_t) pmix_atomic_add_fetch_32((int32_t*) addr, delta);
@ -471,7 +479,7 @@ pmix_atomic_add_fetch_size_t(volatile size_t *addr, size_t delta)
}
static inline size_t
pmix_atomic_fetch_add_size_t(volatile size_t *addr, size_t delta)
pmix_atomic_fetch_add_size_t(pmix_atomic_size_t *addr, size_t delta)
{
#if SIZEOF_SIZE_T == 4
return (size_t) pmix_atomic_fetch_add_32((int32_t*) addr, delta);
@ -483,7 +491,7 @@ pmix_atomic_fetch_add_size_t(volatile size_t *addr, size_t delta)
}
static inline size_t
pmix_atomic_sub_fetch_size_t(volatile size_t *addr, size_t delta)
pmix_atomic_sub_fetch_size_t(pmix_atomic_size_t *addr, size_t delta)
{
#if SIZEOF_SIZE_T == 4
return (size_t) pmix_atomic_sub_fetch_32((int32_t*) addr, delta);
@ -495,7 +503,7 @@ pmix_atomic_sub_fetch_size_t(volatile size_t *addr, size_t delta)
}
static inline size_t
pmix_atomic_fetch_sub_size_t(volatile size_t *addr, size_t delta)
pmix_atomic_fetch_sub_size_t(pmix_atomic_size_t *addr, size_t delta)
{
#if SIZEOF_SIZE_T == 4
return (size_t) pmix_atomic_fetch_sub_32((int32_t*) addr, delta);
@ -508,15 +516,15 @@ pmix_atomic_fetch_sub_size_t(volatile size_t *addr, size_t delta)
#else
#if SIZEOF_SIZE_T == 4
#define pmix_atomic_add_fetch_size_t(addr, delta) ((size_t) pmix_atomic_add_fetch_32((volatile int32_t *) addr, delta))
#define pmix_atomic_fetch_add_size_t(addr, delta) ((size_t) pmix_atomic_fetch_add_32((volatile int32_t *) addr, delta))
#define pmix_atomic_sub_fetch_size_t(addr, delta) ((size_t) pmix_atomic_sub_fetch_32((volatile int32_t *) addr, delta))
#define pmix_atomic_fetch_sub_size_t(addr, delta) ((size_t) pmix_atomic_fetch_sub_32((volatile int32_t *) addr, delta))
#define pmix_atomic_add_fetch_size_t(addr, delta) ((size_t) pmix_atomic_add_fetch_32((pmix_atomic_int32_t *) addr, delta))
#define pmix_atomic_fetch_add_size_t(addr, delta) ((size_t) pmix_atomic_fetch_add_32((pmix_atomic_int32_t *) addr, delta))
#define pmix_atomic_sub_fetch_size_t(addr, delta) ((size_t) pmix_atomic_sub_fetch_32((pmix_atomic_int32_t *) addr, delta))
#define pmix_atomic_fetch_sub_size_t(addr, delta) ((size_t) pmix_atomic_fetch_sub_32((pmix_atomic_int32_t *) addr, delta))
#elif SIZEOF_SIZE_T == 8
#define pmix_atomic_add_fetch_size_t(addr, delta) ((size_t) pmix_atomic_add_fetch_64((volatile int64_t *) addr, delta))
#define pmix_atomic_fetch_add_size_t(addr, delta) ((size_t) pmix_atomic_fetch_add_64((volatile int64_t *) addr, delta))
#define pmix_atomic_sub_fetch_size_t(addr, delta) ((size_t) pmix_atomic_sub_fetch_64((volatile int64_t *) addr, delta))
#define pmix_atomic_fetch_sub_size_t(addr, delta) ((size_t) pmix_atomic_fetch_sub_64((volatile int64_t *) addr, delta))
#define pmix_atomic_add_fetch_size_t(addr, delta) ((size_t) pmix_atomic_add_fetch_64((pmix_atomic_int64_t *) addr, delta))
#define pmix_atomic_fetch_add_size_t(addr, delta) ((size_t) pmix_atomic_fetch_add_64((pmix_atomic_int64_t *) addr, delta))
#define pmix_atomic_sub_fetch_size_t(addr, delta) ((size_t) pmix_atomic_sub_fetch_64((pmix_atomic_int64_t *) addr, delta))
#define pmix_atomic_fetch_sub_size_t(addr, delta) ((size_t) pmix_atomic_fetch_sub_64((pmix_atomic_int64_t *) addr, delta))
#else
#error "Unknown size_t size"
#endif
@ -526,20 +534,20 @@ pmix_atomic_fetch_sub_size_t(volatile size_t *addr, size_t delta)
/* these are always done with inline functions, so always mark as
static inline */
static inline bool pmix_atomic_compare_exchange_strong_xx (volatile void *addr, void *oldval,
static inline bool pmix_atomic_compare_exchange_strong_xx (pmix_atomic_intptr_t *addr, intptr_t *oldval,
int64_t newval, size_t length);
static inline bool pmix_atomic_compare_exchange_strong_acq_xx (volatile void *addr, void *oldval,
static inline bool pmix_atomic_compare_exchange_strong_acq_xx (pmix_atomic_intptr_t *addr, intptr_t *oldval,
int64_t newval, size_t length);
static inline bool pmix_atomic_compare_exchange_strong_rel_xx (volatile void *addr, void *oldval,
static inline bool pmix_atomic_compare_exchange_strong_rel_xx (pmix_atomic_intptr_t *addr, intptr_t *oldval,
int64_t newval, size_t length);
static inline bool pmix_atomic_compare_exchange_strong_ptr (volatile void* addr, void *oldval,
void *newval);
static inline bool pmix_atomic_compare_exchange_strong_acq_ptr (volatile void* addr, void *oldval,
void *newval);
static inline bool pmix_atomic_compare_exchange_strong_rel_ptr (volatile void* addr, void *oldval,
void *newval);
static inline bool pmix_atomic_compare_exchange_strong_ptr (pmix_atomic_intptr_t* addr, intptr_t *oldval,
intptr_t newval);
static inline bool pmix_atomic_compare_exchange_strong_acq_ptr (pmix_atomic_intptr_t* addr, intptr_t *oldval,
intptr_t newval);
static inline bool pmix_atomic_compare_exchange_strong_rel_ptr (pmix_atomic_intptr_t* addr, intptr_t *oldval,
intptr_t newval);
/**
* Atomic compare and set of generic type with relaxed semantics. This
@ -555,7 +563,7 @@ static inline bool pmix_atomic_compare_exchange_strong_rel_ptr (volatile void* a
* See pmix_atomic_compare_exchange_* for pseudo-code.
*/
#define pmix_atomic_compare_exchange_strong( ADDR, OLDVAL, NEWVAL ) \
pmix_atomic_compare_exchange_strong_xx( (volatile void*)(ADDR), (void *)(OLDVAL), \
pmix_atomic_compare_exchange_strong_xx( (pmix_atomic_intptr_t*)(ADDR), (intptr_t *)(OLDVAL), \
(intptr_t)(NEWVAL), sizeof(*(ADDR)) )
/**
@ -572,7 +580,7 @@ static inline bool pmix_atomic_compare_exchange_strong_rel_ptr (volatile void* a
* See pmix_atomic_compare_exchange_acq_* for pseudo-code.
*/
#define pmix_atomic_compare_exchange_strong_acq( ADDR, OLDVAL, NEWVAL ) \
pmix_atomic_compare_exchange_strong_acq_xx( (volatile void*)(ADDR), (void *)(OLDVAL), \
pmix_atomic_compare_exchange_strong_acq_xx( (pmix_atomic_intptr_t*)(ADDR), (intptr_t *)(OLDVAL), \
(intptr_t)(NEWVAL), sizeof(*(ADDR)) )
/**
@ -589,7 +597,7 @@ static inline bool pmix_atomic_compare_exchange_strong_rel_ptr (volatile void* a
* See pmix_atomic_compare_exchange_rel_* for pseudo-code.
*/
#define pmix_atomic_compare_exchange_strong_rel( ADDR, OLDVAL, NEWVAL ) \
pmix_atomic_compare_exchange_strong_rel_xx( (volatile void*)(ADDR), (void *)(OLDVAL), \
pmix_atomic_compare_exchange_strong_rel_xx( (pmix_atomic_intptr_t*)(ADDR), (intptr_t *)(OLDVAL), \
(intptr_t)(NEWVAL), sizeof(*(ADDR)) )
@ -597,15 +605,15 @@ static inline bool pmix_atomic_compare_exchange_strong_rel_ptr (volatile void* a
#if defined(DOXYGEN) || (PMIX_HAVE_ATOMIC_MATH_32 || PMIX_HAVE_ATOMIC_MATH_64)
static inline void pmix_atomic_add_xx(volatile void* addr,
static inline void pmix_atomic_add_xx(pmix_atomic_intptr_t* addr,
int32_t value, size_t length);
static inline void pmix_atomic_sub_xx(volatile void* addr,
static inline void pmix_atomic_sub_xx(pmix_atomic_intptr_t* addr,
int32_t value, size_t length);
static inline intptr_t pmix_atomic_add_fetch_ptr( volatile void* addr, void* delta );
static inline intptr_t pmix_atomic_fetch_add_ptr( volatile void* addr, void* delta );
static inline intptr_t pmix_atomic_sub_fetch_ptr( volatile void* addr, void* delta );
static inline intptr_t pmix_atomic_fetch_sub_ptr( volatile void* addr, void* delta );
static inline intptr_t pmix_atomic_add_fetch_ptr( pmix_atomic_intptr_t* addr, void* delta );
static inline intptr_t pmix_atomic_fetch_add_ptr( pmix_atomic_intptr_t* addr, void* delta );
static inline intptr_t pmix_atomic_sub_fetch_ptr( pmix_atomic_intptr_t* addr, void* delta );
static inline intptr_t pmix_atomic_fetch_sub_ptr( pmix_atomic_intptr_t* addr, void* delta );
/**
* Atomically increment the content depending on the type. This
@ -618,7 +626,7 @@ static inline intptr_t pmix_atomic_fetch_sub_ptr( volatile void* addr, void* del
* @param delta Value to add (converted to <TYPE>).
*/
#define pmix_atomic_add( ADDR, VALUE ) \
pmix_atomic_add_xx( (volatile void*)(ADDR), (int32_t)(VALUE), \
pmix_atomic_add_xx( (pmix_atomic_intptr_t*)(ADDR), (int32_t)(VALUE), \
sizeof(*(ADDR)) )
/**
@ -632,7 +640,7 @@ static inline intptr_t pmix_atomic_fetch_sub_ptr( volatile void* addr, void* del
* @param delta Value to subtract (converted to <TYPE>).
*/
#define pmix_atomic_sub( ADDR, VALUE ) \
pmix_atomic_sub_xx( (volatile void*)(ADDR), (int32_t)(VALUE), \
pmix_atomic_sub_xx( (pmix_atomic_intptr_t*)(ADDR), (int32_t)(VALUE), \
sizeof(*(ADDR)) )
#endif /* PMIX_HAVE_ATOMIC_MATH_32 || PMIX_HAVE_ATOMIC_MATH_64 */
@ -644,6 +652,8 @@ static inline intptr_t pmix_atomic_fetch_sub_ptr( volatile void* addr, void* del
*/
#include "src/atomics/sys/atomic_impl.h"
#endif /* !PMIX_C_HAVE__ATOMIC */
END_C_DECLS
#endif /* PMIX_SYS_ATOMIC_H */
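
The signature changes throughout this header swap the old volatile integer pointers for pmix_atomic_* typedefs pulled in from the new src/include/pmix_stdatomic.h include above. That header is not part of this hunk, so the sketch below is only a guess at the pattern it follows: map to C11 _Atomic types when the C11 path is selected (the same PMIX_ASSEMBLY_BUILTIN guard used above), and fall back to the old volatile types otherwise. Only the typedef names come from this diff; the exact definitions are an assumption.

#include <stdint.h>
#include <stddef.h>

#if PMIX_ASSEMBLY_BUILTIN == PMIX_BUILTIN_C11
/* C11 path: the types are genuinely _Atomic, so atomic_stdc.h can hand
   them straight to <stdatomic.h> */
typedef _Atomic int32_t   pmix_atomic_int32_t;
typedef _Atomic int64_t   pmix_atomic_int64_t;
typedef _Atomic intptr_t  pmix_atomic_intptr_t;
typedef _Atomic size_t    pmix_atomic_size_t;
#else
/* legacy path: same layout as before, volatile plus inline asm/builtins */
typedef volatile int32_t  pmix_atomic_int32_t;
typedef volatile int64_t  pmix_atomic_int64_t;
typedef volatile intptr_t pmix_atomic_intptr_t;
typedef volatile size_t   pmix_atomic_size_t;
#endif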


@ -11,9 +11,9 @@
* Copyright (c) 2004-2005 The Regents of the University of California.
* All rights reserved.
* Copyright (c) 2010-2014 Cisco Systems, Inc. All rights reserved.
* Copyright (c) 2012-2017 Los Alamos National Security, LLC. All rights
* Copyright (c) 2012-2018 Los Alamos National Security, LLC. All rights
* reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
@ -41,7 +41,7 @@
#if PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_32
#if !defined(PMIX_HAVE_ATOMIC_MIN_32)
static inline int32_t pmix_atomic_fetch_min_32 (volatile int32_t *addr, int32_t value)
static inline int32_t pmix_atomic_fetch_min_32 (pmix_atomic_int32_t *addr, int32_t value)
{
int32_t old = *addr;
do {
@ -58,7 +58,7 @@ static inline int32_t pmix_atomic_fetch_min_32 (volatile int32_t *addr, int32_t
#endif /* PMIX_HAVE_ATOMIC_MIN_32 */
#if !defined(PMIX_HAVE_ATOMIC_MAX_32)
static inline int32_t pmix_atomic_fetch_max_32 (volatile int32_t *addr, int32_t value)
static inline int32_t pmix_atomic_fetch_max_32 (pmix_atomic_int32_t *addr, int32_t value)
{
int32_t old = *addr;
do {
@ -74,7 +74,7 @@ static inline int32_t pmix_atomic_fetch_max_32 (volatile int32_t *addr, int32_t
#endif /* PMIX_HAVE_ATOMIC_MAX_32 */
#define PMIX_ATOMIC_DEFINE_CMPXCG_OP(type, bits, operation, name) \
static inline type pmix_atomic_fetch_ ## name ## _ ## bits (volatile type *addr, type value) \
static inline type pmix_atomic_fetch_ ## name ## _ ## bits (pmix_atomic_ ## type *addr, type value) \
{ \
type oldval; \
do { \
@ -86,7 +86,7 @@ static inline int32_t pmix_atomic_fetch_max_32 (volatile int32_t *addr, int32_t
#if !defined(PMIX_HAVE_ATOMIC_SWAP_32)
#define PMIX_HAVE_ATOMIC_SWAP_32 1
static inline int32_t pmix_atomic_swap_32(volatile int32_t *addr,
static inline int32_t pmix_atomic_swap_32(pmix_atomic_int32_t *addr,
int32_t newval)
{
int32_t old = *addr;
@ -139,7 +139,7 @@ PMIX_ATOMIC_DEFINE_CMPXCG_OP(int32_t, 32, -, sub)
#if PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_64
#if !defined(PMIX_HAVE_ATOMIC_MIN_64)
static inline int64_t pmix_atomic_fetch_min_64 (volatile int64_t *addr, int64_t value)
static inline int64_t pmix_atomic_fetch_min_64 (pmix_atomic_int64_t *addr, int64_t value)
{
int64_t old = *addr;
do {
@ -156,7 +156,7 @@ static inline int64_t pmix_atomic_fetch_min_64 (volatile int64_t *addr, int64_t
#endif /* PMIX_HAVE_ATOMIC_MIN_64 */
#if !defined(PMIX_HAVE_ATOMIC_MAX_64)
static inline int64_t pmix_atomic_fetch_max_64 (volatile int64_t *addr, int64_t value)
static inline int64_t pmix_atomic_fetch_max_64 (pmix_atomic_int64_t *addr, int64_t value)
{
int64_t old = *addr;
do {
@ -173,7 +173,7 @@ static inline int64_t pmix_atomic_fetch_max_64 (volatile int64_t *addr, int64_t
#if !defined(PMIX_HAVE_ATOMIC_SWAP_64)
#define PMIX_HAVE_ATOMIC_SWAP_64 1
static inline int64_t pmix_atomic_swap_64(volatile int64_t *addr,
static inline int64_t pmix_atomic_swap_64(pmix_atomic_int64_t *addr,
int64_t newval)
{
int64_t old = *addr;
@ -236,15 +236,15 @@ PMIX_ATOMIC_DEFINE_CMPXCG_OP(int64_t, 64, -, sub)
#if PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_32 && PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_64
#define PMIX_ATOMIC_DEFINE_CMPXCG_XX(semantics) \
static inline bool \
pmix_atomic_compare_exchange_strong ## semantics ## xx (volatile void* addr, void *oldval, \
pmix_atomic_compare_exchange_strong ## semantics ## xx (pmix_atomic_intptr_t* addr, intptr_t *oldval, \
int64_t newval, const size_t length) \
{ \
switch (length) { \
case 4: \
return pmix_atomic_compare_exchange_strong_32 ((volatile int32_t *) addr, \
return pmix_atomic_compare_exchange_strong_32 ((pmix_atomic_int32_t *) addr, \
(int32_t *) oldval, (int32_t) newval); \
case 8: \
return pmix_atomic_compare_exchange_strong_64 ((volatile int64_t *) addr, \
return pmix_atomic_compare_exchange_strong_64 ((pmix_atomic_int64_t *) addr, \
(int64_t *) oldval, (int64_t) newval); \
} \
abort(); \
@ -252,12 +252,12 @@ PMIX_ATOMIC_DEFINE_CMPXCG_OP(int64_t, 64, -, sub)
#elif PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_32
#define PMIX_ATOMIC_DEFINE_CMPXCG_XX(semantics) \
static inline bool \
pmix_atomic_compare_exchange_strong ## semantics ## xx (volatile void* addr, void *oldval, \
pmix_atomic_compare_exchange_strong ## semantics ## xx (pmix_atomic_intptr_t* addr, intptr_t *oldval, \
int64_t newval, const size_t length) \
{ \
switch (length) { \
case 4: \
return pmix_atomic_compare_exchange_strong_32 ((volatile int32_t *) addr, \
return pmix_atomic_compare_exchange_strong_32 ((pmix_atomic_int32_t *) addr, \
(int32_t *) oldval, (int32_t) newval); \
} \
abort(); \
@ -273,16 +273,16 @@ PMIX_ATOMIC_DEFINE_CMPXCG_XX(_rel_)
#if SIZEOF_VOID_P == 4 && PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_32
#define PMIX_ATOMIC_DEFINE_CMPXCG_PTR_XX(semantics) \
static inline bool \
pmix_atomic_compare_exchange_strong ## semantics ## ptr (volatile void* addr, void *oldval, void *newval) \
pmix_atomic_compare_exchange_strong ## semantics ## ptr (pmix_atomic_intptr_t* addr, intptr_t *oldval, intptr_t newval) \
{ \
return pmix_atomic_compare_exchange_strong_32 ((volatile int32_t *) addr, (int32_t *) oldval, (int32_t) newval); \
return pmix_atomic_compare_exchange_strong_32 ((pmix_atomic_int32_t *) addr, (int32_t *) oldval, (int32_t) newval); \
}
#elif SIZEOF_VOID_P == 8 && PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_64
#define PMIX_ATOMIC_DEFINE_CMPXCG_PTR_XX(semantics) \
static inline bool \
pmix_atomic_compare_exchange_strong ## semantics ## ptr (volatile void* addr, void *oldval, void *newval) \
pmix_atomic_compare_exchange_strong ## semantics ## ptr (pmix_atomic_intptr_t* addr, intptr_t *oldval, intptr_t newval) \
{ \
return pmix_atomic_compare_exchange_strong_64 ((volatile int64_t *) addr, (int64_t *) oldval, (int64_t) newval); \
return pmix_atomic_compare_exchange_strong_64 ((pmix_atomic_int64_t *) addr, (int64_t *) oldval, (int64_t) newval); \
}
#else
#error "Can not define pmix_atomic_compare_exchange_strong_ptr with existing atomics"
@ -298,9 +298,9 @@ PMIX_ATOMIC_DEFINE_CMPXCG_PTR_XX(_rel_)
#if (PMIX_HAVE_ATOMIC_SWAP_32 || PMIX_HAVE_ATOMIC_SWAP_64)
#if SIZEOF_VOID_P == 4 && PMIX_HAVE_ATOMIC_SWAP_32
#define pmix_atomic_swap_ptr(addr, value) (void *) pmix_atomic_swap_32((int32_t *) addr, (int32_t) value)
#define pmix_atomic_swap_ptr(addr, value) (intptr_t) pmix_atomic_swap_32((pmix_atomic_int32_t *) addr, (int32_t) value)
#elif SIZEOF_VOID_P == 8 && PMIX_HAVE_ATOMIC_SWAP_64
#define pmix_atomic_swap_ptr(addr, value) (void *) pmix_atomic_swap_64((int64_t *) addr, (int64_t) value)
#define pmix_atomic_swap_ptr(addr, value) (intptr_t) pmix_atomic_swap_64((pmix_atomic_int64_t *) addr, (int64_t) value)
#endif
#endif /* (PMIX_HAVE_ATOMIC_SWAP_32 || PMIX_HAVE_ATOMIC_SWAP_64) */
@ -309,15 +309,15 @@ PMIX_ATOMIC_DEFINE_CMPXCG_PTR_XX(_rel_)
#if SIZEOF_VOID_P == 4 && PMIX_HAVE_ATOMIC_LLSC_32
#define pmix_atomic_ll_ptr(addr, ret) pmix_atomic_ll_32((volatile int32_t *) (addr), ret)
#define pmix_atomic_sc_ptr(addr, value, ret) pmix_atomic_sc_32((volatile int32_t *) (addr), (intptr_t) (value), ret)
#define pmix_atomic_ll_ptr(addr, ret) pmix_atomic_ll_32((pmix_atomic_int32_t *) (addr), ret)
#define pmix_atomic_sc_ptr(addr, value, ret) pmix_atomic_sc_32((pmix_atomic_int32_t *) (addr), (intptr_t) (value), ret)
#define PMIX_HAVE_ATOMIC_LLSC_PTR 1
#elif SIZEOF_VOID_P == 8 && PMIX_HAVE_ATOMIC_LLSC_64
#define pmix_atomic_ll_ptr(addr, ret) pmix_atomic_ll_64((volatile int64_t *) (addr), ret)
#define pmix_atomic_sc_ptr(addr, value, ret) pmix_atomic_sc_64((volatile int64_t *) (addr), (intptr_t) (value), ret)
#define pmix_atomic_ll_ptr(addr, ret) pmix_atomic_ll_64((pmix_atomic_int64_t *) (addr), ret)
#define pmix_atomic_sc_ptr(addr, value, ret) pmix_atomic_sc_64((pmix_atomic_int64_t *) (addr), (intptr_t) (value), ret)
#define PMIX_HAVE_ATOMIC_LLSC_PTR 1
@ -332,18 +332,18 @@ PMIX_ATOMIC_DEFINE_CMPXCG_PTR_XX(_rel_)
#if PMIX_HAVE_ATOMIC_MATH_32 || PMIX_HAVE_ATOMIC_MATH_64
static inline void
pmix_atomic_add_xx(volatile void* addr, int32_t value, size_t length)
pmix_atomic_add_xx(pmix_atomic_intptr_t* addr, int32_t value, size_t length)
{
switch( length ) {
#if PMIX_HAVE_ATOMIC_ADD_32
case 4:
(void) pmix_atomic_fetch_add_32( (volatile int32_t*)addr, (int32_t)value );
(void) pmix_atomic_fetch_add_32( (pmix_atomic_int32_t*)addr, (int32_t)value );
break;
#endif /* PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_32 */
#if PMIX_HAVE_ATOMIC_ADD_64
case 8:
(void) pmix_atomic_fetch_add_64( (volatile int64_t*)addr, (int64_t)value );
(void) pmix_atomic_fetch_add_64( (pmix_atomic_int64_t*)addr, (int64_t)value );
break;
#endif /* PMIX_HAVE_ATOMIC_ADD_64 */
default:
@ -355,18 +355,18 @@ static inline void
static inline void
pmix_atomic_sub_xx(volatile void* addr, int32_t value, size_t length)
pmix_atomic_sub_xx(pmix_atomic_intptr_t* addr, int32_t value, size_t length)
{
switch( length ) {
#if PMIX_HAVE_ATOMIC_SUB_32
case 4:
(void) pmix_atomic_fetch_sub_32( (volatile int32_t*)addr, (int32_t)value );
(void) pmix_atomic_fetch_sub_32( (pmix_atomic_int32_t*)addr, (int32_t)value );
break;
#endif /* PMIX_HAVE_ATOMIC_SUB_32 */
#if PMIX_HAVE_ATOMIC_SUB_64
case 8:
(void) pmix_atomic_fetch_sub_64( (volatile int64_t*)addr, (int64_t)value );
(void) pmix_atomic_fetch_sub_64( (pmix_atomic_int64_t*)addr, (int64_t)value );
break;
#endif /* PMIX_HAVE_ATOMIC_SUB_64 */
default:
@ -377,7 +377,7 @@ pmix_atomic_sub_xx(volatile void* addr, int32_t value, size_t length)
}
#define PMIX_ATOMIC_DEFINE_OP_FETCH(op, operation, type, ptr_type, suffix) \
static inline type pmix_atomic_ ## op ## _fetch_ ## suffix (volatile ptr_type *addr, type value) \
static inline type pmix_atomic_ ## op ## _fetch_ ## suffix (pmix_atomic_ ## ptr_type *addr, type value) \
{ \
return pmix_atomic_fetch_ ## op ## _ ## suffix (addr, value) operation value; \
}
@ -388,13 +388,13 @@ PMIX_ATOMIC_DEFINE_OP_FETCH(or, |, int32_t, int32_t, 32)
PMIX_ATOMIC_DEFINE_OP_FETCH(xor, ^, int32_t, int32_t, 32)
PMIX_ATOMIC_DEFINE_OP_FETCH(sub, -, int32_t, int32_t, 32)
static inline int32_t pmix_atomic_min_fetch_32 (volatile int32_t *addr, int32_t value)
static inline int32_t pmix_atomic_min_fetch_32 (pmix_atomic_int32_t *addr, int32_t value)
{
int32_t old = pmix_atomic_fetch_min_32 (addr, value);
return old <= value ? old : value;
}
static inline int32_t pmix_atomic_max_fetch_32 (volatile int32_t *addr, int32_t value)
static inline int32_t pmix_atomic_max_fetch_32 (pmix_atomic_int32_t *addr, int32_t value)
{
int32_t old = pmix_atomic_fetch_max_32 (addr, value);
return old >= value ? old : value;
@ -407,13 +407,13 @@ PMIX_ATOMIC_DEFINE_OP_FETCH(or, |, int64_t, int64_t, 64)
PMIX_ATOMIC_DEFINE_OP_FETCH(xor, ^, int64_t, int64_t, 64)
PMIX_ATOMIC_DEFINE_OP_FETCH(sub, -, int64_t, int64_t, 64)
static inline int64_t pmix_atomic_min_fetch_64 (volatile int64_t *addr, int64_t value)
static inline int64_t pmix_atomic_min_fetch_64 (pmix_atomic_int64_t *addr, int64_t value)
{
int64_t old = pmix_atomic_fetch_min_64 (addr, value);
return old <= value ? old : value;
}
static inline int64_t pmix_atomic_max_fetch_64 (volatile int64_t *addr, int64_t value)
static inline int64_t pmix_atomic_max_fetch_64 (pmix_atomic_int64_t *addr, int64_t value)
{
int64_t old = pmix_atomic_fetch_max_64 (addr, value);
return old >= value ? old : value;
@ -421,52 +421,52 @@ static inline int64_t pmix_atomic_max_fetch_64 (volatile int64_t *addr, int64_t
#endif
static inline intptr_t pmix_atomic_fetch_add_ptr( volatile void* addr,
static inline intptr_t pmix_atomic_fetch_add_ptr( pmix_atomic_intptr_t* addr,
void* delta )
{
#if SIZEOF_VOID_P == 4 && PMIX_HAVE_ATOMIC_ADD_32
return pmix_atomic_fetch_add_32((int32_t*) addr, (unsigned long) delta);
return pmix_atomic_fetch_add_32((pmix_atomic_int32_t*) addr, (unsigned long) delta);
#elif SIZEOF_VOID_P == 8 && PMIX_HAVE_ATOMIC_ADD_64
return pmix_atomic_fetch_add_64((int64_t*) addr, (unsigned long) delta);
return pmix_atomic_fetch_add_64((pmix_atomic_int64_t*) addr, (unsigned long) delta);
#else
abort ();
return 0;
#endif
}
static inline intptr_t pmix_atomic_add_fetch_ptr( volatile void* addr,
static inline intptr_t pmix_atomic_add_fetch_ptr( pmix_atomic_intptr_t* addr,
void* delta )
{
#if SIZEOF_VOID_P == 4 && PMIX_HAVE_ATOMIC_ADD_32
return pmix_atomic_add_fetch_32((int32_t*) addr, (unsigned long) delta);
return pmix_atomic_add_fetch_32((pmix_atomic_int32_t*) addr, (unsigned long) delta);
#elif SIZEOF_VOID_P == 8 && PMIX_HAVE_ATOMIC_ADD_64
return pmix_atomic_add_fetch_64((int64_t*) addr, (unsigned long) delta);
return pmix_atomic_add_fetch_64((pmix_atomic_int64_t*) addr, (unsigned long) delta);
#else
abort ();
return 0;
#endif
}
static inline intptr_t pmix_atomic_fetch_sub_ptr( volatile void* addr,
static inline intptr_t pmix_atomic_fetch_sub_ptr( pmix_atomic_intptr_t* addr,
void* delta )
{
#if SIZEOF_VOID_P == 4 && PMIX_HAVE_ATOMIC_SUB_32
return pmix_atomic_fetch_sub_32((int32_t*) addr, (unsigned long) delta);
return pmix_atomic_fetch_sub_32((pmix_atomic_int32_t*) addr, (unsigned long) delta);
#elif SIZEOF_VOID_P == 8 && PMIX_HAVE_ATOMIC_SUB_32
return pmix_atomic_fetch_sub_64((int64_t*) addr, (unsigned long) delta);
return pmix_atomic_fetch_sub_64((pmix_atomic_int64_t*) addr, (unsigned long) delta);
#else
abort();
return 0;
#endif
}
static inline intptr_t pmix_atomic_sub_fetch_ptr( volatile void* addr,
static inline intptr_t pmix_atomic_sub_fetch_ptr( pmix_atomic_intptr_t* addr,
void* delta )
{
#if SIZEOF_VOID_P == 4 && PMIX_HAVE_ATOMIC_SUB_32
return pmix_atomic_sub_fetch_32((int32_t*) addr, (unsigned long) delta);
return pmix_atomic_sub_fetch_32((pmix_atomic_int32_t*) addr, (unsigned long) delta);
#elif SIZEOF_VOID_P == 8 && PMIX_HAVE_ATOMIC_SUB_32
return pmix_atomic_sub_fetch_64((int64_t*) addr, (unsigned long) delta);
return pmix_atomic_sub_fetch_64((pmix_atomic_int64_t*) addr, (unsigned long) delta);
#else
abort();
return 0;
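
The pmix_atomic_add()/pmix_atomic_sub() macros shown earlier capture the operand width with sizeof(*(ADDR)) and the _xx helpers here switch on that length to pick the 32- or 64-bit primitive. A minimal standalone sketch of the same dispatch pattern is below; the names are illustrative and the body is a plain (non-atomic) stand-in for the real fetch-add primitives, shown only to make the macro/helper split concrete.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static inline void demo_add_xx(void *addr, int32_t value, size_t length)
{
    switch (length) {
    case 4:
        *(int32_t *) addr += value;   /* stand-in for pmix_atomic_fetch_add_32 */
        break;
    case 8:
        *(int64_t *) addr += value;   /* stand-in for pmix_atomic_fetch_add_64 */
        break;
    default:
        abort();                      /* unsupported operand size */
    }
}

#define DEMO_ADD(ADDR, VALUE) demo_add_xx((void *)(ADDR), (int32_t)(VALUE), sizeof(*(ADDR)))

int main(void)
{
    int32_t a = 1;
    int64_t b = 1;
    DEMO_ADD(&a, 2);                  /* dispatches on sizeof(int32_t) == 4 */
    DEMO_ADD(&b, 3);                  /* dispatches on sizeof(int64_t) == 8 */
    printf("%d %lld\n", a, (long long) b);   /* prints: 3 4 */
    return 0;
}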


@ -0,0 +1,262 @@
/* -*- Mode: C; c-basic-offset:4 ; indent-tabs-mode:nil -*- */
/*
* Copyright (c) 2018 Los Alamos National Security, LLC. All rights
* reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
*
* $HEADER$
*/
/* This file provides shims between the pmix atomics interface and the C11 atomics interface. It
* is intended as the first step in moving to C11 atomics across the entire codebase. Once
* all officially supported compilers offer C11 atomics (GCC 4.9.0+, icc 2018+, pgi, xlc, etc.), this
* shim will go away and the codebase will be updated to use C11's atomic support directly.
* This shim duplicates some functions already present in atomic_impl.h because atomic_impl.h is
* not included when C11 atomics are in use; avoiding the duplicate definitions would require more
* #ifdefs than it is worth. */
#if !defined(PMIX_ATOMIC_STDC_H)
#define PMIX_ATOMIC_STDC_H
#include <stdatomic.h>
#include <stdint.h>
#include "src/include/pmix_stdint.h"
#define PMIX_HAVE_ATOMIC_MEM_BARRIER 1
#define PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_32 1
#define PMIX_HAVE_ATOMIC_SWAP_32 1
#define PMIX_HAVE_ATOMIC_MATH_32 1
#define PMIX_HAVE_ATOMIC_ADD_32 1
#define PMIX_HAVE_ATOMIC_AND_32 1
#define PMIX_HAVE_ATOMIC_OR_32 1
#define PMIX_HAVE_ATOMIC_XOR_32 1
#define PMIX_HAVE_ATOMIC_SUB_32 1
#define PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_64 1
#define PMIX_HAVE_ATOMIC_SWAP_64 1
#define PMIX_HAVE_ATOMIC_MATH_64 1
#define PMIX_HAVE_ATOMIC_ADD_64 1
#define PMIX_HAVE_ATOMIC_AND_64 1
#define PMIX_HAVE_ATOMIC_OR_64 1
#define PMIX_HAVE_ATOMIC_XOR_64 1
#define PMIX_HAVE_ATOMIC_SUB_64 1
#define PMIX_HAVE_ATOMIC_LLSC_32 0
#define PMIX_HAVE_ATOMIC_LLSC_64 0
#define PMIX_HAVE_ATOMIC_LLSC_PTR 0
#define PMIX_HAVE_ATOMIC_MIN_32 1
#define PMIX_HAVE_ATOMIC_MAX_32 1
#define PMIX_HAVE_ATOMIC_MIN_64 1
#define PMIX_HAVE_ATOMIC_MAX_64 1
#define PMIX_HAVE_ATOMIC_SPINLOCKS 1
static inline void pmix_atomic_mb (void)
{
atomic_thread_fence (memory_order_seq_cst);
}
static inline void pmix_atomic_wmb (void)
{
atomic_thread_fence (memory_order_release);
}
static inline void pmix_atomic_rmb (void)
{
atomic_thread_fence (memory_order_acquire);
}
#define pmix_atomic_compare_exchange_strong_32(addr, compare, value) atomic_compare_exchange_strong_explicit (addr, compare, value, memory_order_relaxed, memory_order_relaxed)
#define pmix_atomic_compare_exchange_strong_64(addr, compare, value) atomic_compare_exchange_strong_explicit (addr, compare, value, memory_order_relaxed, memory_order_relaxed)
#define pmix_atomic_compare_exchange_strong_ptr(addr, compare, value) atomic_compare_exchange_strong_explicit (addr, compare, value, memory_order_relaxed, memory_order_relaxed)
#define pmix_atomic_compare_exchange_strong_acq_32(addr, compare, value) atomic_compare_exchange_strong_explicit (addr, compare, value, memory_order_acquire, memory_order_relaxed)
#define pmix_atomic_compare_exchange_strong_acq_64(addr, compare, value) atomic_compare_exchange_strong_explicit (addr, compare, value, memory_order_acquire, memory_order_relaxed)
#define pmix_atomic_compare_exchange_strong_acq_ptr(addr, compare, value) atomic_compare_exchange_strong_explicit (addr, compare, value, memory_order_acquire, memory_order_relaxed)
#define pmix_atomic_compare_exchange_strong_rel_32(addr, compare, value) atomic_compare_exchange_strong_explicit (addr, compare, value, memory_order_release, memory_order_relaxed)
#define pmix_atomic_compare_exchange_strong_rel_64(addr, compare, value) atomic_compare_exchange_strong_explicit (addr, compare, value, memory_order_release, memory_order_relaxed)
#define pmix_atomic_compare_exchange_strong_rel_ptr(addr, compare, value) atomic_compare_exchange_strong_explicit (addr, compare, value, memory_order_release, memory_order_relaxed)
#define pmix_atomic_compare_exchange_strong(addr, oldval, newval) atomic_compare_exchange_strong_explicit (addr, oldval, newval, memory_order_relaxed, memory_order_relaxed)
#define pmix_atomic_compare_exchange_strong_acq(addr, oldval, newval) atomic_compare_exchange_strong_explicit (addr, oldval, newval, memory_order_acquire, memory_order_relaxed)
#define pmix_atomic_compare_exchange_strong_rel(addr, oldval, newval) atomic_compare_exchange_strong_explicit (addr, oldval, newval, memory_order_release, memory_order_relaxed)
#define pmix_atomic_swap_32(addr, value) atomic_exchange_explicit (addr, value, memory_order_relaxed)
#define pmix_atomic_swap_64(addr, value) atomic_exchange_explicit (addr, value, memory_order_relaxed)
#define pmix_atomic_swap_ptr(addr, value) atomic_exchange_explicit (addr, value, memory_order_relaxed)
#define PMIX_ATOMIC_STDC_DEFINE_FETCH_OP(op, bits, type, operator) \
static inline type pmix_atomic_fetch_ ## op ##_## bits (pmix_atomic_ ## type *addr, type value) \
{ \
return atomic_fetch_ ## op ## _explicit (addr, value, memory_order_relaxed); \
} \
\
static inline type pmix_atomic_## op ## _fetch_ ## bits (pmix_atomic_ ## type *addr, type value) \
{ \
return atomic_fetch_ ## op ## _explicit (addr, value, memory_order_relaxed) operator value; \
}
PMIX_ATOMIC_STDC_DEFINE_FETCH_OP(add, 32, int32_t, +)
PMIX_ATOMIC_STDC_DEFINE_FETCH_OP(add, 64, int64_t, +)
PMIX_ATOMIC_STDC_DEFINE_FETCH_OP(add, size_t, size_t, +)
PMIX_ATOMIC_STDC_DEFINE_FETCH_OP(sub, 32, int32_t, -)
PMIX_ATOMIC_STDC_DEFINE_FETCH_OP(sub, 64, int64_t, -)
PMIX_ATOMIC_STDC_DEFINE_FETCH_OP(sub, size_t, size_t, -)
PMIX_ATOMIC_STDC_DEFINE_FETCH_OP(or, 32, int32_t, |)
PMIX_ATOMIC_STDC_DEFINE_FETCH_OP(or, 64, int64_t, |)
PMIX_ATOMIC_STDC_DEFINE_FETCH_OP(xor, 32, int32_t, ^)
PMIX_ATOMIC_STDC_DEFINE_FETCH_OP(xor, 64, int64_t, ^)
PMIX_ATOMIC_STDC_DEFINE_FETCH_OP(and, 32, int32_t, &)
PMIX_ATOMIC_STDC_DEFINE_FETCH_OP(and, 64, int64_t, &)
#define pmix_atomic_add(addr, value) (void) atomic_fetch_add_explicit (addr, value, memory_order_relaxed)
static inline int32_t pmix_atomic_fetch_min_32 (pmix_atomic_int32_t *addr, int32_t value)
{
int32_t old = *addr;
do {
if (old <= value) {
break;
}
} while (!pmix_atomic_compare_exchange_strong_32 (addr, &old, value));
return old;
}
static inline int32_t pmix_atomic_fetch_max_32 (pmix_atomic_int32_t *addr, int32_t value)
{
int32_t old = *addr;
do {
if (old >= value) {
break;
}
} while (!pmix_atomic_compare_exchange_strong_32 (addr, &old, value));
return old;
}
static inline int64_t pmix_atomic_fetch_min_64 (pmix_atomic_int64_t *addr, int64_t value)
{
int64_t old = *addr;
do {
if (old <= value) {
break;
}
} while (!pmix_atomic_compare_exchange_strong_64 (addr, &old, value));
return old;
}
static inline int64_t pmix_atomic_fetch_max_64 (pmix_atomic_int64_t *addr, int64_t value)
{
int64_t old = *addr;
do {
if (old >= value) {
break;
}
} while (!pmix_atomic_compare_exchange_strong_64 (addr, &old, value));
return old;
}
static inline int32_t pmix_atomic_min_fetch_32 (pmix_atomic_int32_t *addr, int32_t value)
{
int32_t old = pmix_atomic_fetch_min_32 (addr, value);
return old <= value ? old : value;
}
static inline int32_t pmix_atomic_max_fetch_32 (pmix_atomic_int32_t *addr, int32_t value)
{
int32_t old = pmix_atomic_fetch_max_32 (addr, value);
return old >= value ? old : value;
}
static inline int64_t pmix_atomic_min_fetch_64 (pmix_atomic_int64_t *addr, int64_t value)
{
int64_t old = pmix_atomic_fetch_min_64 (addr, value);
return old <= value ? old : value;
}
static inline int64_t pmix_atomic_max_fetch_64 (pmix_atomic_int64_t *addr, int64_t value)
{
int64_t old = pmix_atomic_fetch_max_64 (addr, value);
return old >= value ? old : value;
}
#define PMIX_ATOMIC_LOCK_UNLOCKED false
#define PMIX_ATOMIC_LOCK_LOCKED true
#define PMIX_ATOMIC_LOCK_INIT ATOMIC_FLAG_INIT
typedef atomic_flag pmix_atomic_lock_t;
/*
* Lock initialization function. It sets the lock to UNLOCKED.
*/
static inline void pmix_atomic_lock_init (pmix_atomic_lock_t *lock, bool value)
{
atomic_flag_clear (lock);
}
static inline int pmix_atomic_trylock (pmix_atomic_lock_t *lock)
{
return (int) atomic_flag_test_and_set (lock);
}
static inline void pmix_atomic_lock(pmix_atomic_lock_t *lock)
{
while (pmix_atomic_trylock (lock)) {
}
}
static inline void pmix_atomic_unlock (pmix_atomic_lock_t *lock)
{
atomic_flag_clear (lock);
}
#if PMIX_HAVE_C11_CSWAP_INT128
/* the C11 atomic compare-exchange is lock free so use it */
#define pmix_atomic_compare_exchange_strong_128 atomic_compare_exchange_strong
#define PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_128 1
#elif PMIX_HAVE_SYNC_BUILTIN_CSWAP_INT128
/* fall back on the __sync builtin if available since it will emit the expected instruction on x86_64 (cmpxchg16b) */
__pmix_attribute_always_inline__
static inline bool pmix_atomic_compare_exchange_strong_128 (pmix_atomic_int128_t *addr,
pmix_int128_t *oldval, pmix_int128_t newval)
{
pmix_int128_t prev = __sync_val_compare_and_swap (addr, *oldval, newval);
bool ret = prev == *oldval;
*oldval = prev;
return ret;
}
#define PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_128 1
#else
#define PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_128 0
#endif
#endif /* !defined(PMIX_ATOMIC_STDC_H) */
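
Taken together, this shim means that under C11 a pmix_atomic_int32_t is simply an _Atomic int32_t and every wrapper is a thin relaxed call into <stdatomic.h>. Below is a self-contained usage sketch; the typedef and the fetch-add wrapper are restated locally under that assumption so the snippet compiles on its own, mirroring what PMIX_ATOMIC_STDC_DEFINE_FETCH_OP generates.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

typedef _Atomic int32_t pmix_atomic_int32_t;   /* assumed C11 definition */

static inline int32_t pmix_atomic_fetch_add_32(pmix_atomic_int32_t *addr, int32_t value)
{
    /* relaxed fetch-add, as in the shim above */
    return atomic_fetch_add_explicit(addr, value, memory_order_relaxed);
}

int main(void)
{
    pmix_atomic_int32_t refcount = 1;

    int32_t before = pmix_atomic_fetch_add_32(&refcount, 1);
    printf("before=%d after=%d\n", (int) before,
           (int) atomic_load_explicit(&refcount, memory_order_relaxed));  /* 1, 2 */
    return 0;
}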


@ -4,7 +4,7 @@
* reserved.
* Copyright (c) 2017 Research Organization for Information Science
* and Technology (RIST). All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*/
@ -85,13 +85,13 @@
#elif PMIX_ASSEMBLY_ARCH == PMIX_S390
#define __NR_process_vm_readv 340
#define __NR_process_vm_writev 341
#define __NR_process_vm_readv 340
#define __NR_process_vm_writev 341
#elif PMIX_ASSEMBLY_ARCH == PMIX_S390X
#define __NR_process_vm_readv 340
#define __NR_process_vm_writev 341
#define __NR_process_vm_readv 340
#define __NR_process_vm_writev 341
#else
#error "Unsupported architecture for process_vm_readv and process_vm_writev syscalls"


@ -12,7 +12,7 @@
# Copyright (c) 2011 Sandia National Laboratories. All rights reserved.
# Copyright (c) 2016 Los Alamos National Security, LLC. All rights
# reserved.
# Copyright (c) 2017-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2017 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow


@ -11,11 +11,13 @@
* Copyright (c) 2004-2005 The Regents of the University of California.
* All rights reserved.
* Copyright (c) 2011 Sandia National Laboratories. All rights reserved.
* Copyright (c) 2014-2017 Los Alamos National Security, LLC. All rights
* Copyright (c) 2014-2018 Los Alamos National Security, LLC. All rights
* reserved.
* Copyright (c) 2016-2017 Research Organization for Information Science
* and Technology (RIST). All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* Copyright (c) 2018 Triad National Security, LLC. All rights
* reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
@ -58,7 +60,14 @@ static inline void pmix_atomic_mb(void)
static inline void pmix_atomic_rmb(void)
{
#if PMIX_ASSEMBLY_ARCH == PMIX_X86_64
/* work around a bug in older gcc versions where ACQUIRE seems to get
* treated as a no-op instead of being equivalent to
* __asm__ __volatile__("": : :"memory") */
__atomic_thread_fence (__ATOMIC_SEQ_CST);
#else
__atomic_thread_fence (__ATOMIC_ACQUIRE);
#endif
}
static inline void pmix_atomic_wmb(void)
@ -77,103 +86,103 @@ static inline void pmix_atomic_wmb(void)
/*
* Suppress numerous (spurious ?) warnings from Oracle Studio compilers
* see https://community.oracle.com/thread/3968347
*/
*/
#if defined(__SUNPRO_C) || defined(__SUNPRO_CC)
#pragma error_messages(off, E_ARG_INCOMPATIBLE_WITH_ARG_L)
#endif
static inline bool pmix_atomic_compare_exchange_strong_acq_32 (volatile int32_t *addr, int32_t *oldval, int32_t newval)
static inline bool pmix_atomic_compare_exchange_strong_acq_32 (pmix_atomic_int32_t *addr, int32_t *oldval, int32_t newval)
{
return __atomic_compare_exchange_n (addr, oldval, newval, false, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}
static inline bool pmix_atomic_compare_exchange_strong_rel_32 (volatile int32_t *addr, int32_t *oldval, int32_t newval)
static inline bool pmix_atomic_compare_exchange_strong_rel_32 (pmix_atomic_int32_t *addr, int32_t *oldval, int32_t newval)
{
return __atomic_compare_exchange_n (addr, oldval, newval, false, __ATOMIC_RELEASE, __ATOMIC_RELAXED);
}
static inline bool pmix_atomic_compare_exchange_strong_32 (volatile int32_t *addr, int32_t *oldval, int32_t newval)
static inline bool pmix_atomic_compare_exchange_strong_32 (pmix_atomic_int32_t *addr, int32_t *oldval, int32_t newval)
{
return __atomic_compare_exchange_n (addr, oldval, newval, false, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}
static inline int32_t pmix_atomic_swap_32 (volatile int32_t *addr, int32_t newval)
static inline int32_t pmix_atomic_swap_32 (pmix_atomic_int32_t *addr, int32_t newval)
{
int32_t oldval;
__atomic_exchange (addr, &newval, &oldval, __ATOMIC_RELAXED);
return oldval;
}
static inline int32_t pmix_atomic_fetch_add_32(volatile int32_t *addr, int32_t delta)
static inline int32_t pmix_atomic_fetch_add_32(pmix_atomic_int32_t *addr, int32_t delta)
{
return __atomic_fetch_add (addr, delta, __ATOMIC_RELAXED);
}
static inline int32_t pmix_atomic_fetch_and_32(volatile int32_t *addr, int32_t value)
static inline int32_t pmix_atomic_fetch_and_32(pmix_atomic_int32_t *addr, int32_t value)
{
return __atomic_fetch_and (addr, value, __ATOMIC_RELAXED);
}
static inline int32_t pmix_atomic_fetch_or_32(volatile int32_t *addr, int32_t value)
static inline int32_t pmix_atomic_fetch_or_32(pmix_atomic_int32_t *addr, int32_t value)
{
return __atomic_fetch_or (addr, value, __ATOMIC_RELAXED);
}
static inline int32_t pmix_atomic_fetch_xor_32(volatile int32_t *addr, int32_t value)
static inline int32_t pmix_atomic_fetch_xor_32(pmix_atomic_int32_t *addr, int32_t value)
{
return __atomic_fetch_xor (addr, value, __ATOMIC_RELAXED);
}
static inline int32_t pmix_atomic_fetch_sub_32(volatile int32_t *addr, int32_t delta)
static inline int32_t pmix_atomic_fetch_sub_32(pmix_atomic_int32_t *addr, int32_t delta)
{
return __atomic_fetch_sub (addr, delta, __ATOMIC_RELAXED);
}
static inline bool pmix_atomic_compare_exchange_strong_acq_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_acq_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
return __atomic_compare_exchange_n (addr, oldval, newval, false, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}
static inline bool pmix_atomic_compare_exchange_strong_rel_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_rel_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
return __atomic_compare_exchange_n (addr, oldval, newval, false, __ATOMIC_RELEASE, __ATOMIC_RELAXED);
}
static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
return __atomic_compare_exchange_n (addr, oldval, newval, false, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}
static inline int64_t pmix_atomic_swap_64 (volatile int64_t *addr, int64_t newval)
static inline int64_t pmix_atomic_swap_64 (pmix_atomic_int64_t *addr, int64_t newval)
{
int64_t oldval;
__atomic_exchange (addr, &newval, &oldval, __ATOMIC_RELAXED);
return oldval;
}
static inline int64_t pmix_atomic_fetch_add_64(volatile int64_t *addr, int64_t delta)
static inline int64_t pmix_atomic_fetch_add_64(pmix_atomic_int64_t *addr, int64_t delta)
{
return __atomic_fetch_add (addr, delta, __ATOMIC_RELAXED);
}
static inline int64_t pmix_atomic_fetch_and_64(volatile int64_t *addr, int64_t value)
static inline int64_t pmix_atomic_fetch_and_64(pmix_atomic_int64_t *addr, int64_t value)
{
return __atomic_fetch_and (addr, value, __ATOMIC_RELAXED);
}
static inline int64_t pmix_atomic_fetch_or_64(volatile int64_t *addr, int64_t value)
static inline int64_t pmix_atomic_fetch_or_64(pmix_atomic_int64_t *addr, int64_t value)
{
return __atomic_fetch_or (addr, value, __ATOMIC_RELAXED);
}
static inline int64_t pmix_atomic_fetch_xor_64(volatile int64_t *addr, int64_t value)
static inline int64_t pmix_atomic_fetch_xor_64(pmix_atomic_int64_t *addr, int64_t value)
{
return __atomic_fetch_xor (addr, value, __ATOMIC_RELAXED);
}
static inline int64_t pmix_atomic_fetch_sub_64(volatile int64_t *addr, int64_t delta)
static inline int64_t pmix_atomic_fetch_sub_64(pmix_atomic_int64_t *addr, int64_t delta)
{
return __atomic_fetch_sub (addr, delta, __ATOMIC_RELAXED);
}
@ -182,7 +191,7 @@ static inline int64_t pmix_atomic_fetch_sub_64(volatile int64_t *addr, int64_t d
#define PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_128 1
static inline bool pmix_atomic_compare_exchange_strong_128 (volatile pmix_int128_t *addr,
static inline bool pmix_atomic_compare_exchange_strong_128 (pmix_atomic_int128_t *addr,
pmix_int128_t *oldval, pmix_int128_t newval)
{
return __atomic_compare_exchange_n (addr, oldval, newval, false,
@ -195,7 +204,7 @@ static inline bool pmix_atomic_compare_exchange_strong_128 (volatile pmix_int128
/* __atomic version is not lock-free so use legacy __sync version */
static inline bool pmix_atomic_compare_exchange_strong_128 (volatile pmix_int128_t *addr,
static inline bool pmix_atomic_compare_exchange_strong_128 (pmix_atomic_pmix_int128_t *addr,
pmix_int128_t *oldval, pmix_int128_t newval)
{
pmix_int128_t prev = __sync_val_compare_and_swap (addr, *oldval, newval);
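
One property these compare-exchange wrappers rely on: when __atomic_compare_exchange_n fails, it writes the value it observed back into *oldval, which is why the fetch_min/fetch_max retry loops shown earlier never re-read the location themselves. A standalone illustration of that idiom (demo code, not part of the diff):

#include <stdint.h>
#include <stdio.h>

static int32_t fetch_min_demo(int32_t *addr, int32_t value)
{
    int32_t old = *addr;
    do {
        if (old <= value) {
            break;                      /* already the minimum, nothing to store */
        }
        /* on failure, 'old' is refreshed with the current contents of *addr */
    } while (!__atomic_compare_exchange_n(addr, &old, value, false,
                                          __ATOMIC_RELAXED, __ATOMIC_RELAXED));
    return old;                         /* value observed before the update */
}

int main(void)
{
    int32_t x = 10;
    printf("%d %d\n", fetch_min_demo(&x, 7), x);   /* prints: 10 7 */
    return 0;
}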


@ -9,7 +9,7 @@
# University of Stuttgart. All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
# All rights reserved.
# Copyright (c) 2017-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2017 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow


@ -13,9 +13,9 @@
* Copyright (c) 2007-2010 Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2015 Research Organization for Information Science
* and Technology (RIST). All rights reserved.
* Copyright (c) 2015-2017 Los Alamos National Security, LLC. All rights
* Copyright (c) 2015-2018 Los Alamos National Security, LLC. All rights
* reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
@ -85,7 +85,7 @@ static inline void pmix_atomic_isync(void)
*********************************************************************/
#if PMIX_GCC_INLINE_ASSEMBLY
static inline bool pmix_atomic_compare_exchange_strong_32 (volatile int32_t *addr, int32_t *oldval, int32_t newval)
static inline bool pmix_atomic_compare_exchange_strong_32 (pmix_atomic_int32_t *addr, int32_t *oldval, int32_t newval)
{
unsigned char ret;
__asm__ __volatile__ (
@ -107,15 +107,15 @@ static inline bool pmix_atomic_compare_exchange_strong_32 (volatile int32_t *add
#define PMIX_HAVE_ATOMIC_SWAP_32 1
static inline int32_t pmix_atomic_swap_32( volatile int32_t *addr,
int32_t newval)
static inline int32_t pmix_atomic_swap_32( pmix_atomic_int32_t *addr,
int32_t newval)
{
int32_t oldval;
__asm__ __volatile__("xchg %1, %0" :
"=r" (oldval), "=m" (*addr) :
"0" (newval), "m" (*addr) :
"memory");
"=r" (oldval), "=m" (*addr) :
"0" (newval), "m" (*addr) :
"memory");
return oldval;
}
@ -131,7 +131,7 @@ static inline int32_t pmix_atomic_swap_32( volatile int32_t *addr,
*
* Atomically adds @i to @v.
*/
static inline int32_t pmix_atomic_fetch_add_32(volatile int32_t* v, int i)
static inline int32_t pmix_atomic_fetch_add_32(pmix_atomic_int32_t* v, int i)
{
int ret = i;
__asm__ __volatile__(
@ -151,7 +151,7 @@ static inline int32_t pmix_atomic_fetch_add_32(volatile int32_t* v, int i)
*
* Atomically subtracts @i from @v.
*/
static inline int32_t pmix_atomic_fetch_sub_32(volatile int32_t* v, int i)
static inline int32_t pmix_atomic_fetch_sub_32(pmix_atomic_int32_t* v, int i)
{
int ret = -i;
__asm__ __volatile__(


@ -9,7 +9,6 @@
* University of Stuttgart. All rights reserved.
* Copyright (c) 2004-2005 The Regents of the University of California.
* All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow


@ -9,7 +9,7 @@
# University of Stuttgart. All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
# All rights reserved.
# Copyright (c) 2017-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2017 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow


@ -13,7 +13,7 @@
* Copyright (c) 2010-2017 IBM Corporation. All rights reserved.
* Copyright (c) 2015-2018 Los Alamos National Security, LLC. All rights
* reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
@ -145,7 +145,7 @@ void pmix_atomic_isync(void)
#define PMIX_ASM_VALUE64(x) x
#endif
static inline bool pmix_atomic_compare_exchange_strong_32 (volatile int32_t *addr, int32_t *oldval, int32_t newval)
static inline bool pmix_atomic_compare_exchange_strong_32 (pmix_atomic_int32_t *addr, int32_t *oldval, int32_t newval)
{
int32_t prev;
bool ret;
@ -171,7 +171,7 @@ static inline bool pmix_atomic_compare_exchange_strong_32 (volatile int32_t *add
* load the arguments to/from the stack. This sequence may cause the ll reservation to be cancelled. */
#define pmix_atomic_ll_32(addr, ret) \
do { \
volatile int32_t *_addr = (addr); \
pmix_atomic_int32_t *_addr = (addr); \
int32_t _ret; \
__asm__ __volatile__ ("lwarx %0, 0, %1 \n\t" \
: "=&r" (_ret) \
@ -182,7 +182,7 @@ static inline bool pmix_atomic_compare_exchange_strong_32 (volatile int32_t *add
#define pmix_atomic_sc_32(addr, value, ret) \
do { \
volatile int32_t *_addr = (addr); \
pmix_atomic_int32_t *_addr = (addr); \
int32_t _ret, _foo, _newval = (int32_t) value; \
\
__asm__ __volatile__ (" stwcx. %4, 0, %3 \n\t" \
@ -201,7 +201,7 @@ static inline bool pmix_atomic_compare_exchange_strong_32 (volatile int32_t *add
atomic_?mb can be inlined). Instead, we "inline" them by hand in
the assembly, meaning there is one function call overhead instead
of two */
static inline bool pmix_atomic_compare_exchange_strong_acq_32 (volatile int32_t *addr, int32_t *oldval, int32_t newval)
static inline bool pmix_atomic_compare_exchange_strong_acq_32 (pmix_atomic_int32_t *addr, int32_t *oldval, int32_t newval)
{
bool rc;
@ -212,13 +212,13 @@ static inline bool pmix_atomic_compare_exchange_strong_acq_32 (volatile int32_t
}
static inline bool pmix_atomic_compare_exchange_strong_rel_32 (volatile int32_t *addr, int32_t *oldval, int32_t newval)
static inline bool pmix_atomic_compare_exchange_strong_rel_32 (pmix_atomic_int32_t *addr, int32_t *oldval, int32_t newval)
{
pmix_atomic_wmb();
return pmix_atomic_compare_exchange_strong_32 (addr, oldval, newval);
}
static inline int32_t pmix_atomic_swap_32(volatile int32_t *addr, int32_t newval)
static inline int32_t pmix_atomic_swap_32(pmix_atomic_int32_t *addr, int32_t newval)
{
int32_t ret;
@ -240,7 +240,7 @@ static inline int32_t pmix_atomic_swap_32(volatile int32_t *addr, int32_t newval
#if PMIX_GCC_INLINE_ASSEMBLY
#define PMIX_ATOMIC_POWERPC_DEFINE_ATOMIC_64(type, instr) \
static inline int64_t pmix_atomic_fetch_ ## type ## _64(volatile int64_t* v, int64_t val) \
static inline int64_t pmix_atomic_fetch_ ## type ## _64(pmix_atomic_int64_t* v, int64_t val) \
{ \
int64_t t, old; \
\
@ -262,7 +262,7 @@ PMIX_ATOMIC_POWERPC_DEFINE_ATOMIC_64(or, or)
PMIX_ATOMIC_POWERPC_DEFINE_ATOMIC_64(xor, xor)
PMIX_ATOMIC_POWERPC_DEFINE_ATOMIC_64(sub, subf)
static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
int64_t prev;
bool ret;
@ -285,7 +285,7 @@ static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *add
#define pmix_atomic_ll_64(addr, ret) \
do { \
volatile int64_t *_addr = (addr); \
pmix_atomic_int64_t *_addr = (addr); \
int64_t _ret; \
__asm__ __volatile__ ("ldarx %0, 0, %1 \n\t" \
: "=&r" (_ret) \
@ -296,7 +296,7 @@ static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *add
#define pmix_atomic_sc_64(addr, value, ret) \
do { \
volatile int64_t *_addr = (addr); \
pmix_atomic_int64_t *_addr = (addr); \
int64_t _foo, _newval = (int64_t) value; \
int32_t _ret; \
\
@ -311,7 +311,7 @@ static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *add
ret = _ret; \
} while (0)
static inline int64_t pmix_atomic_swap_64(volatile int64_t *addr, int64_t newval)
static inline int64_t pmix_atomic_swap_64(pmix_atomic_int64_t *addr, int64_t newval)
{
int64_t ret;
@ -336,7 +336,7 @@ static inline int64_t pmix_atomic_swap_64(volatile int64_t *addr, int64_t newval
#if PMIX_GCC_INLINE_ASSEMBLY
static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
int64_t prev;
int ret;
@ -383,7 +383,7 @@ static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *add
atomic_?mb can be inlined). Instead, we "inline" them by hand in
the assembly, meaning there is one function call overhead instead
of two */
static inline bool pmix_atomic_compare_exchange_strong_acq_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_acq_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
bool rc;
@ -394,7 +394,7 @@ static inline bool pmix_atomic_compare_exchange_strong_acq_64 (volatile int64_t
}
static inline bool pmix_atomic_compare_exchange_strong_rel_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_rel_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
pmix_atomic_wmb();
return pmix_atomic_compare_exchange_strong_64 (addr, oldval, newval);
@ -402,7 +402,7 @@ static inline bool pmix_atomic_compare_exchange_strong_rel_64 (volatile int64_t
#define PMIX_ATOMIC_POWERPC_DEFINE_ATOMIC_32(type, instr) \
static inline int32_t pmix_atomic_fetch_ ## type ## _32(volatile int32_t* v, int val) \
static inline int32_t pmix_atomic_fetch_ ## type ## _32(pmix_atomic_int32_t* v, int val) \
{ \
int32_t t, old; \
\
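
The lwarx/stwcx. pairs above are the load-linked/store-conditional idiom; the comment about keeping pmix_atomic_ll_32/sc_32 as macros exists because spilling arguments to the stack between the two instructions can cancel the reservation and force a retry. A portable way to picture the same retry loop is C11's weak compare-exchange, which compilers typically lower to exactly this kind of ll/sc sequence on POWER and AArch64. This is an illustrative sketch, not PMIx code:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static int32_t ll_sc_style_fetch_add(_Atomic int32_t *addr, int32_t delta)
{
    int32_t old = atomic_load_explicit(addr, memory_order_relaxed);
    /* a weak CAS may fail spuriously -- just like a cancelled ll/sc
       reservation -- so it always sits in a retry loop */
    while (!atomic_compare_exchange_weak_explicit(addr, &old, old + delta,
                                                  memory_order_relaxed,
                                                  memory_order_relaxed)) {
        /* 'old' now holds the freshly observed value; retry */
    }
    return old;
}

int main(void)
{
    _Atomic int32_t counter = 40;
    printf("%d %d\n", ll_sc_style_fetch_add(&counter, 2), (int) counter); /* 40 42 */
    return 0;
}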


@ -9,7 +9,6 @@
* University of Stuttgart. All rights reserved.
* Copyright (c) 2004-2005 The Regents of the University of California.
* All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow


@ -9,7 +9,7 @@
# University of Stuttgart. All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
# All rights reserved.
# Copyright (c) 2017-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2017 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow


@ -13,9 +13,9 @@
* Copyright (c) 2007 Sun Microsystems, Inc. All rights reserverd.
* Copyright (c) 2016 Research Organization for Information Science
* and Technology (RIST). All rights reserved.
* Copyright (c) 2017 Los Alamos National Security, LLC. All rights
* Copyright (c) 2017-2018 Los Alamos National Security, LLC. All rights
* reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
@ -32,7 +32,7 @@
#define ASI_P "0x80"
#define MEMBAR(type) __asm__ __volatile__ ("membar " type : : : "memory")
#define MEPMIXMBAR(type) __asm__ __volatile__ ("membar " type : : : "memory")
/**********************************************************************
@ -56,19 +56,19 @@
static inline void pmix_atomic_mb(void)
{
MEMBAR("#LoadLoad | #LoadStore | #StoreStore | #StoreLoad");
MEPMIXMBAR("#LoadLoad | #LoadStore | #StoreStore | #StoreLoad");
}
static inline void pmix_atomic_rmb(void)
{
MEMBAR("#LoadLoad");
MEPMIXMBAR("#LoadLoad");
}
static inline void pmix_atomic_wmb(void)
{
MEMBAR("#StoreStore");
MEPMIXMBAR("#StoreStore");
}
static inline void pmix_atomic_isync(void)
@ -86,7 +86,7 @@ static inline void pmix_atomic_isync(void)
*********************************************************************/
#if PMIX_GCC_INLINE_ASSEMBLY
static inline bool pmix_atomic_compare_exchange_strong_32 (volatile int32_t *addr, int32_t *oldval, int32_t newval)
static inline bool pmix_atomic_compare_exchange_strong_32 (pmix_atomic_int32_t *addr, int32_t *oldval, int32_t newval)
{
/* casa [reg(rs1)] %asi, reg(rs2), reg(rd)
*
@ -108,7 +108,7 @@ static inline bool pmix_atomic_compare_exchange_strong_32 (volatile int32_t *add
}
static inline bool pmix_atomic_compare_exchange_strong_acq_32 (volatile int32_t *addr, int32_t *oldval, int32_t newval)
static inline bool pmix_atomic_compare_exchange_strong_acq_32 (pmix_atomic_int32_t *addr, int32_t *oldval, int32_t newval)
{
bool rc;
@ -119,7 +119,7 @@ static inline bool pmix_atomic_compare_exchange_strong_acq_32 (volatile int32_t
}
static inline bool pmix_atomic_compare_exchange_strong_rel_32 (volatile int32_t *addr, int32_t *oldval, int32_t newval)
static inline bool pmix_atomic_compare_exchange_strong_rel_32 (pmix_atomic_int32_t *addr, int32_t *oldval, int32_t newval)
{
pmix_atomic_wmb();
return pmix_atomic_compare_exchange_strong_32 (addr, oldval, newval);
@ -128,7 +128,7 @@ static inline bool pmix_atomic_compare_exchange_strong_rel_32 (volatile int32_t
#if PMIX_ASSEMBLY_ARCH == PMIX_SPARCV9_64
static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
/* casa [reg(rs1)] %asi, reg(rs2), reg(rd)
*
@ -150,7 +150,7 @@ static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *add
#else /* PMIX_ASSEMBLY_ARCH == PMIX_SPARCV9_64 */
static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
/* casa [reg(rs1)] %asi, reg(rs2), reg(rd)
*
@ -180,7 +180,7 @@ static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *add
#endif /* PMIX_ASSEMBLY_ARCH == PMIX_SPARCV9_64 */
static inline bool pmix_atomic_compare_exchange_strong_acq_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_acq_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
bool rc;
@ -191,7 +191,7 @@ static inline bool pmix_atomic_compare_exchange_strong_acq_64 (volatile int64_t
}
static inline bool pmix_atomic_compare_exchange_strong_rel_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_rel_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
pmix_atomic_wmb();
return pmix_atomic_compare_exchange_strong_64 (addr, oldval, newval);

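In the SPARCv9 file the _acq/_rel variants keep the same shape as before: the release form issues pmix_atomic_wmb() before the plain CAS (visible above), and the acquire form follows a successful CAS with a read barrier. A sketch of how such variants are typically paired in a test-and-set style lock; the lock type and function names here are hypothetical and purely illustrative:

    #include <stdint.h>
    #include <stdbool.h>

    /* 0 = unlocked, 1 = locked; relies on the 32-bit acquire/release CAS
     * wrappers declared above. */
    typedef pmix_atomic_int32_t example_lock_t;

    static inline void example_lock(example_lock_t *l)
    {
        int32_t expected = 0;
        while (!pmix_atomic_compare_exchange_strong_acq_32(l, &expected, 1)) {
            expected = 0;   /* a failed CAS rewrote expected; reset and spin */
        }
    }

    static inline void example_unlock(example_lock_t *l)
    {
        int32_t expected = 1;
        /* release semantics: prior stores become visible before the unlock */
        pmix_atomic_compare_exchange_strong_rel_32(l, &expected, 0);
    }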

@ -9,7 +9,6 @@
* University of Stuttgart. All rights reserved.
* Copyright (c) 2004-2005 The Regents of the University of California.
* All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow


@ -10,7 +10,7 @@
# Copyright (c) 2004-2005 The Regents of the University of California.
# All rights reserved.
# Copyright (c) 2011 Sandia National Laboratories. All rights reserved.
# Copyright (c) 2017-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2017 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow


@ -11,11 +11,11 @@
* Copyright (c) 2004-2005 The Regents of the University of California.
* All rights reserved.
* Copyright (c) 2011 Sandia National Laboratories. All rights reserved.
* Copyright (c) 2014-2017 Los Alamos National Security, LLC. All rights
* Copyright (c) 2014-2018 Los Alamos National Security, LLC. All rights
* reserved.
* Copyright (c) 2017 Research Organization for Information Science
* and Technology (RIST). All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
@ -58,7 +58,7 @@ static inline void pmix_atomic_wmb(void)
#define PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_32 1
static inline bool pmix_atomic_compare_exchange_strong_32 (volatile int32_t *addr, int32_t *oldval, int32_t newval)
static inline bool pmix_atomic_compare_exchange_strong_32 (pmix_atomic_int32_t *addr, int32_t *oldval, int32_t newval)
{
int32_t prev = __sync_val_compare_and_swap (addr, *oldval, newval);
bool ret = prev == *oldval;
@ -72,31 +72,31 @@ static inline bool pmix_atomic_compare_exchange_strong_32 (volatile int32_t *add
#define PMIX_HAVE_ATOMIC_MATH_32 1
#define PMIX_HAVE_ATOMIC_ADD_32 1
static inline int32_t pmix_atomic_fetch_add_32(volatile int32_t *addr, int32_t delta)
static inline int32_t pmix_atomic_fetch_add_32(pmix_atomic_int32_t *addr, int32_t delta)
{
return __sync_fetch_and_add(addr, delta);
}
#define PMIX_HAVE_ATOMIC_AND_32 1
static inline int32_t pmix_atomic_fetch_and_32(volatile int32_t *addr, int32_t value)
static inline int32_t pmix_atomic_fetch_and_32(pmix_atomic_int32_t *addr, int32_t value)
{
return __sync_fetch_and_and(addr, value);
}
#define PMIX_HAVE_ATOMIC_OR_32 1
static inline int32_t pmix_atomic_fetch_or_32(volatile int32_t *addr, int32_t value)
static inline int32_t pmix_atomic_fetch_or_32(pmix_atomic_int32_t *addr, int32_t value)
{
return __sync_fetch_and_or(addr, value);
}
#define PMIX_HAVE_ATOMIC_XOR_32 1
static inline int32_t pmix_atomic_fetch_xor_32(volatile int32_t *addr, int32_t value)
static inline int32_t pmix_atomic_fetch_xor_32(pmix_atomic_int32_t *addr, int32_t value)
{
return __sync_fetch_and_xor(addr, value);
}
#define PMIX_HAVE_ATOMIC_SUB_32 1
static inline int32_t pmix_atomic_fetch_sub_32(volatile int32_t *addr, int32_t delta)
static inline int32_t pmix_atomic_fetch_sub_32(pmix_atomic_int32_t *addr, int32_t delta)
{
return __sync_fetch_and_sub(addr, delta);
}
@ -105,7 +105,7 @@ static inline int32_t pmix_atomic_fetch_sub_32(volatile int32_t *addr, int32_t d
#define PMIX_HAVE_ATOMIC_COMPARE_EXCHANGE_64 1
static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
int64_t prev = __sync_val_compare_and_swap (addr, *oldval, newval);
bool ret = prev == *oldval;
@ -118,31 +118,31 @@ static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *add
#define PMIX_HAVE_ATOMIC_MATH_64 1
#define PMIX_HAVE_ATOMIC_ADD_64 1
static inline int64_t pmix_atomic_fetch_add_64(volatile int64_t *addr, int64_t delta)
static inline int64_t pmix_atomic_fetch_add_64(pmix_atomic_int64_t *addr, int64_t delta)
{
return __sync_fetch_and_add(addr, delta);
}
#define PMIX_HAVE_ATOMIC_AND_64 1
static inline int64_t pmix_atomic_fetch_and_64(volatile int64_t *addr, int64_t value)
static inline int64_t pmix_atomic_fetch_and_64(pmix_atomic_int64_t *addr, int64_t value)
{
return __sync_fetch_and_and(addr, value);
}
#define PMIX_HAVE_ATOMIC_OR_64 1
static inline int64_t pmix_atomic_fetch_or_64(volatile int64_t *addr, int64_t value)
static inline int64_t pmix_atomic_fetch_or_64(pmix_atomic_int64_t *addr, int64_t value)
{
return __sync_fetch_and_or(addr, value);
}
#define PMIX_HAVE_ATOMIC_XOR_64 1
static inline int64_t pmix_atomic_fetch_xor_64(volatile int64_t *addr, int64_t value)
static inline int64_t pmix_atomic_fetch_xor_64(pmix_atomic_int64_t *addr, int64_t value)
{
return __sync_fetch_and_xor(addr, value);
}
#define PMIX_HAVE_ATOMIC_SUB_64 1
static inline int64_t pmix_atomic_fetch_sub_64(volatile int64_t *addr, int64_t delta)
static inline int64_t pmix_atomic_fetch_sub_64(pmix_atomic_int64_t *addr, int64_t delta)
{
return __sync_fetch_and_sub(addr, delta);
}
@ -150,7 +150,7 @@ static inline int64_t pmix_atomic_fetch_sub_64(volatile int64_t *addr, int64_t d
#endif
#if PMIX_HAVE_SYNC_BUILTIN_CSWAP_INT128
static inline bool pmix_atomic_compare_exchange_strong_128 (volatile pmix_int128_t *addr,
static inline bool pmix_atomic_compare_exchange_strong_128 (pmix_atomic_int128_t *addr,
pmix_int128_t *oldval, pmix_int128_t newval)
{
pmix_int128_t prev = __sync_val_compare_and_swap (addr, *oldval, newval);

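This file maps the PMIx wrappers directly onto the GCC __sync_* builtins; only the pointer type in the signatures changes. Note that __sync_fetch_and_add and friends return the value held before the operation, which is why the wrappers are named fetch_add/fetch_sub rather than add_and_fetch. A small illustrative check (hypothetical helper, not part of the diff):

    #include <assert.h>
    #include <stdint.h>

    static inline void example_fetch_semantics(void)
    {
        pmix_atomic_int32_t counter = 5;
        int32_t before = pmix_atomic_fetch_add_32(&counter, 3);
        assert(before == 5);    /* the pre-operation value is returned */
        assert(counter == 8);   /* the addition itself is still applied */
    }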

@ -13,7 +13,7 @@
* Copyright (c) 2016 Broadcom Limited. All rights reserved.
* Copyright (c) 2016-2017 Los Alamos National Security, LLC. All rights
* reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow


@ -11,7 +11,7 @@
# All rights reserved.
# Copyright (c) 2017 Research Organization for Information Science
# and Technology (RIST). All rights reserved.
# Copyright (c) 2017-2018 Intel, Inc. All rights reserved.
# Copyright (c) 2017 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow


@ -11,11 +11,11 @@
* Copyright (c) 2004-2005 The Regents of the University of California.
* All rights reserved.
* Copyright (c) 2007 Sun Microsystems, Inc. All rights reserverd.
* Copyright (c) 2012-2017 Los Alamos National Security, LLC. All rights
* Copyright (c) 2012-2018 Los Alamos National Security, LLC. All rights
* reserved.
* Copyright (c) 2016-2017 Research Organization for Information Science
* and Technology (RIST). All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
@ -83,7 +83,7 @@ static inline void pmix_atomic_isync(void)
*********************************************************************/
#if PMIX_GCC_INLINE_ASSEMBLY
static inline bool pmix_atomic_compare_exchange_strong_32 (volatile int32_t *addr, int32_t *oldval, int32_t newval)
static inline bool pmix_atomic_compare_exchange_strong_32 (pmix_atomic_int32_t *addr, int32_t *oldval, int32_t newval)
{
unsigned char ret;
__asm__ __volatile__ (
@ -103,13 +103,13 @@ static inline bool pmix_atomic_compare_exchange_strong_32 (volatile int32_t *add
#if PMIX_GCC_INLINE_ASSEMBLY
static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *addr, int64_t *oldval, int64_t newval)
static inline bool pmix_atomic_compare_exchange_strong_64 (pmix_atomic_int64_t *addr, int64_t *oldval, int64_t newval)
{
unsigned char ret;
__asm__ __volatile__ (
SMPLOCK "cmpxchgq %3,%2 \n\t"
"sete %0 \n\t"
: "=qm" (ret), "+a" (*oldval), "+m" (*((volatile long*)addr))
: "=qm" (ret), "+a" (*oldval), "+m" (*((pmix_atomic_long_t *)addr))
: "q"(newval)
: "memory", "cc"
);
@ -124,7 +124,7 @@ static inline bool pmix_atomic_compare_exchange_strong_64 (volatile int64_t *add
#if PMIX_GCC_INLINE_ASSEMBLY && PMIX_HAVE_CMPXCHG16B && HAVE_PMIX_INT128_T
static inline bool pmix_atomic_compare_exchange_strong_128 (volatile pmix_int128_t *addr, pmix_int128_t *oldval, pmix_int128_t newval)
static inline bool pmix_atomic_compare_exchange_strong_128 (pmix_atomic_int128_t *addr, pmix_int128_t *oldval, pmix_int128_t newval)
{
unsigned char ret;
@ -151,15 +151,15 @@ static inline bool pmix_atomic_compare_exchange_strong_128 (volatile pmix_int128
#define PMIX_HAVE_ATOMIC_SWAP_64 1
static inline int32_t pmix_atomic_swap_32( volatile int32_t *addr,
int32_t newval)
static inline int32_t pmix_atomic_swap_32( pmix_atomic_int32_t *addr,
int32_t newval)
{
int32_t oldval;
__asm__ __volatile__("xchg %1, %0" :
"=r" (oldval), "+m" (*addr) :
"0" (newval) :
"memory");
"=r" (oldval), "+m" (*addr) :
"0" (newval) :
"memory");
return oldval;
}
@ -167,15 +167,15 @@ static inline int32_t pmix_atomic_swap_32( volatile int32_t *addr,
#if PMIX_GCC_INLINE_ASSEMBLY
static inline int64_t pmix_atomic_swap_64( volatile int64_t *addr,
static inline int64_t pmix_atomic_swap_64( pmix_atomic_int64_t *addr,
int64_t newval)
{
int64_t oldval;
__asm__ __volatile__("xchgq %1, %0" :
"=r" (oldval), "+m" (*addr) :
"0" (newval) :
"memory");
"=r" (oldval), "+m" (*addr) :
"0" (newval) :
"memory");
return oldval;
}
@ -197,7 +197,7 @@ static inline int64_t pmix_atomic_swap_64( volatile int64_t *addr,
*
* Atomically adds @i to @v.
*/
static inline int32_t pmix_atomic_fetch_add_32(volatile int32_t* v, int i)
static inline int32_t pmix_atomic_fetch_add_32(pmix_atomic_int32_t* v, int i)
{
int ret = i;
__asm__ __volatile__(
@ -218,7 +218,7 @@ static inline int32_t pmix_atomic_fetch_add_32(volatile int32_t* v, int i)
*
* Atomically adds @i to @v.
*/
static inline int64_t pmix_atomic_fetch_add_64(volatile int64_t* v, int64_t i)
static inline int64_t pmix_atomic_fetch_add_64(pmix_atomic_int64_t* v, int64_t i)
{
int64_t ret = i;
__asm__ __volatile__(
@ -239,7 +239,7 @@ static inline int64_t pmix_atomic_fetch_add_64(volatile int64_t* v, int64_t i)
*
* Atomically subtracts @i from @v.
*/
static inline int32_t pmix_atomic_fetch_sub_32(volatile int32_t* v, int i)
static inline int32_t pmix_atomic_fetch_sub_32(pmix_atomic_int32_t* v, int i)
{
int ret = -i;
__asm__ __volatile__(
@ -260,7 +260,7 @@ static inline int32_t pmix_atomic_fetch_sub_32(volatile int32_t* v, int i)
*
* Atomically subtracts @i from @v.
*/
static inline int64_t pmix_atomic_fetch_sub_64(volatile int64_t* v, int64_t i)
static inline int64_t pmix_atomic_fetch_sub_64(pmix_atomic_int64_t* v, int64_t i)
{
int64_t ret = -i;
__asm__ __volatile__(

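For readers less familiar with the x86-64 inline assembly above: xchg/xchgq implement the swaps, the lock-prefixed cmpxchg plus sete implement the strong CAS (with cmpxchg16b covering the 128-bit case when PMIX_HAVE_CMPXCHG16B is set), and lock xaddl/xaddq implement fetch-add and fetch-sub (the latter by negating the delta). Roughly equivalent behavior expressed with the GCC/Clang __atomic builtins, purely as an illustration of the semantics and assuming pmix_atomic_int32_t is an ordinary integer typedef (these builtins are not what the diff itself uses):

    #include <stdint.h>

    static inline int32_t example_swap_32(pmix_atomic_int32_t *addr, int32_t newval)
    {
        /* "xchg %1, %0" behaves like an atomic exchange with full ordering */
        return __atomic_exchange_n(addr, newval, __ATOMIC_SEQ_CST);
    }

    static inline int32_t example_fetch_add_32(pmix_atomic_int32_t *v, int i)
    {
        /* SMPLOCK "xaddl" returns the previous value, i.e. fetch-then-add */
        return __atomic_fetch_add(v, i, __ATOMIC_SEQ_CST);
    }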

@ -12,7 +12,7 @@
* All rights reserved.
* Copyright (c) 2016 Los Alamos National Security, LLC. ALl rights
* reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* Copyright (c) 2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow


@ -11,7 +11,7 @@
* All rights reserved.
* Copyright (c) 2014-2015 Research Organization for Information Science
* and Technology (RIST). All rights reserved.
* Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2014-2015 Intel, Inc. All rights reserved
* $COPYRIGHT$
*
* Additional copyrights may follow


@ -9,7 +9,7 @@
* University of Stuttgart. All rights reserved.
* Copyright (c) 2004-2005 The Regents of the University of California.
* All rights reserved.
* Copyright (c) 2015-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2015-2017 Intel, Inc. All rights reserved.
* Copyright (c) 2015-2016 Research Organization for Information Science
* and Technology (RIST). All rights reserved.
* Copyright (c) 2016 Mellanox Technologies, Inc.


@ -11,7 +11,7 @@
* Copyright (c) 2004-2005 The Regents of the University of California.
* All rights reserved.
* Copyright (c) 2007 Voltaire All rights reserved.
* Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2013-2015 Intel, Inc. All rights reserved
* $COPYRIGHT$
*
* Additional copyrights may follow


@ -13,7 +13,7 @@
* Copyright (c) 2007 Voltaire All rights reserved.
* Copyright (c) 2013 Los Alamos National Security, LLC. All rights
* reserved.
* Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2013-2018 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow
@ -109,7 +109,7 @@ struct pmix_list_item_t
#if PMIX_ENABLE_DEBUG
/** Atomic reference count for debugging */
volatile int32_t pmix_list_item_refcount;
pmix_atomic_int32_t pmix_list_item_refcount;
/** The list this item belong to */
volatile struct pmix_list_t* pmix_list_item_belong_to;
#endif


@ -9,7 +9,7 @@
* University of Stuttgart. All rights reserved.
* Copyright (c) 2004-2005 The Regents of the University of California.
* All rights reserved.
* Copyright (c) 2014-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2014-2017 Intel, Inc. All rights reserved.
* Copyright (c) 2016 Research Organization for Information Science
* and Technology (RIST). All rights reserved.
* $COPYRIGHT$


@ -192,7 +192,7 @@ struct pmix_object_t {
uint64_t obj_magic_id;
#endif
pmix_class_t *obj_class; /**< class descriptor */
volatile int32_t obj_reference_count; /**< reference count */
pmix_atomic_int32_t obj_reference_count; /**< reference count */
#if PMIX_ENABLE_DEBUG
const char* cls_init_file_name; /**< In debug mode store the file where the object get contructed */
int cls_init_lineno; /**< In debug mode store the line number where the object get contructed */

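With obj_reference_count now a pmix_atomic_int32_t, retain/release can manipulate it with the atomic fetch-add/fetch-sub primitives from the headers above. A hypothetical sketch of that pattern (PMIx's real retain/release macros live elsewhere in this header and may differ in detail):

    #include <stdint.h>

    static inline void example_retain(struct pmix_object_t *obj)
    {
        pmix_atomic_fetch_add_32(&obj->obj_reference_count, 1);
    }

    static inline void example_release(struct pmix_object_t *obj)
    {
        /* fetch_sub returns the pre-decrement value: 1 means this was the
         * last reference and the object can now be destructed and freed */
        if (1 == pmix_atomic_fetch_sub_32(&obj->obj_reference_count, 1)) {
            /* run the class destructors / free obj here */
        }
    }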

@ -10,7 +10,7 @@
* University of Stuttgart. All rights reserved.
* Copyright (c) 2004-2005 The Regents of the University of California.
* All rights reserved.
* Copyright (c) 2017-2018 Intel, Inc. All rights reserved.
* Copyright (c) 2017 Intel, Inc. All rights reserved.
* $COPYRIGHT$
*
* Additional copyrights may follow

Some files were not shown because too many files changed in this diff.