Merge pull request #5606 from hppritcha/topic/sync_news_for_4.0.x

NEWS: sync 4.0.x NEWS with 3.1.x
This commit is contained in:
Geoff Paulsen 2018-08-31 13:57:48 -05:00 committed by GitHub
parent d364553667 221fc3ec66
commit 118f61c928
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23

NEWS (117 changed lines)

@@ -12,9 +12,9 @@ Copyright (c) 2006-2018 Cisco Systems, Inc. All rights reserved.
Copyright (c) 2006 Voltaire, Inc. All rights reserved.
Copyright (c) 2006 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
-Copyright (c) 2006-2017 Los Alamos National Security, LLC. All rights
+Copyright (c) 2006-2018 Los Alamos National Security, LLC. All rights
reserved.
-Copyright (c) 2010-2017 IBM Corporation. All rights reserved.
+Copyright (c) 2010-2018 IBM Corporation. All rights reserved.
Copyright (c) 2012 Oak Ridge National Labs. All rights reserved.
Copyright (c) 2012 Sandia National Laboratories. All rights reserved.
Copyright (c) 2012 University of Houston. All rights reserved.
@@ -55,8 +55,8 @@ included in the vX.Y.Z section and be denoted as:
(** also appeared: A.B.C) -- indicating that this item was previously
included in release version vA.B.C.
-Master (not on release branches yet)
-------------------------------------
+4.0.0 -- September, 2018
+------------------------

**********************************************************************
* PRE-DEPRECATION WARNING: MPIR Support
@@ -80,6 +80,53 @@ Master (not on release branches yet)
Currently, this means the Open SHMEM layer will only build if
a MXM or UCX library is found.

3.1.2 -- August, 2018
---------------------
- A subtle race condition bug was discovered in the "vader" BTL
(shared memory communications) that, in rare instances, can cause
MPI processes to crash or incorrectly classify (or effectively drop)
an MPI message sent via shared memory. If you are using the "ob1"
PML with "vader" for shared memory communication (note that vader is
the default for shared memory communication with ob1), you need to
upgrade to v3.1.2 or later to fix this issue. You may also upgrade
to the following versions to fix this issue:
- Open MPI v2.1.5 (expected end of August, 2018) or later in the
v2.1.x series
- Open MPI v3.0.1 (released March, 2018) or later in the v3.0.x
series
- Assorted Portals 4.0 bug fixes.
- Fix for possible data corruption in MPI_BSEND (see the first sketch
after this list).
- Move shared memory file for vader btl into /dev/shm on Linux.
- Fix for MPI_ISCATTER/MPI_ISCATTERV Fortran interfaces with
MPI_IN_PLACE (see the second sketch after this list).
- Upgrade PMIx to v2.1.3.
- Numerous one-sided bug fixes.
- Fix for race condition in uGNI BTL.
- Improve handling of large number of interfaces with TCP BTL.
- Numerous UCX bug fixes.
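
The following is a minimal C sketch (not part of the release notes
proper) of the buffered-send pattern that the MPI_BSEND fix above
affects; the buffer size, tag, and payload values are illustrative.
Run with at least 2 processes:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Attach a user buffer for buffered sends. */
        int bufsize = 1024 + MPI_BSEND_OVERHEAD;
        char *buf = malloc(bufsize);
        MPI_Buffer_attach(buf, bufsize);

        int payload = 42;
        if (rank == 0) {
            /* The message is copied into the attached buffer; the
               fix addresses possible corruption of that copy. */
            MPI_Bsend(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        MPI_Buffer_detach(&buf, &bufsize);
        free(buf);
        MPI_Finalize();
        return 0;
    }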
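
Likewise, a minimal C sketch of the MPI_IN_PLACE pattern behind the
MPI_ISCATTER/MPI_ISCATTERV item; the fix itself is in the Fortran
interfaces, but the root-side MPI_IN_PLACE usage is the same. The
array size is illustrative (assumes at most 64 processes):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int sendbuf[64];   /* root's data: one int per rank */
        int recvval;
        MPI_Request req;

        if (rank == 0) {
            for (int i = 0; i < size; i++) sendbuf[i] = i;
            /* Root keeps its own chunk in place instead of
               receiving a copy of it. */
            MPI_Iscatter(sendbuf, 1, MPI_INT, MPI_IN_PLACE, 1,
                         MPI_INT, 0, MPI_COMM_WORLD, &req);
        } else {
            MPI_Iscatter(NULL, 0, MPI_INT, &recvval, 1, MPI_INT,
                         0, MPI_COMM_WORLD, &req);
        }
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }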

3.1.1 -- June, 2018
-------------------
- Fix potential hang in UCX PML during MPI_FINALIZE
- Update internal PMIx to v2.1.2rc2 to fix forward version compatibility.
- Add new MCA parameter osc_sm_backing_store to allow users to specify
where in the filesystem the backing file for the shared memory
one-sided component should live. Defaults to /dev/shm on Linux.
(See the sketch after this list.)
- Fix potential hang on non-x86 platforms when using builds with
optimization flags turned off.
- Disable osc/pt2pt when using MPI_THREAD_MULTIPLE due to numerous
race conditions in the component.
- Fix dummy variable names for the mpi and mpi_f08 Fortran bindings to
match the MPI standard. This may break applications which use
name-based parameters in Fortran which used our internal names
rather than those documented in the MPI standard.
- Revamp Java detection to properly handle new Java versions which do
not provide a javah wrapper.
- Fix RMA function signatures for use-mpi-f08 bindings to have the
asynchronous property on all buffers.
- Improved configure logic for finding the UCX library.
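
A minimal C sketch of the kind of shared-memory one-sided window the
new osc_sm_backing_store parameter affects; note the osc/sm component
is only selected when all processes in the window's communicator
share a node, and the mpirun line in the comment is illustrative:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* The osc/sm component backs windows like this one with a
           file whose location osc_sm_backing_store controls
           (default /dev/shm on Linux), e.g.:
             mpirun --mca osc_sm_backing_store /dev/shm -n 2 ./a.out */
        int *base;
        MPI_Win win;
        MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                         MPI_COMM_WORLD, &base, &win);

        *base = 0;
        MPI_Win_fence(0, win);
        /* ... MPI_Put/MPI_Get traffic here ... */
        MPI_Win_fence(0, win);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }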

3.1.0 -- May, 2018
------------------
@@ -246,6 +293,68 @@ Known issues:
- MPI_Connect/accept between applications started by different mpirun
commands will fail, even if ompi-server is running.

2.1.5 -- August, 2018
---------------------
- A subtle race condition bug was discovered in the "vader" BTL
(shared memory communications) that, in rare instances, can cause
MPI processes to crash or incorrectly classify (or effectively drop)
an MPI message sent via shared memory. If you are using the "ob1"
PML with "vader" for shared memory communication (note that vader is
the default for shared memory communication with ob1), you need to
upgrade to v2.1.5 to fix this issue. You may also upgrade to the
following versions to fix this issue:
- Open MPI v3.0.1 (released March, 2018) or later in the v3.0.x
series
- Open MPI v3.1.2 (expected end of August, 2018) or later
- A link issue was fixed when the UCX library was not located in the
linker-default search paths.

2.1.4 -- August, 2018
---------------------
Bug fixes/minor improvements:
- Disable the POWER 7/BE block in configure. Note that POWER 7/BE is
still not a supported platform, but it is no longer automatically
disabled. See
https://github.com/open-mpi/ompi/issues/4349#issuecomment-374970982
for more information.
- Fix bug with request-based one-sided MPI operations when using the
"rdma" component.
- Fix issue with large data structure in the TCP BTL causing problems
in some environments. Thanks to @lgarithm for reporting the issue.
- Minor Cygwin build fixes.
- Minor fixes for the openib BTL:
- Support for the QLogic RoCE HCA
- Support for the Broadcom Cumulus RoCE HCA
- Enable support for HDR link speeds
- Fix MPI_FINALIZED hang if invoked from an attribute destructor
during the MPI_COMM_SELF destruction in MPI_FINALIZE. Thanks to
@AndrewGaspar for reporting the issue. (See the second sketch after
this list.)
- Java fixes:
- Modernize Java framework detection, especially on OS X/MacOS.
Thanks to Bryce Glover for reporting and submitting the fixes.
- Prefer "javac -h" to "javah" to support newer Java frameworks.
- Fortran fixes:
- Use conformant dummy parameter names for Fortran bindings. Thanks
to Themos Tsikas for reporting and submitting the fixes.
- Build the MPI_SIZEOF() interfaces in the "TKR"-style "mpi" module
whenever possible. Thanks to Themos Tsikas for reporting the
issue.
- Fix array of argv handling for the Fortran bindings of
MPI_COMM_SPAWN_MULTIPLE (and its associated man page).
- Make NAG Fortran compiler support more robust in configure.
- Disable the "pt2pt" one-sided MPI component when MPI_THREAD_MULTIPLE
is used. This component is simply not safe in MPI_THREAD_MULTIPLE
scenarios, and will not be fixed in the v2.1.x series.
- Make the "external" hwloc component fail gracefully if it is tries
to use an hwloc v2.x.y installation. hwloc v2.x.y will not be
supported in the Open MPI v2.1.x series.
- Fix "vader" shared memory support for messages larger than 2GB.
Thanks to Heiko Bauke for the bug report.
- Configure fixes for external PMI directory detection. Thanks to
Davide Vanzo for the report.
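
First, a minimal C sketch of a request-based one-sided operation of
the kind the "rdma" component fix above addresses; the window layout
and values are illustrative. Run with at least 2 processes:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int *base;
        MPI_Win win;
        MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                         MPI_COMM_WORLD, &base, &win);
        *base = rank;

        MPI_Win_lock_all(0, win);
        if (rank == 0) {
            int val = 99;
            MPI_Request req;
            /* Request-based put: local completion via MPI_Wait,
               remote completion via a flush. */
            MPI_Rput(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win, &req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
            MPI_Win_flush(1, win);
        }
        MPI_Win_unlock_all(win);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }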
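
Second, a sketch of the attribute-destructor pattern behind the
MPI_FINALIZED item: a destructor attached to MPI_COMM_SELF runs while
MPI_FINALIZE tears that communicator down, and calling MPI_FINALIZED
from it used to hang:

    #include <mpi.h>
    #include <stdio.h>

    /* Runs while MPI_COMM_SELF is destroyed inside MPI_Finalize. */
    static int cleanup_cb(MPI_Comm comm, int keyval, void *attr,
                          void *extra)
    {
        int finalized;
        MPI_Finalized(&finalized);   /* the call the fix makes safe */
        printf("finalized yet? %d\n", finalized);
        return MPI_SUCCESS;
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int keyval;
        MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, cleanup_cb,
                               &keyval, NULL);
        MPI_Comm_set_attr(MPI_COMM_SELF, keyval, NULL);

        /* Destroys MPI_COMM_SELF first, invoking cleanup_cb. */
        MPI_Finalize();
        return 0;
    }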

2.1.3 -- March, 2018
--------------------