This commit was SVN r12540.
This commit is contained in:
Jeff Squyres 2006-11-10 17:41:42 +00:00
parent 5419219041
commit e124857353

README

@@ -41,8 +41,40 @@ Thanks for your time.
===========================================================================
Release notes for SC 2006 beta:
- This tarball has had some level of testing by the Open MPI
developers, but has not been widely tested with real-world
applications. We'd appreciate your feedback.
- Running heterogeneous MPI jobs (e.g., different endianness between
multiple hosts) is not fully working yet.
- Some MPI jobs may not clean up properly if errors occur:
- mpirun may "hang" and have to be killed from another session
- ORTE daemons may be left orphaned on some nodes and need to be
manually killed
- Not all items marked as resolved in the v1.1.3 section of the NEWS
file have been migrated to this 1.2 release yet. A new beta release
is expected shortly after SC 2006 that contains these fixes:
- Workaround for Intel C++ compiler bug
- MPI_SIZEOF fixes for COMPLEX types
- MPI jobs running on OpenIB / OpenFabrics networks that
simultaneously post a large number of non-blocking sends of small
messages may fail. This issue is fixed on the Open MPI development
head, and will be included in the post-SC beta release.
- Striping MPI messages across multiple TCP interfaces is not working.
- See Open MPI's bug tracking system for a full list of outstanding
issues: https://svn.open-mpi.org/trac/ompi/report
===========================================================================
The following abbreviated list of release notes applies to this code
base as of this writing (12 Sep 2006):
base as of this writing (10 Nov 2006):
- Open MPI includes support for a wide variety of supplemental
hardware and software packages. When configuring Open MPI, you may
@@ -87,6 +119,8 @@ base as of this writing (12 Sep 2006):
- Linux, 64 bit (x86), with gcc
- OS X (10.3), 32 bit, with gcc
- OS X (10.4), 32 bit, with gcc
- Solaris 10 update 2, SPARC and AMD, 32 and 64 bit, with Sun Studio
10
- Other systems have been lightly (but not fully) tested:
- Other compilers on Linux, 32 and 64 bit
@@ -197,12 +231,6 @@ base as of this writing (12 Sep 2006):
shared memory, and Myrinet/GM. Myrinet/GM has only been lightly
tested.
- Due to limitations in the Libtool 1.5 series, Fortran 90 MPI
bindings support can only be built as a static library. It is
expected that Libtool 2.0 (and therefore future releases of Open
MPI) will be able to support shared libraries for the Fortran 90
bindings.
- The XGrid support is experimental - see the Open MPI FAQ and this
post on the Open MPI user's mailing list for more information:
@@ -226,16 +254,7 @@ base as of this writing (12 Sep 2006):
- The Open Fabrics Enterprise Distribution (OFED) software package
v1.0 will not work properly with Open MPI v1.2 (and later) due to
how its Mellanox InfiniBand plugin driver is created. The problem
is fixed in OFED v1.1 (and beyond).
- The current version of the Open MPI point-to-point engine does not
yet support hardware-level MPI message matching. As such, MPI
message matching must be performed in software, artificially
increasing latency for short messages on certain networks (such as
MX and hardware-supported Portals). Future versions of Open MPI
will support hardware matching on networks that provide it, and will
eliminate the extra overhead of software MPI message matching where
possible.
is fixed in OFED v1.1 (and later).
- The Fortran 90 MPI bindings can now be built in one of three sizes
using --with-mpi-f90-size=SIZE (see description below). These sizes
@@ -327,13 +346,23 @@ for a full list); a summary of the more commonly used ones follows:
most cases. This option is only needed for special configurations.
--with-openib=<directory>
Specify the directory where the Open Fabrics (previously known as
Specify the directory where the OpenFabrics (previously known as
OpenIB) libraries and header files are located. This enables Open
Fabrics support in Open MPI.
--with-openib-libdir=<directory>
Look in directory for the OPENIB libraries. By default, Open MPI will
look in <openib directory>/lib and <openib directory>/lib64, which covers
Look in directory for the OpenFabrics libraries. By default, Open
MPI will look in <openib directory>/lib and <openib
directory>/lib64, which covers most cases. This option is only
needed for special configurations.
--with-psm=<directory>
Specify the directory where the QLogic PSM library and header files
are located. This enables InfiniPath support in Open MPI.
--with-psm-libdir=<directory>
Look in directory for the PSM libraries. By default, Open MPI will
look in <psm directory>/lib and <psm directory>/lib64, which covers
most cases. This option is only needed for special configurations.
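
  For example, to enable both OpenFabrics and InfiniPath support in a
  single build (a sketch only; the installation prefixes shown here are
  hypothetical -- substitute wherever the OFED and QLogic PSM packages
  are installed on your system):

    shell$ ./configure --with-openib=/usr/local/ofed \
               --with-psm=/usr/local/psm

  If the libraries live in a non-standard subdirectory of those trees,
  add the corresponding --with-openib-libdir / --with-psm-libdir
  options described above.
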
--with-tm=<directory>
@@ -601,12 +630,14 @@ MPI component frameworks:
allocator - Memory allocator
bml - BTL management layer
btl - MPI point-to-point byte transfer layer
btl - MPI point-to-point byte transfer layer, used for MPI
point-to-point messages on some types of networks
coll - MPI collective algorithms
io - MPI-2 I/O
mpool - Memory pooling
mtl - Matching transport layer, used for MPI point-to-point
messages on some types of networks
pml - MPI point-to-point management layer
ptl - (Outdated / deprecated) MPI point-to-point transport layer
rcache - Memory registration cache
topo - MPI topology routines
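
At run time, individual components in these frameworks can be selected
(or excluded) with MCA parameters. As a small illustration (the
application name is a placeholder, and the set of BTL components
available depends on how your copy of Open MPI was built), the
following restricts MPI point-to-point traffic to the TCP, shared
memory, and self BTLs:

  shell$ mpirun --mca btl tcp,sm,self -np 4 ./my_mpi_app
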
@@ -617,6 +648,7 @@ errmgr - RTE error manager
gpr - General purpose registry
iof - I/O forwarding
ns - Name server
odls - OpenRTE daemon local launch subsystem
oob - Out of band messaging
pls - Process launch system
ras - Resource allocation system
@@ -626,7 +658,7 @@ rmgr - Resource manager
rml - RTE message layer
schema - Name schemas
sds - Startup / discovery service
soh - State of health monitor
smr - State-of-health monitoring subsystem
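
To see which components of these frameworks were actually built into a
given installation, and what MCA parameters they accept, the ompi_info
command can be used (a usage sketch; output formats vary between
versions):

  shell$ ompi_info | grep btl         # components built for the btl framework
  shell$ ompi_info --param btl tcp    # MCA parameters of the tcp BTL component
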
Miscellaneous frameworks:
-------------------------