Copyright (c) 2004-2007 The Trustees of Indiana University and Indiana
                        University Research and Technology
                        Corporation.  All rights reserved.
Copyright (c) 2004-2007 The University of Tennessee and The University
                        of Tennessee Research Foundation.  All rights
                        reserved.
Copyright (c) 2004-2008 High Performance Computing Center Stuttgart,
                        University of Stuttgart.  All rights reserved.
Copyright (c) 2004-2007 The Regents of the University of California.
                        All rights reserved.
Copyright (c) 2006-2010 Cisco Systems, Inc.  All rights reserved.
Copyright (c) 2006-2007 Voltaire, Inc.  All rights reserved.
Copyright (c) 2006-2010 Oracle and/or its affiliates.  All rights reserved.
Copyright (c) 2007      Myricom, Inc.  All rights reserved.
Copyright (c) 2008      IBM Corporation.  All rights reserved.
Copyright (c) 2010      Oak Ridge National Labs.  All rights reserved.
$COPYRIGHT$

Additional copyrights may follow

$HEADER$

===========================================================================

When submitting questions and problems, be sure to include as much
extra information as possible.  This web page details all the
information that we request in order to provide assistance:

     http://www.open-mpi.org/community/help/

The best way to report bugs, send comments, or ask questions is to
sign up on the user's and/or developer's mailing list (for user-level
and developer-level questions; when in doubt, send to the user's
list):

        users@open-mpi.org
        devel@open-mpi.org

Because of spam, only subscribers are allowed to post to these lists
(ensure that you subscribe with and post from exactly the same e-mail
address -- joe@example.com is considered different than
joe@mycomputer.example.com!).  Visit these pages to subscribe to the
lists:

     http://www.open-mpi.org/mailman/listinfo.cgi/users
     http://www.open-mpi.org/mailman/listinfo.cgi/devel

Thanks for your time.

===========================================================================

Much, much more information is also available in the Open MPI FAQ:

     http://www.open-mpi.org/faq/

===========================================================================

Detailed Open MPI v1.5 Feature List:

  o Open MPI RunTime Environment (ORTE) improvements
    - General robustness improvements
    - Scalable job launch (we've seen ~16K processes in less than a
      minute in a highly-optimized configuration)
    - New process mappers
    - Support for Platform/LSF environments (v7.0.2 and later)
    - More flexible processing of host lists
    - New mpirun command line options and associated functionality

  o Fault-Tolerance Features
    - Asynchronous, transparent checkpoint/restart support
    - Fully coordinated checkpoint/restart coordination component
    - Support for the following checkpoint/restart services:
      - blcr: Berkeley Lab's Checkpoint/Restart
      - self: Application level callbacks
    - Support for the following interconnects:
      - tcp
      - mx
      - openib
      - sm
      - self
    - Improved message logging

  o MPI_THREAD_MULTIPLE support for point-to-point messaging in the
    following BTLs (note that only MPI point-to-point messaging API
    functions support MPI_THREAD_MULTIPLE; other API functions likely
    do not):
    - tcp
    - sm
    - mx
    - elan
    - self

  o Point-to-point Messaging Layer (PML) improvements
    - Memory footprint reduction
    - Improved latency
    - Improved algorithm for multiple communication device
      ("multi-rail") support

  o Numerous OpenFabrics improvements/enhancements
    - Added iWARP support (including RDMA CM)
    - Memory footprint and performance improvements
    - "Bucket" SRQ support for better registered memory utilization
    - XRC/ConnectX support
    - Message coalescing
    - Improved error reporting mechanism with asynchronous events
    - Automatic Path Migration (APM)
    - Improved processor/port binding
    - Infrastructure for additional wireup strategies
    - mpi_leave_pinned is now enabled by default

  o uDAPL BTL enhancements
    - Multi-rail support
    - Subnet checking
    - Interface include/exclude capabilities

  o Processor affinity
    - Linux processor affinity improvements
    - Core/socket <--> process mappings

  o Collectives
    - Performance improvements
    - Support for hierarchical collectives (must be activated
      manually; see below)
    - Support for Voltaire FCA (Fabric Collective Accelerator) technology

  o Miscellaneous
    - MPI 2.1 compliant
    - Sparse process groups and communicators
    - Support for Cray Compute Node Linux (CNL)
    - One-sided RDMA component (BTL-level based rather than PML-level
      based)
    - Aggregate MCA parameter sets
    - MPI handle debugging
    - Many small improvements to the MPI C++ bindings
    - Valgrind support
    - VampirTrace support
    - Updated ROMIO to the version from MPICH2 1.0.7
    - Removed the mVAPI IB stacks
    - Display most error messages only once (vs. once for each
      process)
    - Many other small improvements and bug fixes, too numerous to
      list here

Known issues
------------

  o MPI_REDUCE_SCATTER does not work with counts of 0.
    https://svn.open-mpi.org/trac/ompi/ticket/1559

  o Please also see the Open MPI bug tracker for bugs beyond this release.
    https://svn.open-mpi.org/trac/ompi/report

===========================================================================

The following abbreviated list of release notes applies to this code
base as of this writing (5 October 2010):

General notes
-------------

- Open MPI includes support for a wide variety of supplemental
  hardware and software packages.  When configuring Open MPI, you may
  need to supply additional flags to the "configure" script in order
  to tell Open MPI where the header files, libraries, and any other
  required files are located.  As such, running "configure" by itself
  may not include support for all the devices (etc.) that you expect,
  especially if their support headers / libraries are installed in
  non-standard locations.  Network interconnects are an easy example
  to discuss -- Myrinet and OpenFabrics networks, for example, both
  have supplemental headers and libraries that must be found before
  Open MPI can build support for them.  You must specify where these
  files are with the appropriate options to configure.  See the
  listing of configure command-line switches, below, for more details.

- The majority of Open MPI's documentation is here in this file, the
  included man pages, and on the web site FAQ
  (http://www.open-mpi.org/).  This will eventually be supplemented
  with cohesive installation and user documentation files.

- Note that Open MPI documentation uses the word "component"
  frequently; the word "plugin" is probably more familiar to most
  users.  As such, end users can probably completely substitute the
  word "plugin" wherever you see "component" in our documentation.
  For what it's worth, we use the word "component" for historical
  reasons, mainly because it is part of our acronyms and internal API
  function calls.

- The run-time systems that are currently supported are:
  - rsh / ssh
  - LoadLeveler
  - PBS Pro, Open PBS, Torque
  - Platform LSF (v7.0.2 and later)
  - SLURM
  - Cray XT-3 and XT-4
  - Sun Grid Engine (SGE) 6.1, 6.2 and open source Grid Engine
  - Microsoft Windows CCP (Microsoft Windows server 2003 and 2008)

- Systems that have been tested are:
  - Linux (various flavors/distros), 32 bit, with gcc and Sun Studio 12
  - Linux (various flavors/distros), 64 bit (x86), with gcc, Absoft,
    Intel, Portland, Pathscale, and Sun Studio 12 compilers (*)
  - OS X (10.4), 32 and 64 bit (i386, PPC, PPC64, x86_64), with gcc
    and Absoft compilers (*)
  - Solaris 10 update 2, 3 and 4, 32 and 64 bit (SPARC, i386, x86_64),
    with Sun Studio 10, 11 and 12

  (*) Be sure to read the Compiler Notes, below.

- Other systems have been lightly (but not fully) tested:
  - Other 64 bit platforms (e.g., Linux on PPC64)
  - Microsoft Windows CCP (Microsoft Windows server 2003 and 2008);
    see the README.WINDOWS file.

Compiler Notes
--------------

- Mixing compilers from different vendors when building Open MPI
  (e.g., using the C/C++ compiler from one vendor and the F77/F90
  compiler from a different vendor) has been successfully employed by
  some Open MPI users (discussed on the Open MPI user's mailing list),
  but such configurations are not tested and not documented.  For
  example, such configurations may require additional compiler /
  linker flags to make Open MPI build properly.

- Open MPI does not support the Sparc v8 CPU target, which is the
  default on Sun Solaris.  The v8plus (32 bit) or v9 (64 bit) targets
  must be used to build Open MPI on Solaris.  This can be done by
  including a flag in CFLAGS, CXXFLAGS, FFLAGS, and FCFLAGS:
  -xarch=v8plus for the Sun compilers, -mcpu=v9 for GCC.
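
  For example, a sketch of how this might look with the Sun compilers
  (the exact flags needed may vary with your compiler and version):

  shell$ ./configure CFLAGS=-xarch=v8plus CXXFLAGS=-xarch=v8plus \
         FFLAGS=-xarch=v8plus FCFLAGS=-xarch=v8plus ...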

- At least some versions of the Intel 8.1 compiler seg fault while
  compiling certain Open MPI source code files.  As such, it is not
  supported.

- The Intel 9.0 v20051201 compiler on IA64 platforms seems to have a
  problem with optimizing the ptmalloc2 memory manager component (the
  generated code will segv).  As such, the ptmalloc2 component will
  automatically disable itself if it detects that it is on this
  platform/compiler combination.  The only effect that this should
  have is that the MCA parameter mpi_leave_pinned will be inoperative.

- Early versions of the Portland Group 6.0 compiler have problems
  creating the C++ MPI bindings as a shared library (e.g., v6.0-1).
  Tests with later versions show that this has been fixed (e.g.,
  v6.0-5).

- The Portland Group compilers prior to version 7.0 require the
  "-Msignextend" compiler flag to extend the sign bit when converting
  from a shorter to longer integer.  This is different than other
  compilers (such as GNU).  When compiling Open MPI with the Portland
  compiler suite, the following flags should be passed to Open MPI's
  configure script:

  shell$ ./configure CFLAGS=-Msignextend CXXFLAGS=-Msignextend \
         --with-wrapper-cflags=-Msignextend \
         --with-wrapper-cxxflags=-Msignextend ...

  This will both compile Open MPI with the proper compile flags and
  also automatically add "-Msignextend" when the C and C++ MPI wrapper
  compilers are used to compile user MPI applications.

- Using the MPI C++ bindings with the Pathscale compiler is known
  to fail, possibly due to Pathscale compiler issues.

- Using the Absoft compiler to build the MPI Fortran bindings on Suse
  9.3 is known to fail due to a Libtool compatibility issue.

- Open MPI will build bindings suitable for all common forms of
  Fortran 77 compiler symbol mangling on platforms that support it
  (e.g., Linux).  On platforms that do not support weak symbols (e.g.,
  OS X), Open MPI will build Fortran 77 bindings just for the compiler
  that Open MPI was configured with.

  Hence, on platforms that support it, if you configure Open MPI with
  a Fortran 77 compiler that uses one symbol mangling scheme, you can
  successfully compile and link MPI Fortran 77 applications with a
  Fortran 77 compiler that uses a different symbol mangling scheme.

  NOTE: For platforms that support the multi-Fortran-compiler bindings
  (i.e., weak symbols are supported), due to limitations in the MPI
  standard and in Fortran compilers, it is not possible to hide these
  differences in all cases.  Specifically, the following two cases may
  not be portable between different Fortran compilers:

  1. The C constants MPI_F_STATUS_IGNORE and MPI_F_STATUSES_IGNORE
     will only compare properly to Fortran applications that were
     created with Fortran compilers that use the same name-mangling
     scheme as the Fortran compiler that Open MPI was configured with.

  2. Fortran compilers may have different values for the logical
     .TRUE. constant.  As such, any MPI function that uses the Fortran
     LOGICAL type may only get .TRUE. values back that correspond to
     the .TRUE. value of the Fortran compiler that Open MPI was
     configured with.  Note that some Fortran compilers allow forcing
     .TRUE. to be 1 and .FALSE. to be 0.  For example, the Portland
     Group compilers provide the "-Munixlogical" option, and Intel
     compilers (version >= 8) provide the "-fpscomp logicals" option.

  You can use the ompi_info command to see the Fortran compiler that
  Open MPI was configured with.
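
  For example (a sketch; we assume the relevant lines of ompi_info's
  output contain the string "Fort"):

  shell$ ompi_info | grep Fort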

- The Fortran 90 MPI bindings can now be built in one of three sizes
  using --with-mpi-f90-size=SIZE (see description below).  These sizes
  reflect the number of MPI functions included in the "mpi" Fortran 90
  module and therefore which functions will be subject to strict type
  checking.  All functions not included in the Fortran 90 module can
  still be invoked from F90 applications, but will fall back to
  Fortran-77 style checking (i.e., little/none).

  - trivial: Only includes F90-specific functions from MPI-2.  This
    means overloaded versions of MPI_SIZEOF for all the MPI-supported
    F90 intrinsic types.

  - small (default): All the functions in "trivial" plus all MPI
    functions that take no choice buffers (meaning buffers that are
    specified by the user and are of type (void*) in the C bindings --
    generally buffers specified for message passing).  Hence,
    functions like MPI_COMM_RANK are included, but functions like
    MPI_SEND are not.

  - medium: All the functions in "small" plus all MPI functions that
    take one choice buffer (e.g., MPI_SEND, MPI_RECV, ...).  All
    one-choice-buffer functions have overloaded variants for each of
    the MPI-supported Fortran intrinsic types up to the number of
    dimensions specified by --with-f90-max-array-dim (default value is
    4).

  Increasing the size of the F90 module (in order from trivial, small,
  and medium) will generally increase the length of time required to
  compile user MPI applications.  Specifically, "trivial"- and
  "small"-sized F90 modules generally allow user MPI applications to
  be compiled fairly quickly but lose type safety for all MPI
  functions with choice buffers.  "medium"-sized F90 modules generally
  take longer to compile user applications but provide greater type
  safety for MPI functions.

  Note that MPI functions with two choice buffers (e.g., MPI_GATHER)
  are not currently included in Open MPI's F90 interface.  Calls to
  these functions will automatically fall through to Open MPI's F77
  interface.  A "large" size that includes the two choice buffer MPI
  functions may be added in future versions of Open MPI.
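
  For example, a sketch of selecting the "medium" size with the
  default array-dimension limit, using only the options described
  above:

  shell$ ./configure --with-mpi-f90-size=medium \
         --with-f90-max-array-dim=4 ...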

General Run-Time Support Notes
------------------------------

- The Open MPI installation must be in your PATH on all nodes (and
  potentially LD_LIBRARY_PATH, if libmpi is a shared library), unless
  using the --prefix or --enable-mpirun-prefix-by-default
  functionality (see below).
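
  For example, a sketch for Bourne-style shells, assuming a
  hypothetical installation under /opt/openmpi:

  shell$ export PATH=/opt/openmpi/bin:$PATH
  shell$ export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH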

- Open MPI's run-time behavior can be customized via MCA ("MPI
  Component Architecture") parameters (see below for more information
  on how to get/set MCA parameter values).  Some MCA parameters can be
  set in a way that renders Open MPI inoperable (see notes about MCA
  parameters later in this file).  In particular, some parameters have
  required options that must be included.

- If specified, the "btl" parameter must include the "self"
  component, or Open MPI will not be able to deliver messages to the
  same rank as the sender.  For example: "mpirun --mca btl tcp,self
  ..."

- If specified, the "btl_tcp_if_exclude" parameter must include the
  loopback device ("lo" on many Linux platforms), or Open MPI will
  not be able to route MPI messages using the TCP BTL.  For example:
  "mpirun --mca btl_tcp_if_exclude lo,eth1 ..."

- Running on nodes with different endian and/or different datatype
  sizes within a single parallel job is supported in this release.
  However, Open MPI does not resize data when datatypes differ in size
  (for example, sending a 4 byte MPI_DOUBLE and receiving an 8 byte
  MPI_DOUBLE will fail).

MPI Functionality and Features
------------------------------

- All MPI-2.1 functionality is supported.

- MPI_THREAD_MULTIPLE support is included, but is only lightly tested.
  It likely does not work for thread-intensive applications.  Note
  that *only* the MPI point-to-point communication functions for the
  BTLs listed above are considered thread safe.  Other support
  functions (e.g., MPI attributes) have not been certified as safe
  when simultaneously used by multiple threads.

  Note that Open MPI's thread support is in a fairly early stage; the
  above devices are likely to *work*, but the latency is likely to be
  fairly high.  Specifically, efforts so far have concentrated on
  *correctness*, not *performance* (yet).

- MPI_REAL16 and MPI_COMPLEX32 are only supported on platforms where a
  portable C datatype can be found that matches the Fortran type
  REAL*16, both in size and bit representation.

- The "libompitrace" library is bundled in Open MPI and is installed
  by default (it can be disabled via the --disable-libompitrace
  flag).  This library provides simplistic tracing of select MPI
  function calls via the MPI profiling interface.  Linking it into
  your application (e.g., via -lompitrace) will automatically output
  to stderr when some MPI functions are invoked:

  $ mpicc hello_world.c -o hello_world -lompitrace
  $ mpirun -np 1 hello_world
  MPI_INIT: argc 1
  Hello, world, I am 0 of 1
  MPI_BARRIER[0]: comm MPI_COMM_WORLD
  MPI_FINALIZE[0]
  $

  Keep in mind that the output from the trace library is going to
  stderr, so it may output in a slightly different order than the
  stdout from your application.

  This library is being offered as a "proof of concept" / convenience
  from Open MPI.  If there is interest, it is trivially easy to extend
  it to printf for other MPI functions.  Patches and/or suggestions
  would be gratefully appreciated on the Open MPI developer's list.

Collectives
-----------

- The "hierarch" coll component (i.e., an implementation of MPI
  collective operations) attempts to discover network layers of
  latency in order to segregate individual "local" and "global"
  operations as part of the overall collective operation.  In this
  way, network traffic can be reduced -- or possibly even minimized
  (similar to MagPIe).  The current "hierarch" component only
  separates MPI processes into on- and off-node groups.

  Hierarch has had sufficient correctness testing, but has not
  received much performance tuning.  As such, hierarch is not
  activated by default -- it must be enabled manually by setting its
  priority level to 100:

    mpirun --mca coll_hierarch_priority 100 ...

  We would appreciate feedback from the user community about how well
  hierarch works for your applications.

- The "fca" coll component: the Voltaire Fabric Collective Accelerator
  (FCA) is a solution for offloading collective operations from the
  MPI process onto Voltaire QDR InfiniBand switch CPUs.

  See http://www.voltaire.com/Products/Application_Acceleration_Software/voltaire_fabric_collective_accelerator_fca
  for details.

Network Support
---------------

- The OpenFabrics Enterprise Distribution (OFED) software package v1.0
  will not work properly with Open MPI v1.2 (and later) due to how its
  Mellanox InfiniBand plugin driver is created.  The problem is fixed
  in OFED v1.1 (and later).

- Older mVAPI-based InfiniBand drivers (Mellanox VAPI) are no longer
  supported.  Please use an older version of Open MPI (1.2 series or
  earlier) if you need mVAPI support.

- The use of fork() with the openib BTL is only partially supported,
  and only on Linux kernels >= v2.6.15 with libibverbs v1.1 or later
  (first released as part of OFED v1.2), per restrictions imposed by
  the OFED network stack.

- There are three MPI network models available: "ob1", "csum", and
  "cm".  "ob1" and "csum" use BTL ("Byte Transfer Layer") components
  for each supported network.  "cm" uses MTL ("Matching Transport
  Layer") components for each supported network.

  - "ob1" supports a variety of networks that can be used in
    combination with each other (per OS constraints; e.g., there are
    reports that the GM and OpenFabrics kernel drivers do not operate
    well together):
    - OpenFabrics: InfiniBand and iWARP
    - Loopback (send-to-self)
    - Myrinet: GM and MX (including Open-MX)
    - Portals
    - Quadrics Elan
    - Shared memory
    - TCP
    - SCTP
    - uDAPL

  - "csum" is exactly the same as "ob1", except that it performs
    additional data integrity checks to ensure that the received data
    is intact (vs. trusting the underlying network to deliver the data
    correctly).  csum supports all the same networks as ob1, but there
    is a performance penalty for the additional integrity checks.

  - "cm" supports a smaller number of networks (and they cannot be
    used together), but may provide better overall MPI performance:
    - Myrinet MX (including Open-MX, but not GM)
    - InfiniPath PSM
    - Portals

  Open MPI will, by default, choose to use "cm" when the InfiniPath
  PSM MTL can be used.  Otherwise, "ob1" will be used and the
  corresponding BTLs will be selected.  "csum" will never be selected
  by default.  Users can force the use of ob1 or cm if desired by
  setting the "pml" MCA parameter at run-time:

    shell$ mpirun --mca pml ob1 ...
  or
    shell$ mpirun --mca pml csum ...
  or
    shell$ mpirun --mca pml cm ...

- Myrinet MX (and Open-MX) support is shared between two internal
  devices, the MTL and the BTL.  The design of the BTL interface in
  Open MPI assumes that only naive one-sided communication
  capabilities are provided by the low level communication layers.
  However, modern communication layers such as Myrinet MX, InfiniPath
  PSM, or Portals, natively implement highly-optimized two-sided
  communication semantics.  To leverage these capabilities, Open MPI
  provides the "cm" PML and corresponding MTL components to transfer
  messages rather than bytes.  The MTL interface implements a shorter
  code path and lets the low-level network library decide which
  protocol to use (depending on issues such as message length,
  internal resources and other parameters specific to the underlying
  interconnect).  However, Open MPI cannot currently use multiple MTL
  modules at once.  In the case of the MX MTL, process loopback and
  on-node shared memory communications are provided by the MX library.
  Moreover, the current MX MTL does not support message pipelining,
  resulting in lower performance with non-contiguous datatypes.

  The "ob1" and "csum" PMLs and BTL components use Open MPI's internal
  on-node shared memory and process loopback devices for high
  performance.  The BTL interface allows multiple devices to be used
  simultaneously.  For the MX BTL it is recommended that the first
  segment (which is used as the threshold between the eager and
  rendezvous protocols) should always be at most 4KB, but there is no
  further restriction on the size of subsequent fragments.

  The MX MTL is recommended in the common case for best performance on
  10G hardware when most of the data transfers cover contiguous memory
  layouts.  The MX BTL is recommended in all other cases, such as when
  using multiple interconnects at the same time (including TCP), or
  transferring non-contiguous datatypes.

- Linux "knem" support is used when the "sm" (shared memory) BTL is
  compiled with knem support (see the --with-knem configure option)
  and the knem Linux module is loaded in the running kernel.  If the
  knem Linux kernel module is not loaded, the knem support is (by
  default) silently deactivated during Open MPI jobs.

  See http://runtime.bordeaux.inria.fr/knem/ for details on knem.

Open MPI Extensions
-------------------

- An extensions framework has been added.  See the "Open MPI API
  Extensions" section below for more information on compiling and
  using extensions.

- The following extensions are included in this version of Open MPI:

  - affinity: Provides the OMPI_Affinity_str() routine for retrieving
    a string that describes what resources a process is bound to.  See
    its man page for more details.
  - cr: Provides routines to access checkpoint/restart functionality.
    See ompi/mpiext/cr/mpiext_cr_c.h for a listing of available
    functions.
  - example: A non-functional extension; its only purpose is to
    provide an example for how to create other extensions.

===========================================================================

Building Open MPI
-----------------

Open MPI uses a traditional configure script paired with "make" to
build.  Typical installs can be of the pattern:

---------------------------------------------------------------------------
shell$ ./configure [...options...]
shell$ make all install
---------------------------------------------------------------------------

There are many available configure options (see "./configure --help"
for a full list); a summary of the more commonly used ones follows:

--prefix=<directory>
  Install Open MPI into the base directory named <directory>.  Hence,
  Open MPI will place its executables in <directory>/bin, its header
  files in <directory>/include, its libraries in <directory>/lib, etc.

--with-elan=<directory>
  Specify the directory where the Quadrics Elan library and header
  files are located.  This option is generally only necessary if the
  Elan headers and libraries are not in default compiler/linker
  search paths.

  Elan is the support library for Quadrics-based networks.

--with-elan-libdir=<directory>
  Look in directory for the Quadrics Elan libraries.  By default, Open
  MPI will look in <elan directory>/lib and <elan directory>/lib64,
  which covers most cases.  This option is only needed for special
  configurations.

--with-gm=<directory>
  Specify the directory where the GM libraries and header files are
  located.  This option is generally only necessary if the GM headers
  and libraries are not in default compiler/linker search paths.

  GM is the support library for older Myrinet-based networks (GM has
  been obsoleted by MX).

--with-gm-libdir=<directory>
  Look in directory for the GM libraries.  By default, Open MPI will
  look in <gm directory>/lib and <gm directory>/lib64, which covers
  most cases.  This option is only needed for special configurations.

--with-hwloc=<location>
  Build hwloc support.  If <location> is "internal", Open MPI's
  internal copy of hwloc is used.  If <location> is "external", Open
  MPI will search in default locations for an hwloc installation.
  Finally, if <location> is a directory, that directory will be
  searched for a valid hwloc installation, just like other
  --with-FOO=<directory> configure options.

  hwloc is a support library that provides processor and memory
  affinity information for NUMA platforms.

--with-hwloc-libdir=<directory>
  Look in directory for the hwloc libraries.  This option is only
  usable when building Open MPI against an external hwloc
  installation.  Just like other --with-FOO-libdir configure options,
  this option is only needed for special configurations.
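
  For example, a sketch of building against a hypothetical external
  hwloc installed under /opt/hwloc:

  shell$ ./configure --with-hwloc=/opt/hwloc \
         --with-hwloc-libdir=/opt/hwloc/lib ...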

--with-knem=<directory>
  Specify the directory where the knem libraries and header files are
  located.  This option is generally only necessary if the knem
  headers and libraries are not in default compiler/linker search
  paths.

  knem is a Linux kernel module that allows direct process-to-process
  memory copies (optionally using hardware offload), potentially
  increasing bandwidth for large messages sent between processes on
  the same server.  See http://runtime.bordeaux.inria.fr/knem/ for
  details.
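
  For example, a sketch of enabling knem support from a hypothetical
  installation prefix:

  shell$ ./configure --with-knem=/opt/knem ...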

--with-mx=<directory>
  Specify the directory where the MX libraries and header files are
  located.  This option is generally only necessary if the MX headers
  and libraries are not in default compiler/linker search paths.

  MX is the support library for Myrinet-based networks.  An open
  source software package named Open-MX provides the same
  functionality on Ethernet-based clusters (Open-MX can provide
  MPI performance improvements compared to TCP messaging).

--with-mx-libdir=<directory>
  Look in directory for the MX libraries.  By default, Open MPI will
  look in <mx directory>/lib and <mx directory>/lib64, which covers
  most cases.  This option is only needed for special configurations.

--with-openib=<directory>
  Specify the directory where the OpenFabrics (previously known as
  OpenIB) libraries and header files are located.  This option is
  generally only necessary if the OpenFabrics headers and libraries
  are not in default compiler/linker search paths.

  "OpenFabrics" refers to iWARP- and InfiniBand-based networks.

--with-openib-libdir=<directory>
  Look in directory for the OpenFabrics libraries.  By default, Open
  MPI will look in <openib directory>/lib and <openib
  directory>/lib64, which covers most cases.  This option is only
  needed for special configurations.

--with-portals=<directory>
  Specify the directory where the Portals libraries and header files
  are located.  This option is generally only necessary if the Portals
  headers and libraries are not in default compiler/linker search
  paths.

  Portals is the support library for Cray interconnects, but is also
  available on other platforms (e.g., there is a Portals library
  implemented over regular TCP).

--with-portals-config=<type>
  Configuration to use for Portals support.  The following <type>
  values are possible: "utcp", "xt3", "xt3-modex" (default: utcp).

--with-portals-libs=<libs>
  Additional libraries to link with for Portals support.

--with-psm=<directory>
  Specify the directory where the QLogic InfiniPath PSM library and
  header files are located.  This option is generally only necessary
  if the InfiniPath headers and libraries are not in default
  compiler/linker search paths.

  PSM is the support library for QLogic InfiniPath network adapters.

--with-psm-libdir=<directory>
  Look in directory for the PSM libraries.  By default, Open MPI will
  look in <psm directory>/lib and <psm directory>/lib64, which covers
  most cases.  This option is only needed for special configurations.

--with-sctp=<directory>
  Specify the directory where the SCTP libraries and header files are
  located.  This option is generally only necessary if the SCTP
  headers and libraries are not in default compiler/linker search
  paths.

  SCTP is a special network stack over Ethernet networks.

--with-sctp-libdir=<directory>
  Look in directory for the SCTP libraries.  By default, Open MPI will
  look in <sctp directory>/lib and <sctp directory>/lib64, which
  covers most cases.  This option is only needed for special
  configurations.

--with-udapl=<directory>
  Specify the directory where the UDAPL libraries and header files are
  located.  Note that UDAPL support is disabled by default on Linux;
  the --with-udapl flag must be specified in order to enable it.
  Specifying the directory argument is generally only necessary if the
  UDAPL headers and libraries are not in default compiler/linker
  search paths.

  UDAPL is the support library for high performance networks in Sun
  HPC ClusterTools and on Linux OpenFabrics networks (although the
  "openib" options are preferred for Linux OpenFabrics networks, not
  UDAPL).

--with-udapl-libdir=<directory>
  Look in directory for the UDAPL libraries.  By default, Open MPI
  will look in <udapl directory>/lib and <udapl directory>/lib64,
  which covers most cases.  This option is only needed for special
  configurations.

--with-lsf=<directory>
  Specify the directory where the LSF libraries and header files are
  located.  This option is generally only necessary if the LSF headers
  and libraries are not in default compiler/linker search paths.

  LSF is a resource manager system, frequently used as a batch
  scheduler in HPC systems.

  NOTE: If you are using LSF version 7.0.5, you will need to add
  "LIBS=-ldl" to the configure command line.  For example:

  shell$ ./configure LIBS=-ldl --with-lsf ...

  This workaround should *only* be needed for LSF 7.0.5.

--with-lsf-libdir=<directory>
  Look in directory for the LSF libraries.  By default, Open MPI will
  look in <lsf directory>/lib and <lsf directory>/lib64, which covers
  most cases.  This option is only needed for special configurations.

--with-tm=<directory>
  Specify the directory where the TM libraries and header files are
  located.  This option is generally only necessary if the TM headers
  and libraries are not in default compiler/linker search paths.

  TM is the support library for the Torque and PBS Pro resource
  manager systems, both of which are frequently used as a batch
  scheduler in HPC systems.

--with-sge
  Specify to build support for the Sun Grid Engine (SGE) resource
  manager.  SGE support is disabled by default; this option must be
  specified to build OMPI's SGE support.

  The Sun Grid Engine (SGE) is a resource manager system, frequently
  used as a batch scheduler in HPC systems.

--with-esmtp=<directory>
  Specify the directory where the libESMTP libraries and header files
  are located.  This option is generally only necessary if the
  libESMTP headers and libraries are not included in the default
  compiler/linker search paths.

  libESMTP is a support library for sending e-mail.

--with-mpi-param-check(=value)
  "value" can be one of: always, never, runtime.  If
  --with-mpi-param-check is not specified, "runtime" is the default.
  If --with-mpi-param-check is specified with no value, "always" is
  used.  Using --without-mpi-param-check is equivalent to "never".

  - always: the parameters of MPI functions are always checked for
    errors
  - never: the parameters of MPI functions are never checked for
    errors
  - runtime: whether the parameters of MPI functions are checked
    depends on the value of the MCA parameter mpi_param_check
    (default: yes).
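
  For example, a sketch of building with run-time-selectable checking
  and then disabling it for a single run (we assume here that the
  mpi_param_check MCA parameter accepts 0/1 boolean values):

  shell$ ./configure --with-mpi-param-check=runtime ...
  shell$ mpirun --mca mpi_param_check 0 ...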

--with-threads=value
  Since thread support is only partially tested, it is disabled by
  default.  To enable threading, use "--with-threads=posix".  This is
  most useful when combined with --enable-mpi-thread-multiple.

--enable-mpi-thread-multiple
  Allows the MPI thread level MPI_THREAD_MULTIPLE.  See
  --with-threads; this is currently disabled by default.  Enabling
  this feature automatically enables --enable-opal-multi-threads.

--with-fca=<directory>
  Specify the directory where the Voltaire FCA library and
  header files are located.

  FCA is the support library for Voltaire QDR switches.

--enable-opal-multi-threads
  Enables thread lock support in the OPAL and ORTE layers.  Does not
  enable MPI_THREAD_MULTIPLE; see the option above for that feature.
  This is currently disabled by default.
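
  For example, a sketch of a build with MPI_THREAD_MULTIPLE enabled,
  using only the options described above:

  shell$ ./configure --with-threads=posix \
         --enable-mpi-thread-multiple ...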

--disable-mpi-cxx
  Disable building the C++ MPI bindings.  Note that this does *not*
  disable the C++ checks during configure; some of Open MPI's tools
  are written in C++ and therefore require a C++ compiler to be built.

--disable-mpi-cxx-seek
  Disable the MPI::SEEK_* constants.  Due to a problem with the MPI-2
  specification, these constants can conflict with system-level SEEK_*
  constants.  Open MPI attempts to work around this problem, but the
  workaround may fail in some esoteric situations.  The
  --disable-mpi-cxx-seek switch disables Open MPI's workarounds (and
  therefore the MPI::SEEK_* constants will be unavailable).

--disable-mpi-f77
  Disable building the Fortran 77 MPI bindings.

--disable-mpi-f90
  Disable building the Fortran 90 MPI bindings.