George Bosilca 3078be40aa First stable version of the MX BTL (at least we pass NetPipe). The performance is
not amazing, but it is not bad either.

On 2 Intel(R) Xeon(TM) 3.20GHz processors connected by a MYRICOM Inc. Myrinet 2000 Scalable Cluster Interconnect (rev 04) I get:

  0:       1 bytes  13096 times -->      1.10 Mbps in       6.94 usec
  1:       2 bytes  14408 times -->      2.17 Mbps in       7.02 usec
  2:       3 bytes  14243 times -->      3.24 Mbps in       7.07 usec
  3:       4 bytes   9428 times -->      4.27 Mbps in       7.15 usec
  4:       6 bytes  10493 times -->      6.26 Mbps in       7.32 usec
  5:       8 bytes   6834 times -->      8.18 Mbps in       7.47 usec
  6:      12 bytes   8371 times -->     11.89 Mbps in       7.70 usec
  7:      13 bytes   5411 times -->     12.72 Mbps in       7.80 usec
  8:      16 bytes   5919 times -->     15.35 Mbps in       7.95 usec
  9:      19 bytes   7074 times -->     17.66 Mbps in       8.21 usec
 10:      21 bytes   7696 times -->     19.00 Mbps in       8.43 usec
 11:      24 bytes   7906 times -->     20.87 Mbps in       8.77 usec
 12:      27 bytes   8073 times -->     23.05 Mbps in       8.94 usec
 13:      29 bytes   4972 times -->     24.32 Mbps in       9.10 usec
 14:      32 bytes   5307 times -->     26.29 Mbps in       9.29 usec
 15:      35 bytes   5720 times -->     33.61 Mbps in       7.95 usec
 16:      45 bytes   7191 times -->     39.50 Mbps in       8.69 usec
 17:      48 bytes   7670 times -->     41.33 Mbps in       8.86 usec
 18:      51 bytes   7759 times -->     42.80 Mbps in       9.09 usec
 19:      61 bytes   4313 times -->     47.44 Mbps in       9.81 usec
 20:      64 bytes   5012 times -->     57.61 Mbps in       8.48 usec
 21:      67 bytes   6083 times -->     59.31 Mbps in       8.62 usec
 22:      93 bytes   6234 times -->     68.08 Mbps in      10.42 usec
 23:      96 bytes   6396 times -->     80.65 Mbps in       9.08 usec
 24:      99 bytes   7455 times -->     81.56 Mbps in       9.26 usec
 25:     125 bytes   3926 times -->    112.46 Mbps in       8.48 usec
 26:     128 bytes   5848 times -->    116.87 Mbps in       8.36 usec
 27:     131 bytes   6077 times -->    119.22 Mbps in       8.38 usec
 28:     189 bytes   6192 times -->    163.79 Mbps in       8.80 usec
 29:     192 bytes   7572 times -->    168.01 Mbps in       8.72 usec
 30:     195 bytes   7705 times -->    171.13 Mbps in       8.69 usec
 31:     253 bytes   4011 times -->    210.21 Mbps in       9.18 usec
 32:     256 bytes   5423 times -->    214.55 Mbps in       9.10 usec
 33:     259 bytes   5535 times -->    217.64 Mbps in       9.08 usec
 34:     381 bytes   5613 times -->    290.55 Mbps in      10.00 usec
 35:     384 bytes   6663 times -->    296.11 Mbps in       9.89 usec
 36:     387 bytes   6764 times -->    298.74 Mbps in       9.88 usec
 37:     509 bytes   3451 times -->    353.78 Mbps in      10.98 usec
 38:     512 bytes   4546 times -->    359.36 Mbps in      10.87 usec
 39:     515 bytes   4617 times -->    361.53 Mbps in      10.87 usec
 40:     765 bytes   4645 times -->    461.41 Mbps in      12.65 usec
 41:     768 bytes   5270 times -->    468.59 Mbps in      12.50 usec
 42:     771 bytes   5341 times -->    470.16 Mbps in      12.51 usec
 43:    1021 bytes   2695 times -->    508.42 Mbps in      15.32 usec
 44:    1024 bytes   3260 times -->    514.44 Mbps in      15.19 usec
 45:    1027 bytes   3298 times -->    515.72 Mbps in      15.19 usec
 46:    1533 bytes   3307 times -->    707.12 Mbps in      16.54 usec
 47:    1536 bytes   4030 times -->    714.93 Mbps in      16.39 usec
 48:    1539 bytes   4071 times -->    714.41 Mbps in      16.44 usec
 49:    2045 bytes   2040 times -->    761.38 Mbps in      20.49 usec
 50:    2048 bytes   2438 times -->    769.78 Mbps in      20.30 usec
 51:    2051 bytes   2465 times -->    769.78 Mbps in      20.33 usec
 52:    3069 bytes   2465 times -->    923.43 Mbps in      25.36 usec
 53:    3072 bytes   2629 times -->    928.48 Mbps in      25.24 usec
 54:    3075 bytes   2642 times -->    929.07 Mbps in      25.25 usec
 55:    4093 bytes   1323 times -->   1012.38 Mbps in      30.85 usec
 56:    4096 bytes   1620 times -->   1016.69 Mbps in      30.74 usec
 57:    4099 bytes   1627 times -->   1015.16 Mbps in      30.81 usec
 58:    6141 bytes   1625 times -->   1171.82 Mbps in      39.98 usec
 59:    6144 bytes   1667 times -->   1173.85 Mbps in      39.93 usec
 60:    6147 bytes   1669 times -->   1174.44 Mbps in      39.93 usec
 61:    8189 bytes    835 times -->   1232.43 Mbps in      50.69 usec
 62:    8192 bytes    986 times -->   1234.87 Mbps in      50.61 usec
 63:    8195 bytes    988 times -->   1234.85 Mbps in      50.63 usec
 64:   12285 bytes    988 times -->   1360.73 Mbps in      68.88 usec
 65:   12288 bytes    967 times -->   1364.20 Mbps in      68.72 usec
 66:   12291 bytes    970 times -->   1364.56 Mbps in      68.72 usec
 67:   16381 bytes    485 times -->   1385.48 Mbps in      90.21 usec
 68:   16384 bytes    554 times -->   1388.76 Mbps in      90.01 usec
 69:   16387 bytes    555 times -->   1388.41 Mbps in      90.05 usec
 70:   24573 bytes    555 times -->   1499.72 Mbps in     125.01 usec
 71:   24576 bytes    533 times -->   1499.36 Mbps in     125.05 usec
 72:   24579 bytes    533 times -->   1500.44 Mbps in     124.98 usec
 73:   32765 bytes    266 times -->   1499.31 Mbps in     166.73 usec
 74:   32768 bytes    299 times -->   1497.10 Mbps in     166.99 usec
 75:   32771 bytes    299 times -->   1495.29 Mbps in     167.21 usec
 76:   49149 bytes    299 times -->   1528.78 Mbps in     245.28 usec
 77:   49152 bytes    271 times -->   1527.97 Mbps in     245.42 usec
 78:   49155 bytes    271 times -->   1529.35 Mbps in     245.22 usec
 79:   65533 bytes    135 times -->   1586.19 Mbps in     315.21 usec
 80:   65536 bytes    158 times -->   1591.11 Mbps in     314.25 usec
 81:   65539 bytes    159 times -->   1586.50 Mbps in     315.17 usec
 82:   98301 bytes    158 times -->   1668.05 Mbps in     449.61 usec
 83:   98304 bytes    148 times -->   1667.40 Mbps in     449.80 usec
 84:   98307 bytes    148 times -->   1667.29 Mbps in     449.84 usec
 85:  131069 bytes     74 times -->   1709.11 Mbps in     585.09 usec
 86:  131072 bytes     85 times -->   1711.09 Mbps in     584.42 usec
 87:  131075 bytes     85 times -->   1710.92 Mbps in     584.49 usec
 88:  196605 bytes     85 times -->   1727.93 Mbps in     868.08 usec
 89:  196608 bytes     76 times -->   1726.28 Mbps in     868.92 usec
 90:  196611 bytes     76 times -->   1727.06 Mbps in     868.54 usec
 91:  262141 bytes     38 times -->   1757.65 Mbps in    1137.87 usec
 92:  262144 bytes     43 times -->   1758.69 Mbps in    1137.21 usec
 93:  262147 bytes     43 times -->   1759.38 Mbps in    1136.78 usec
 94:  393213 bytes     43 times -->   1801.51 Mbps in    1665.25 usec
 95:  393216 bytes     40 times -->   1803.26 Mbps in    1663.65 usec
 96:  393219 bytes     40 times -->   1800.73 Mbps in    1666.00 usec
 97:  524285 bytes     20 times -->   1805.33 Mbps in    2215.65 usec
 98:  524288 bytes     22 times -->   1806.80 Mbps in    2213.86 usec
 99:  524291 bytes     22 times -->   1805.77 Mbps in    2215.14 usec
100:  786429 bytes     22 times -->   1827.24 Mbps in    3283.64 usec
101:  786432 bytes     20 times -->   1827.03 Mbps in    3284.03 usec
102:  786435 bytes     20 times -->   1827.20 Mbps in    3283.73 usec
103: 1048573 bytes     10 times -->   1840.05 Mbps in    4347.71 usec
104: 1048576 bytes     11 times -->   1839.68 Mbps in    4348.58 usec
105: 1048579 bytes     11 times -->   1840.13 Mbps in    4347.54 usec
106: 1572861 bytes     11 times -->   1853.99 Mbps in    6472.50 usec
107: 1572864 bytes     10 times -->   1854.11 Mbps in    6472.10 usec
108: 1572867 bytes     10 times -->   1854.12 Mbps in    6472.10 usec
109: 2097149 bytes      5 times -->   1861.41 Mbps in    8595.61 usec
110: 2097152 bytes      5 times -->   1861.25 Mbps in    8596.40 usec
111: 2097155 bytes      5 times -->   1860.99 Mbps in    8597.59 usec
112: 3145725 bytes      5 times -->   1868.34 Mbps in   12845.59 usec
113: 3145728 bytes      5 times -->   1868.30 Mbps in   12845.90 usec
114: 3145731 bytes      5 times -->   1868.59 Mbps in   12843.89 usec
115: 4194301 bytes      3 times -->   1872.16 Mbps in   17092.51 usec
116: 4194304 bytes      3 times -->   1872.31 Mbps in   17091.19 usec
117: 4194307 bytes      3 times -->   1872.13 Mbps in   17092.82 usec
118: 6291453 bytes      3 times -->   1875.88 Mbps in   25588.00 usec
119: 6291456 bytes      3 times -->   1875.98 Mbps in   25586.68 usec
120: 6291459 bytes      3 times -->   1875.93 Mbps in   25587.36 usec
121: 8388605 bytes      3 times -->   1877.79 Mbps in   34082.69 usec
122: 8388608 bytes      3 times -->   1877.72 Mbps in   34083.84 usec
123: 8388611 bytes      3 times -->   1877.66 Mbps in   34085.00 usec

This commit was SVN r7180.

Copyright (c) 2004-2005 The Trustees of Indiana University.
                        All rights reserved.
Copyright (c) 2004-2005 The Trustees of the University of Tennessee.
                        All rights reserved.
Copyright (c) 2004-2005 High Performance Computing Center Stuttgart, 
                        University of Stuttgart.  All rights reserved.
Copyright (c) 2004-2005 The Regents of the University of California.
                        All rights reserved.
$COPYRIGHT$

Additional copyrights may follow

$HEADER$

===========================================================================
This is a preliminary README file.  It will be scrubbed formally
before release.
===========================================================================

The best way to report bugs, send comments, or ask questions is to
sign up on the user's and/or developer's mailing list (for user-level
and developer-level questions; when in doubt, send to the user's
list):

        users@open-mpi.org
        devel@open-mpi.org

Because of spam, only subscribers are allowed to post to these lists
(ensure that you subscribe with and post from exactly the same e-mail
address -- joe@example.com is considered different than
joe@mycomputer.example.com!).  Visit these pages to subscribe to the
lists:

     http://www.open-mpi.org/mailman/listinfo.cgi/users
     http://www.open-mpi.org/mailman/listinfo.cgi/devel

Thanks for your time.

===========================================================================

The following abbreviated list of release notes applies to this code
base as of this writing (26 Aug 2005):

- Open MPI includes support for a wide variety of supplemental
  hardware and software packages.  When configuring Open MPI, you may
  need to supply additional flags to the "configure" script in order
  to tell Open MPI where the header files, libraries, and any other
  required files are located.  As such, running "configure" by itself
  may not include support for all the devices (etc.) that you expect,
  especially if their support headers / libraries are installed in
  non-standard locations.  Network interconnects are an easy example
  to discuss -- Myrinet and Infiniband, for example, both have
  supplemental headers and libraries that must be found before Open
  MPI can build support for them.  You must specify where these files
  are with the appropriate options to configure.  See the listing of
  configure command-line switches, below, for more details.

- The Open MPI installation must be in your PATH on all nodes (and
  potentially LD_LIBRARY_PATH, if libmpi is a shared library).

- LAM/MPI-like mpirun notation of "C" and "N" is not yet supported.

- Shared memory support will not function properly on machines that
  have a weak memory consistency mode.  The default in this beta is to
  disable shared memory support on all Power PC architectures, even
  though some Power PC platforms have strong memory consistency
  models.  See the description of the --enable-ptl-sm configure flag,
  below.

- Striping MPI messages across multiple networks is supported (and
  happens automatically when multiple networks are available), but
  needs performance tuning.

- The run-time systems that are currently supported are:
  - rsh / ssh
  - Recent versions of BProc
  - PBS Pro, Open PBS, Torque (i.e., anything that supports the TM
    interface)
  - SLURM
  - POE

- Complete user and system administrator documentation is missing
  (this file comprises the majority of the current user
  documentation).

- MPI-2 one-sided functionality will not be included in the first few
  releases of Open MPI.

- Systems that have been tested are:
  - Linux, 32 bit, with gcc
  - Linux, 64 bit (x86), with gcc
  - OS X (10.3), 32 bit, with gcc
  - OS X (10.4), 32 bit, with gcc

- Other systems have been lightly (but not fully) tested:
  - Other compilers on Linux, 32 bit
  - Other 64 bit platforms (PPC64, Sparc)

- In some cases, the directory
  /tmp/openmpi-sessions-<username>@<hostname>* will still exist after
  an MPI application has run (but will likely be empty).  It is safe
  to remove it after the run is complete.

- The MPI and run-time layers do not free all used memory properly
  during MPI_FINALIZE.

- Running on nodes with different endian and/or different datatype
  sizes within a single parallel application is not supported in this
  beta.

- Threading support (both asynchronous progress and
  MPI_THREAD_MULTIPLE) is included, but is only lightly tested.

- Due to limitations in the Libtool 1.5 series, Fortran 90 MPI
  bindings support can only be built as a static library.  It is
  expected that Libtool 2.0 will be able to support shared libraries
  for the Fortran 90 bindings.

- On Linux, if either the malloc_hooks or malloc_interpose memory
  hooks are enabled, it will not be possible to link against a static
  libc.a.  libmpi can still be built statically -- it is only the
  final application link step that cannot be static.  If applications
  must be statically linked, it is recommended that you configure Open
  MPI with the --without-memory-manager option.

===========================================================================

Building Open MPI
-----------------

Open MPI uses a traditional configure script paired with "make" to
build.  Typical installs follow this pattern:

---------------------------------------------------------------------------
shell$ ./configure [...options...]
shell$ make all install
---------------------------------------------------------------------------

There are many available configure options (see "./configure --help"
for a full list); a summary of the more important ones follows:

--prefix=<directory>
  Install Open MPI into the base directory named <directory>.  Hence,
  Open MPI will place its executables in <directory>/bin, its header
  files in <directory>/include, its libraries in <directory>/lib, etc.

--with-btl-gm=<directory>
  Specify the directory where the GM libraries and header files are
  located.  This enables GM support in Open MPI.

--with-btl-mx=<directory>
  Specify the directory where the MX libraries and header files are
  located.  This enables MX support in Open MPI.

--with-btl-mvapi=<directory>
  Specify the directory where the mVAPI libraries and header files are
  located.  This enables mVAPI support in Open MPI.

--with-btl-openib=<directory>
  Specify the directory where the Open IB libraries and header files are
  located.  This enables Open IB support in Open MPI.
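
  For example, if MX and GM are installed under non-default prefixes
  (the paths below are hypothetical; each prefix is expected to
  contain the usual include/ and lib/ subdirectories), the options
  might look like:

  shell$ ./configure --with-btl-mx=/opt/mx --with-btl-gm=/opt/gm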

--with-mpi-param-check(=value)
  "value" can be one of: always, never, runtime.  If no value is
  specified, or this option is not used, "always" is the default.
  Using --without-mpi-param-check is equivalent to "never".
  - always: the parameters of MPI functions are always checked for
    errors 
  - never: the parameters of MPI functions are never checked for
    errors 
  - runtime: whether the parameters of MPI functions are checked
    depends on the value of the MCA parameter mpi_param_check
    (default: yes).
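
  For example, a sketch of deferring the decision to run time and then
  turning checking off for a single run (assuming the mpi_param_check
  MCA parameter accepts 0/1 boolean values):

  shell$ ./configure --with-mpi-param-check=runtime
  shell$ mpirun --mca mpi_param_check 0 -np 2 hello_world_mpi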

--with-threads=value
  Since thread support (both support for MPI_THREAD_MULTIPLE and
  asynchronous progress) is only partially tested, it is disabled by
  default.  To enable threading, use "--with-threads=posix".  This is
  most useful when combined with --enable-mpi-threads and/or
  --enable-progress-threads.

--enable-mpi-threads
  Allows the MPI thread level MPI_THREAD_MULTIPLE.  See
  --with-threads; this is currently disabled by default.

--enable-progress-threads
  Allows asynchronous progress in some transports.  See
  --with-threads; this is currently disabled by default.
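
  For example, a hypothetical configure line enabling both thread
  features described above:

  shell$ ./configure --with-threads=posix --enable-mpi-threads \
             --enable-progress-threads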

--disable-f77
  Disable building the Fortran 77 MPI bindings.

--disable-f90
  Disable building the Fortran 90 MPI bindings.  Also related to the
  --with-f90-max-array-dim option.

--with-f90-max-array-dim=<DIM>
  The F90 MPI bindings are strictly typed, even down to the number of
  array dimensions for MPI choice buffer parameters.  Open MPI
  generates these bindings at compile time with a maximum number of
  dimensions as specified by this parameter.  The default value is 4.

--disable-shared
  By default, libmpi is built as a shared library, and all components
  are built as dynamic shared objects (DSOs).  This switch disables
  this default; it is really only useful when used with
  --enable-static.

--enable-static
  Build libmpi as a static library, and statically link in all
  components.
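
  For example, a sketch of a mostly-static build (the
  --without-memory-manager option follows the Linux static-linking
  note in the release notes above):

  shell$ ./configure --enable-static --disable-shared --without-memory-manager
  shell$ make all install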

There are several other options available -- see "./configure --help".

Open MPI supports all the "make" targets that are provided by GNU
Automake, such as:

all       - build the entire Open MPI package
install   - install Open MPI
uninstall - remove all traces of Open MPI from the $prefix
clean     - clean out the build tree

Once Open MPI has been built and installed, it is safe to run "make
clean" and/or remove the entire build tree.

VPATH builds are fully supported.

Generally speaking, the only thing that users need to do to use Open
MPI is ensure that <prefix>/bin is in their PATH.  Users may need to
ensure that this directory is set in their PATH in their shell setup
files (e.g., .bashrc, .cshrc) so that rsh/ssh-based logins will be
able to find the Open MPI executables.

Setting LD_LIBRARY_PATH is typically not necessary, but if libmpi.so
cannot be found when MPI applications are run, <prefix>/lib should be
added to LD_LIBRARY_PATH.
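
For example, for Bourne-style shells, adding lines like the following
to your shell setup file (e.g., .bashrc) is usually sufficient;
replace <prefix> with the directory given to configure's --prefix:

---------------------------------------------------------------------------
PATH=<prefix>/bin:$PATH
LD_LIBRARY_PATH=<prefix>/lib:$LD_LIBRARY_PATH
export PATH LD_LIBRARY_PATH
---------------------------------------------------------------------------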

===========================================================================

Checking Your Open MPI Installation
-----------------------------------

The "ompi_info" command can be used to check the status of your Open
MPI installation (located in <prefix>/bin/ompi_info).  Running it with
no arguments provides a summary of information about your Open MPI
installation.   

Note that the ompi_info command is extremely helpful in determining
which components are installed as well as listing all the run-time
settable parameters that are available in each component (as well as
their default values).

The following options may be helpful:

--all       Show a *lot* of information about your Open MPI
            installation. 
--parsable  Display all the information in an easily
            grep/cut/awk/sed-able format.
--param <framework> <component>
            A <framework> of "all" and a <component> of "all" will
            show all parameters to all components.  Otherwise, the
            parameters of all the components in a specific framework,
            or just the parameters of a specific component can be
            displayed by using an appropriate <framework> and/or
            <component> name.
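
For example (the framework and component names below are only
illustrations; the exact set depends on how your copy was built):

shell$ ompi_info
shell$ ompi_info --param btl all
shell$ ompi_info --parsable --all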

Changing the values of these parameters is explained in the "The
Modular Component Architecture (MCA)" section, below.

===========================================================================

Compiling Open MPI Applications
-------------------------------

Open MPI provides "wrapper" compilers that should be used for
compiling MPI applications:

C:          mpicc
C++:        mpiCC (or mpic++ if your filesystem is case-insensitive)
Fortran 77: mpif77
Fortran 90: mpif90

For example:

shell$ mpicc hello_world_mpi.c -o hello_world_mpi -g
shell$

All the wrapper compilers do is add a variety of compiler and linker
flags to the command line and then invoke a back-end compiler.  The
end result is an MPI executable that is properly linked to all the
relevant libraries.
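
For example, compiling and linking as separate steps also works; the
source file names here are hypothetical:

shell$ mpicc -c hello_world_mpi.c
shell$ mpicc hello_world_mpi.o -o hello_world_mpi
shell$ mpif77 hello_world_f77.f -o hello_world_f77 -g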

===========================================================================

Running Open MPI Applications
-----------------------------

Open MPI supports both mpirun and mpiexec (they are actually the
same).  For example:

shell$ mpirun -np 2 hello_world_mpi

or

shell$ mpiexec -np 1 hello_world_mpi : -np 1 hello_world_mpi

are equivalent.  Many of mpiexec's switches (such as -host and -arch)
are not yet functional, although they will not produce an error if you
try to use them.

Since rsh is probably the launcher that you will be using (if you are
outside of Los Alamos National Laboratory), you can also specify a
-hostfile parameter, indicating a standard mpirun-style hostfile (one
hostname per line):

shell$ mpirun -hostfile my_hostfile -np 2 hello_world_mpi

If you intend to run more than one process on a node, the hostfile can
use the "slots" attribute.  If "slots" is not specified, a count of 1
is assumed.  For example, using the following hostfile:

---------------------------------------------------------------------------
node1.example.com
node2.example.com
node3.example.com slots=2
node4.example.com slots=4
---------------------------------------------------------------------------

shell$ mpirun -hostfile my_hostfile -np 8 hello_world_mpi

will launch MPI_COMM_WORLD rank 0 on node1, rank 1 on node2, ranks 2
and 3 on node3, and ranks 4 through 7 on node4.

Note that the values of component parameters can be changed on the
mpirun / mpiexec command line.  This is explained in the section
below, "The Modular Component Architecture (MCA)".

===========================================================================

The Modular Component Architecture (MCA)
----------------------------------------

The MCA is the backbone of Open MPI -- most services and functionality
are implemented through MCA components.  Here is a list of all the
component frameworks in Open MPI:

---------------------------------------------------------------------------
MPI component frameworks:
-------------------------

allocator - Memory allocator
bml       - BTL management layer
btl       - MPI point-to-point byte transfer layer
coll      - MPI collective algorithms
io        - MPI-2 I/O
mpool     - Memory pooling
pml       - MPI point-to-point management layer
topo      - MPI topology routines

Back-end run-time environment component frameworks:
---------------------------------------------------

errmgr    - RTE error manager
gpr       - General purpose registry
iof       - I/O forwarding
ns        - Name server
oob       - Out of band messaging
pls       - Process launch system
ras       - Resource allocation system
rds       - Resource discovery system
rmaps     - Resource mapping system
rmgr      - Resource manager
rml       - RTE message layer
soh       - State of health monitor

Miscellaneous frameworks:
-------------------------

maffinity - Memory affinity
memory    - Memory subsystem hooks
paffinity - Processor affinity
timer     - High-resolution timers

---------------------------------------------------------------------------

Each framework typically has one or more components that are used at
run-time.  For example, the btl framework is used by MPI to send bytes
across underlying networks.  The tcp btl, for example, sends messages
across TCP-based networks; the gm btl sends messages across GM
Myrinet-based networks.

Each component typically has some tunable parameters that can be
changed at run-time.  Use the ompi_info command to check a component
to see what its tunable parameters are.  For example:

shell$ ompi_info --param btl tcp

shows all the parameters (and default values) for the tcp btl
component.

These values can be overridden at run-time in several ways.  At
run-time, the following locations are examined (in order) for new
values of parameters:

1. <prefix>/etc/openmpi-mca-params.conf

   This file is intended to set any system-wide default MCA parameter
   values -- it will apply, by default, to all users who use this Open
   MPI installation.  The default file that is installed contains many
   comments explaining its format.

2. $HOME/.openmpi/mca-params.conf

   If this file exists, it should be in the same format as
   <prefix>/etc/openmpi-mca-params.conf.  It is intended to provide
   per-user default parameter values.

3. environment variables of the form OMPI_MCA_<name> set equal to a
   <value>

   Where <name> is the name of the parameter.  For example, set the
   variable named OMPI_MCA_btl_tcp_frag_size to the value 65536
   (Bourne-style shells):

   shell$ OMPI_MCA_btl_tcp_frag_size=65536
   shell$ export OMPI_MCA_btl_tcp_frag_size

4. the mpirun command line: --mca <name> <value>
 
   Where <name> is the name of the parameter.  For example:

   shell$ mpirun --mca btl_tcp_frag_size 65536 -np 2 hello_world_mpi

These locations are checked in order.  For example, a parameter value
passed on the mpirun command line will override an environment
variable; an environment variable will override the system-wide
defaults.
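
For example, a sketch of a per-user defaults file as described in (2)
above (the parameter shown is only an illustration; the installed
<prefix>/etc/openmpi-mca-params.conf documents the exact file format):

shell$ cat $HOME/.openmpi/mca-params.conf
# Per-user MCA parameter defaults, one "name = value" pair per line
btl_tcp_frag_size = 65536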

===========================================================================

Common Questions
----------------

Many common questions about building and using Open MPI are answered
on the FAQ:

    http://www.open-mpi.org/faq/

===========================================================================

Got more questions?
-------------------

Found a bug?  Got a question?  Want to make a suggestion?  Want to
contribute to Open MPI?  Please let us know!

User-level questions and comments should generally be sent to the
user's mailing list (users@open-mpi.org).  Because of spam, only
subscribers are allowed to post to this list (ensure that you
subscribe with and post from *exactly* the same e-mail address --
joe@example.com is considered different than
joe@mycomputer.example.com!).  Visit this page to subscribe to the
user's list:

     http://www.open-mpi.org/mailman/listinfo.cgi/users

Developer-level bug reports, questions, and comments should generally
be sent to the developer's mailing list (devel@open-mpi.org).  Please
do not post the same question to both lists.  As with the user's list,
only subscribers are allowed to post to the developer's list.  Visit
the following web page to subscribe:

     http://www.open-mpi.org/mailman/listinfo.cgi/devel

When submitting bug reports to either list, be sure to include the
following information in your mail (please compress!):

- the stdout and stderr from Open MPI's configure
- the top-level config.log file
- the stdout and stderr from building Open MPI
- the output from "ompi_info --all" (if possible)

For Bourne-type shells, here's one way to capture this information:

shell$ ./configure ... 2>&1 | tee config.out
[...lots of configure output...]
shell$ make 2>&1 | tee make.out
[...lots of make output...]
shell$ mkdir ompi-output
shell$ cp config.out config.log make.out ompi-output
shell$ ompi_info --all 2>&1 | tee ompi-output/ompi-info.out
shell$ tar cvf ompi-output.tar ompi-output
[...output from tar...]
shell$ gzip ompi-output.tar

For C shell-type shells, the procedure is only slightly different:

shell% ./configure ... |& tee config.out
[...lots of configure output...]
shell% make |& tee make.out
[...lots of make output...]
shell% mkdir ompi-output
shell% cp config.out config.log make.out ompi-output
shell% ompi_info --all |& tee ompi-output/ompi-info.out
shell% tar cvf ompi-output.tar ompi-output
[...output from tar...]
shell% gzip ompi-output.tar

In either case, attach the resulting ompi-output.tar.gz file to your
mail.  This provides the Open MPI developers with a lot of information
about your installation and can greatly assist us in helping with your
problem.

Be sure to also include any other useful files (in the
ompi-output.tar.gz tarball), such as output showing specific errors.