Commit graph

158 Commits

Author SHA1 Message Date
George Bosilca
c31cc5b270 Remove a warning about line being unused.
This commit was SVN r18472.
2008-05-21 20:46:22 +00:00
Rolf vandeVaart
74d0259480 Add new implementation of barrier. This shows better performance on some clusters.
However, no decision logic is changed by this commit, so default behavior has not changed.  This
is only selectable by runtime parameters.

This commit was SVN r18464.
2008-05-20 17:37:41 +00:00
Jeff Squyres
7154776465 Removed unused variable / compiler warning.
This commit was SVN r18454.
2008-05-19 13:41:45 +00:00
Jeff Squyres
e7ecd56bd2 This commit represents a bunch of work on a Mercurial side branch. As
such, the commit message back to the master SVN repository is fairly
long.

= ORTE Job-Level Output Messages =

Add two new interfaces that should be used for all new code throughout
the ORTE and OMPI layers (we have already done the search-and-replace
on the existing ORTE / OMPI layers):

 * orte_output(): (and corresponding friends ORTE_OUTPUT,
   orte_output_verbose, etc.)  This function sends the output directly
   to the HNP for processing as part of a job-specific output
   channel.  It supports all the same outputs as opal_output()
   (syslog, file, stdout, stderr), but for stdout/stderr, the output
   is sent to the HNP for processing and output.  More on this below.
 * orte_show_help(): This function is a drop-in-replacement for
   opal_show_help(), with two differences in functionality:
   1. the rendered text help message output is sent to the HNP for
      display (rather than outputting directly into the process' stderr
      stream)
   1. the HNP detects duplicate help messages and does not display them
      (so that you don't see the same error message N times, once from
      each of your N MPI processes); instead, it counts "new" instances
      of the help message and displays a message every ~5 seconds when
      there are new ones ("I got X new copies of the help message...")

opal_show_help and opal_output still exist, but they only output in
the current process.  The intent for the new orte_* functions is that
they can apply job-level intelligence to the output.  As such, we
recommend that all new ORTE and OMPI code use the new orte_*
functions, not the opal_* functions.

=== New code ===

For ORTE and OMPI programmers, here's what you need to do differently
in new code:

 * Do not include opal/util/show_help.h or opal/util/output.h.
   Instead, include orte/util/output.h (this one header file has
   declarations for both the orte_output() series of functions and
   orte_show_help()).
 * Effectively s/opal_output/orte_output/gi throughout your code.
   Note that orte_output_open() takes a slightly different argument
   list (as a way to pass data to the filtering stream -- see below),
   so if you explicitly call opal_output_open(), you'll need to
   slightly adapt to the new signature of orte_output_open().
 * Literally s/opal_show_help/orte_show_help/.  The function signature
   is identical (see the sketch below).
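
For example, here is a minimal sketch of new-style code (hedged: the include path, the stream-0 behavior, and the drop-in orte_show_help() signature follow the description above; the help file name and topic are made-up placeholders):

{{{
#include "orte/util/output.h"

static void warn_about_value(int value)
{
    /* goes through the job-level output channel; stream 0 behaves
       like opal_output()'s stream 0 */
    orte_output(0, "got an unexpected value: %d", value);

    /* drop-in replacement for opal_show_help(); the rendered message
       is sent to the HNP, which aggregates duplicate help messages */
    orte_show_help("help-my-component.txt",   /* placeholder file name */
                   "unexpected-value",        /* placeholder topic */
                   true);
}
}}}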

=== Notes ===

 * orte_output'ing to stream 0 will behave similarly to what
   opal_output'ing did, so leaving a hard-coded "0" as the first
   argument is safe.
 * For systems that do not use ORTE's RML or the HNP, the effect of
   orte_output_* and orte_show_help will be identical to their opal
   counterparts (the additional information passed to
   orte_output_open() will be lost!).  Indeed, the orte_* functions
   simply become trivial wrappers to their opal_* counterparts.  Note
   that we have not tested this; the code is simple but it is quite
   possible that we mucked something up.

= Filter Framework =

Messages sent via the new orte_* functions described above and
messages output via the IOF on the HNP will now optionally be passed
through a new "filter" framework before being output to
stdout/stderr.  The "filter" OPAL MCA framework is intended to allow
preprocessing to messages before they are sent to their final
destinations.  The first component that was written in the filter
framework was to create an XML stream, segregating all the messages
into different XML tags, etc.  This will allow 3rd party tools to read
the stdout/stderr from the HNP and be able to know exactly what each
text message is (e.g., a help message, another OMPI infrastructure
message, stdout from the user process, stderr from the user process,
etc.).

Filtering is not active by default.  Filter components must be
specifically requested, such as:

{{{
$ mpirun --mca filter xml ...
}}}

There can only be one filter component active.

= New MCA Parameters =

The new functionality described above introduces two new MCA
parameters:

 * '''orte_base_help_aggregate''': Defaults to 1 (true), meaning that
   help messages will be aggregated, as described above.  If set to 0,
   all help messages will be displayed, even if they are duplicates
   (i.e., the original behavior); see the example below.
 * '''orte_base_show_output_recursions''': An MCA parameter to help
   debug one of the known issues, described below.  It is likely that
   this MCA parameter will disappear before v1.3 final.
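
For example, to turn aggregation off for a single run (same mpirun syntax as the filter example above):

{{{
$ mpirun --mca orte_base_help_aggregate 0 ...
}}}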

= Known Issues =

 * The XML filter component is not complete.  The current output from
   this component is preliminary and not real XML.  A bit more work
   needs to be done to have configure.m4 search for an appropriate XML
   library, link it in, and use it at run time.
 * There are possible recursion loops in the orte_output() and
   orte_show_help() functions -- e.g., if RML send calls orte_output()
   or orte_show_help().  We have some ideas how to fix these, but
   figured that it was ok to commit before feature freeze with known
   issues.  The code currently contains sub-optimal workarounds so
   that this will not be a problem, but it would be good to actually
   solve the problem rather than have hackish workarounds before v1.3 final.

This commit was SVN r18434.
2008-05-13 20:00:55 +00:00
Rolf vandeVaart
0e32dd1022 Add MPI_Alltoallv to tuned collectives and add a pairwise implementation of MPI_Alltoallv. However, do not change the default behavior for now. The only way to use new pairwise implementation is via mca parameters.
This commit was SVN r18394.
2008-05-07 02:31:24 +00:00
Jeff Squyres
213b5d5c6e Per long threads on the mailing list and much confused discussion
about linkers, have all OPAL, ORTE, and OMPI components '''not''' link
against the OPAL, ORTE, or OMPI libraries.

See http://www.open-mpi.org/community/lists/users/2007/10/4220.php for
details (or https://svn.open-mpi.org/trac/ompi/wiki/Linkers for a
better-formatted version of the same info).

This commit was SVN r16968.
2007-12-15 13:32:02 +00:00
George Bosilca
1e7a791349 Remove some of the problems identified by Coverty.
This commit was SVN r16112.
2007-09-12 20:13:26 +00:00
Shiqing Fan
a0660f4deb - Just some type casts.
This commit was SVN r16100.
2007-09-12 15:29:58 +00:00
Brian Barrett
af4e86c25f Update collectives selection logic to allow for multiple components to be
used at once (up to one unique collective module per collective function).
Matches r15795:15921 of the tmp/bwb-coll-select branch

This commit was SVN r15924.

The following SVN revisions from the original message are invalid or
inconsistent and therefore were not cross-referenced:
  r15795
  r15921
2007-08-19 03:37:49 +00:00
Jelena Pjesivac-Grbovic
9bd9c92dbd Making sure that the decision function for scatter and gather correctly
computes everything for MPI_IN_PLACE case.

This commit was SVN r15841.
2007-08-13 17:35:50 +00:00
Jelena Pjesivac-Grbovic
b558e820cb removing compiler warning
This commit was SVN r15803.
2007-08-08 15:22:01 +00:00
Jelena Pjesivac-Grbovic
daa10b277e modifying scatter decision function to use binomial algorithm for
small message sizes.

This commit was SVN r15798.
2007-08-07 22:16:13 +00:00
Jelena Pjesivac-Grbovic
1b66a52c50 Modifying type of binomial tree used for binomial reduce:
switching:
       0                         0
     / \ \                     / \ \
    1   \ \         -->       4   \ \
   /     \ \                 /     \ \
  3       2 \               3       2 \
             4                         1
(duh).  The first form is the bmtree suitable for bcast, but the latter is better for reduce.
Updating default decision function accordingly.

This commit was SVN r15422.
2007-07-13 21:07:51 +00:00
Jelena Pjesivac-Grbovic
d677db9b5f cleaning up alltoall implementation:
- removing MPI_* calls from bruck implementation
- simplifying 2 process case
- indentation, etc.

This commit was SVN r15301.
2007-07-07 01:06:19 +00:00
Jelena Pjesivac-Grbovic
483222085e Fixing compiler warnings.
In gather, the ptmp += incr is irrelevant, since ptmp is set within the loop.

This commit was SVN r15293.
2007-07-05 20:40:50 +00:00
Jelena Pjesivac-Grbovic
3b0a52a104 adding tuned allgatherv implementation using bruck, ring, and neighbor-exchange algorithms.
The implementations passed Intel and IMB tests up to 40 processes.

This commit was SVN r15280.
2007-07-03 23:33:12 +00:00
Jelena Pjesivac-Grbovic
d55b415bb0 fixing typo
This commit was SVN r15240.
2007-06-28 20:56:55 +00:00
Jelena Pjesivac-Grbovic
8fc8b44d11 Modifying reduce decision function for large, single element reduces (again).
Binary algorithm without segmentation tends to outperform binomial algorithm 
in this case.

This commit was SVN r15226.
2007-06-27 22:01:56 +00:00
Jelena Pjesivac-Grbovic
0ecef1750d Modifying the default reduce decision function to use binomial algorithm
for single-element reduce (segmented algorithms make no sense in this case
and can cause performance degradation). 

This commit was SVN r15209.
2007-06-26 20:14:03 +00:00
Jelena Pjesivac-Grbovic
567b40b9a9 Modifying the default broadcast decision function to use binomial algorithm
for single-element broadcasts (segmented algorithms make no sense in this case
and can cause performance degradation).

This commit was SVN r15208.
2007-06-26 20:08:31 +00:00
Jelena Pjesivac-Grbovic
3740640711 Modifying MPI_Gather in tuned module:
- adding linear algorithm with synchronization for gather.
  This algorithm prevents congestion at the root process, but introduces
  synchronization (it serializes non-root processes, but allows messages
  to arrive from two processes at the same time); see the sketch below.
  It performed better than binomial and linear algorithms for large messages,
  and intermediate and large communicator sizes.
- Updating MPI_Gather decision function to reflect performance results
  from MX.  I will perform more measurements though - so this one can 
  change.
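
A much-simplified sketch of the synchronized linear gather idea (illustrative only, assuming contiguous MPI_INT data; the real tuned code is more general): non-root ranks wait for a zero-byte go-ahead from the root before sending, and the root keeps at most two data receives in flight at a time.

{{{
#include <mpi.h>
#include <string.h>

static int gather_linear_sync_sketch(const int *sendbuf, int count,
                                     int *recvbuf, int root, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    if (rank != root) {
        /* wait for the root's go-ahead, then send the data */
        MPI_Recv(NULL, 0, MPI_BYTE, root, 0, comm, MPI_STATUS_IGNORE);
        return MPI_Send(sendbuf, count, MPI_INT, root, 1, comm);
    }

    /* root: copy own contribution, then pull from the other ranks,
       keeping at most two receives outstanding at any time */
    memcpy(recvbuf + (size_t)root * count, sendbuf,
           (size_t)count * sizeof(int));

    MPI_Request reqs[2];
    int inflight = 0;
    for (int peer = 0; peer < size; ++peer) {
        if (peer == root) continue;
        if (inflight == 2) {
            MPI_Wait(&reqs[0], MPI_STATUS_IGNORE);   /* oldest receive */
            reqs[0] = reqs[1];
            inflight = 1;
        }
        MPI_Irecv(recvbuf + (size_t)peer * count, count, MPI_INT,
                  peer, 1, comm, &reqs[inflight]);
        MPI_Send(NULL, 0, MPI_BYTE, peer, 0, comm);  /* go-ahead */
        ++inflight;
    }
    while (inflight > 0)
        MPI_Wait(&reqs[--inflight], MPI_STATUS_IGNORE);
    return MPI_SUCCESS;
}
}}}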

This commit was SVN r15165.
2007-06-21 20:00:36 +00:00
Jelena Pjesivac-Grbovic
625c6739ab Removing warning about unused variable
This commit was SVN r14579.
2007-05-03 20:26:41 +00:00
Jelena Pjesivac-Grbovic
9eff74ad4d Modifying generalized reduce "synchronized" behavior:
- Removing the "small" message size limit because it really does not relate to the eager size
across the board.
Now, the leaf nodes in generalized reduce will use blocking send (DEFAULT/ORIGINAL BEHAVIOR) 
either when the maximum number of outstanding requests is 0 or 
when the total number of segments is less than the maximum number of outstanding requests.
Otherwise, they will send messages using non-blocking synchronous send operations.

This commit was SVN r14572.
2007-05-02 21:42:45 +00:00
George Bosilca
69642a9cd4 Remove 2 warnings about ptrdiff_t to unsigned long implicit conversion.
This commit was SVN r14565.
2007-05-01 19:47:33 +00:00
Jelena Pjesivac-Grbovic
3eac49aa59 Adding flow control for leaf nodes in generalized reduce structure.
This "feature" is disabled by default and it should not affect the current performance.

When the message size is large and the segment size is smaller than the eager size of a particular interface,
the leaf nodes in the generalized reduce function can flood parent nodes by sending all segments without
any synchronization.  This can cause the parent to have a HIGH number of unexpected messages (think a 16MB
message with 1KB segments, for example).  In the case of the binomial algorithm the root node always has at
least one child which is a leaf, so this can potentially affect the root's performance significantly
[especially in large communicators where the root may have quite a few children (binomial tree for example)].
When the segment size is bigger than the eager size, the rendezvous protocol ensures that this does
not happen, so flow control is not necessary there.
Originally, the problem was exposed in "infinite" bucket allocator clean up time for "small" segment sizes
(which may explain some "deadlocks" on Thunderbird tests).

To prevent this, we allow the user to specify the mca parameter "--mca coll_tuned_reduce_algorithm_max_requests NUM";
this limits the number of outstanding messages from a leaf node in generalized reduce to its parent to NUM.
Messages are sent as non-blocking synchronous sends, so synchronization happens at "wait" time (see the sketch below).
The synchronization actually improved performance of the pipeline and binomial algorithms for large message sizes
with 1KB segments over MX, but I need to test it some more to make sure it is consistent.

Since there is no easy way to find out what "the eager" size is for a particular btl, I set the limit to 4000B.
If message/individual segment size is greater than 4000B - we will not use this feature.  This variable may
or may not be exposed as mca parameter later...

I did not have any problems running it and both "default" and "synchronous" tests passed Intel Reduce* tests 
up to 80 processes (over MX).
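
A simplified sketch of the leaf-side behavior described above (illustrative only, assuming contiguous MPI_INT segments; max_requests plays the role of coll_tuned_reduce_algorithm_max_requests):

{{{
#include <mpi.h>
#include <stdlib.h>

static int leaf_send_segments(const int *buf, int num_segments, int seg_count,
                              int parent, int tag, MPI_Comm comm,
                              int max_requests)
{
    /* default / original behavior: plain blocking sends */
    if (0 == max_requests || num_segments < max_requests) {
        for (int s = 0; s < num_segments; ++s)
            MPI_Send(buf + (size_t)s * seg_count, seg_count, MPI_INT,
                     parent, tag, comm);
        return MPI_SUCCESS;
    }

    /* flow control: at most max_requests synchronous sends in flight,
       so the parent never piles up unexpected segments */
    MPI_Request *reqs = malloc(sizeof(MPI_Request) * max_requests);
    for (int s = 0; s < num_segments; ++s) {
        if (s >= max_requests)           /* throttle on the oldest send */
            MPI_Wait(&reqs[s % max_requests], MPI_STATUS_IGNORE);
        /* Issend completes only after the parent has started receiving */
        MPI_Issend(buf + (size_t)s * seg_count, seg_count, MPI_INT,
                   parent, tag, comm, &reqs[s % max_requests]);
    }
    MPI_Waitall(max_requests, reqs, MPI_STATUSES_IGNORE);
    free(reqs);
    return MPI_SUCCESS;
}
}}}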

This commit was SVN r14518.
2007-04-25 20:39:53 +00:00
Jelena Pjesivac-Grbovic
53cbec7a09 Make coll/tuned dynamic rules more verbose (when prompted with --mca coll_base_verbose 1)
This commit was SVN r14469.
2007-04-23 16:34:52 +00:00
Jeff Squyres
51f286d737 Just like r14289 on the ORTE trunk:
Per discussions with Brian and Ralph, make a slight correction in
where components are installed. Use $pkglibdir, not $libdir/openmpi,
so that when compiled in the orte trunk, components are installed to
the right directory (because the component search path is checking
$pkglibdir).

This commit was SVN r14345.

The following SVN revisions from the original message are invalid or
inconsistent and therefore were not cross-referenced:
  r14289
2007-04-12 11:19:42 +00:00
George Bosilca
120cf76ad8 Remove some warnings.
This commit was SVN r14196.
2007-04-02 19:11:06 +00:00
George Bosilca
cc65814969 And set the message size before the first use too.
This commit was SVN r14159.
2007-03-28 18:01:13 +00:00
George Bosilca
b540545fa7 Set the communicator size before using it.
This commit was SVN r14158.
2007-03-28 17:59:21 +00:00
Jelena Pjesivac-Grbovic
d6402b6898 Adding in-order binary tree algorithm for non-commutative reduce operations.
I tested the algorithm with Intel and IBM tests and it passed again - so it should work.

This commit was SVN r14068.
2007-03-19 21:03:57 +00:00
Josh Hursey
dadca7da88 Merging in the jjhursey-ft-cr-stable branch (r13912 : HEAD).
This merge adds Checkpoint/Restart support to Open MPI. The initial
frameworks and components support a LAM/MPI-like implementation.

This commit follows the risk assessment presented to the Open MPI core
development group on Feb. 22, 2007.

This commit closes trac:158

More details to follow.

This commit was SVN r14051.

The following SVN revisions from the original message are invalid or
inconsistent and therefore were not cross-referenced:
  r13912

The following Trac tickets were found above:
  Ticket 158 --> https://svn.open-mpi.org/trac/ompi/ticket/158
2007-03-16 23:11:45 +00:00
Rolf vandeVaart
42168575fd Fix for the special case where np=2 and the sendbuf is set to MPI_IN_PLACE.
In that case, sendcount and sendtype are not valid and we need to use
recvcount and recvtype.
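
A hedged sketch of the kind of check involved (illustrative only; the function and variable names are placeholders, not the actual tuned-module code):

{{{
#include <mpi.h>

/* When the caller passed MPI_IN_PLACE as sendbuf, sendcount/sendtype
 * must not be used; the receive arguments describe the data instead. */
static void pick_effective_send_args(const void *sendbuf,
                                     int sendcount, MPI_Datatype sendtype,
                                     int recvcount, MPI_Datatype recvtype,
                                     int *count_out, MPI_Datatype *type_out)
{
    if (MPI_IN_PLACE == sendbuf) {
        *count_out = recvcount;
        *type_out  = recvtype;
    } else {
        *count_out = sendcount;
        *type_out  = sendtype;
    }
}
}}}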

This commit fixes trac:943.  Reviewed by Jelena Pjesivac-Grbovic.

This commit was SVN r14022.

The following Trac tickets were found above:
  Ticket 943 --> https://svn.open-mpi.org/trac/ompi/ticket/943
2007-03-13 19:01:20 +00:00
Jelena Pjesivac-Grbovic
9780a000ba Cleanup of generic reduce function and possible (low probability) bug fix.
- fixing line lengths and some of the comments
- possible bug fix (but I do not think we exposed it in any tests so far):
  temporary buffers were allocated as multiples of extent instead of
  true_extent + (count - 1) * extent (see the sketch below).
Everything is still passing Intel tests over tcp and btl mx up to 64 nodes.
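
A small sketch of the corrected allocation, using the standard MPI extent queries (illustrative; not the exact code from the generic reduce function):

{{{
#include <mpi.h>
#include <stdlib.h>

/* Allocate scratch space able to hold 'count' elements of 'dtype'.
 * The safe size is true_extent + (count - 1) * extent; allocating
 * count * extent can be too small when the true extent and lower
 * bound differ from the (possibly resized) extent. */
static void *alloc_tmp_buffer(int count, MPI_Datatype dtype, char **to_free)
{
    MPI_Aint lb, extent, true_lb, true_extent;

    MPI_Type_get_extent(dtype, &lb, &extent);
    MPI_Type_get_true_extent(dtype, &true_lb, &true_extent);

    *to_free = malloc(true_extent + (count - 1) * extent);
    /* shift by -true_lb so that element data starting at true_lb
       falls inside the allocated region */
    return (void *)(*to_free - true_lb);
}
}}}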

This commit was SVN r13956.
2007-03-08 00:54:52 +00:00
Jelena Pjesivac-Grbovic
57cbafafd5 Clean up of generic broadcast function: removing unnecessary statements and improving comments.
This commit was SVN r13955.
2007-03-07 21:59:53 +00:00
Jelena Pjesivac-Grbovic
0c07654c30 Updating reduce_scatter decision function based on MX results up to 64 nodes and both 1ppn and 2ppn
configurations.

This commit was SVN r13945.
2007-03-07 00:38:33 +00:00
Jelena Pjesivac-Grbovic
e5ed167a6e Adding tuned version of reduce_scatter implementation.
Currently 3 algorithms are available:
- non-overlapping: reduce + scatterv (works for non-commutative operations); see the sketch below
- recursive halving algorithm (copied from basic module)
- ring algorithm  (similar to allreduce ring, for large messages)
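
A minimal sketch of the first (non-overlapping) variant, composed from standard MPI calls on MPI_INT data (illustrative only; the actual module is datatype- and op-general):

{{{
#include <mpi.h>
#include <stdlib.h>

/* reduce_scatter as reduce-then-scatterv: rank 0 reduces the full
 * vector, then scatters the pieces according to recvcounts.  Safe for
 * non-commutative ops because MPI_Reduce combines in rank order. */
static int reduce_scatter_nonoverlapping(const int *sendbuf, int *recvbuf,
                                         const int *recvcounts, MPI_Op op,
                                         MPI_Comm comm)
{
    int rank, size, total = 0, *tmp = NULL, *displs = NULL;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    for (int i = 0; i < size; ++i) total += recvcounts[i];

    if (0 == rank) {
        tmp    = malloc(sizeof(int) * total);
        displs = malloc(sizeof(int) * size);
        displs[0] = 0;
        for (int i = 1; i < size; ++i)
            displs[i] = displs[i - 1] + recvcounts[i - 1];
    }

    /* step 1: full reduction to rank 0 */
    MPI_Reduce(sendbuf, tmp, total, MPI_INT, op, 0, comm);
    /* step 2: scatter each rank's piece of the reduced vector */
    MPI_Scatterv(tmp, recvcounts, displs, MPI_INT,
                 recvbuf, recvcounts[rank], MPI_INT, 0, comm);

    if (0 == rank) { free(tmp); free(displs); }
    return MPI_SUCCESS;
}
}}}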

This commit was SVN r13929.
2007-03-05 20:40:39 +00:00
Li-Ta Lo
196e2a86bb added binomial tree based scatter; passed IBM and Intel tests
This commit was SVN r13906.
2007-03-02 23:19:02 +00:00
Li-Ta Lo
11c94cbe76 eliminated the use of MPI_Get_count
This commit was SVN r13904.
2007-03-02 22:57:50 +00:00
Li-Ta Lo
3765e19d15 added ASCII graph for the topologies
This commit was SVN r13892.
2007-03-02 17:17:14 +00:00
Li-Ta Lo
bd75f2f162 change ALLGATHER to GATHER
This commit was SVN r13891.
2007-03-02 17:02:29 +00:00
Li-Ta Lo
c5d8c221b0 added binomial tree based Gather algorithm; passed IBM and Intel tests
This commit was SVN r13835.
2007-02-28 01:11:01 +00:00
Jelena Pjesivac-Grbovic
627533fe4a Adding segmented ring algorithm for Allreduce for commutative operations.
The algorithm allows the user to specify the segment size to be used for computation/communication overlap.
The additional memory requirement for the algorithm is 2 x segment size.
It performed well for (really) large message sizes over MX and it passed Intel Allreduce_c and Allreduce_loc_c tests.

This commit was SVN r13832.
2007-02-27 20:32:30 +00:00
George Bosilca
bec20422ee Remove the warnings about printf data-type mismatch.
This commit was SVN r13804.
2007-02-26 22:20:35 +00:00
Bill D'Amico
db1c2a58c4 Removed cruft - unused variables causing warnings during OMPI build.
This commit was SVN r13772.
2007-02-23 18:55:41 +00:00
Li-Ta Lo
049921a5ec the temporary buffer is not needed for the MPI_IN_PLACE cases if the underlying Gather is implemented correctly
This commit was SVN r13740.
2007-02-21 20:39:56 +00:00
Jelena Pjesivac-Grbovic
36156f39c2 Modification to allreduce ring algorithm:
- the block sizes are computed in a more uniform way (see the sketch below).
  The first k blocks may be 1 element larger than the remaining blocks.
The algorithm passed Intel Allreduce_c and Allreduce_loc_c tests, and 
IMB-3.2 Allreduce, over TCP and both btl and mtl MX (up to 128 processes).
The algorithm still only supports commutative operations.
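
A small sketch of the block-size computation described above (illustrative; variable names are placeholders):

{{{
/* Split 'count' elements into 'size' blocks as uniformly as possible:
 * the first k = count % size blocks get one extra element. */
static void ring_block(int count, int size, int block,
                       int *block_count, int *block_offset)
{
    int base = count / size;
    int k    = count % size;                 /* number of "large" blocks */

    *block_count  = base + (block < k ? 1 : 0);
    *block_offset = block * base + (block < k ? block : k);
}
}}}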

This commit was SVN r13738.
2007-02-21 19:30:08 +00:00
Jelena Pjesivac-Grbovic
b608887466 Adding variant of linear alltoall algorithm where the number of
outstanding requests can be limited using mca parameters.
The implementation passed Intel, IMB-3.2, and mpi_test_suite tests over
TCP and MX up to 128 processes (64 nodes), on both 32-bit and 64-bit machines.
It is not activated by default, but it should be useful for really large
communicator sizes.

This commit was SVN r13720.
2007-02-20 04:25:00 +00:00
Jelena Pjesivac-Grbovic
d2d02642ca Removing compilation warnings about the output format.
This commit was SVN r13693.
2007-02-16 23:32:47 +00:00
Jelena Pjesivac-Grbovic
e532b928af Adding segmented binary reduce algorithm which works with non-commutative operations.
Implementation passed Intel MPI_Reduce_c, MPI_Reduce_loc_c, and MPI_Reduce_user_c tests
over TCP, BTL MX, and MTL MX, as well as mpi_test_suite Reduce tests (up to 64 nodes).

The algorithm is still not activated by the decision function (it will be in the near future).

This commit was SVN r13657.
2007-02-14 22:38:38 +00:00