
287 Commits

Author SHA1 Message Date
Joseph Schuchart
a15e5dc7f0 COLL TUNED: remove stray selection of linear algs for allreduce and allgather
These selections seem harmful in my measurements and don't seem to be
motivated by previous measurement data.

Signed-off-by: Joseph Schuchart <schuchart@icl.utk.edu>
2020-11-11 18:40:24 +01:00
Joseph Schuchart
22e289b742 coll/tuned: fix minor errors in comments
Signed-off-by: Joseph Schuchart <schuchart@icl.utk.edu>
2020-11-05 18:32:47 +01:00
Joseph Schuchart
04d198fc9f coll/tuned: don't select algorithms when it's clear they would fall back to linear
Bcast: scatter_allgather and scatter_allgather_ring expect N_elem >= N_procs
Allreduce: rabenseifner expects N_elem >= pow2 nearest to N_procs

In all cases, the implementations will fall back to a linear implementation,
which will most likely yield the worst performance (noted for 4B bcast on 128 ranks)

Signed-off-by: Joseph Schuchart <schuchart@icl.utk.edu>
2020-11-05 18:32:12 +01:00
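The guard described in the commit above boils down to a pair of size checks. Below is a minimal sketch of that logic, assuming "pow2 nearest to N_procs" means the largest power of two not exceeding the communicator size; the function names are illustrative and not the actual tuned-module code.

```
#include <stdio.h>

/* Largest power of two not exceeding n (one reading of "pow2 nearest
 * to N_procs" above). */
static int nearest_pow2_below(int n) {
    int p = 1;
    while (p * 2 <= n) p *= 2;
    return p;
}

/* Scatter/allgather-based bcast is only worth selecting when every
 * rank receives at least one element. */
static int bcast_scatter_allgather_viable(int count, int comm_size) {
    return count >= comm_size;                 /* N_elem >= N_procs */
}

/* Rabenseifner's allreduce is only worth selecting when the element
 * count covers the power-of-two subset of ranks. */
static int allreduce_rabenseifner_viable(int count, int comm_size) {
    return count >= nearest_pow2_below(comm_size);
}

int main(void) {
    /* Prints 0 (not viable) for the 4-byte (1-element) bcast on 128
     * ranks mentioned in the commit message. */
    printf("bcast viable:     %d\n", bcast_scatter_allgather_viable(1, 128));
    printf("allreduce viable: %d\n", allreduce_rabenseifner_viable(1, 128));
    return 0;
}
```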
Joseph Schuchart
7261255b8d coll/tuned: Mark global static algorithm as const
Signed-off-by: Joseph Schuchart <schuchart@icl.utk.edu>
2020-11-05 18:25:59 +01:00
Joseph Schuchart
06f605c1e1 coll/tuned: add hint about dynamic rules to mca parameters
The MCA parameters coll_tuned_*_algorithm are ignored unless coll_tuned_use_dynamic_rules is true, so mention that in the description.

Signed-off-by: Joseph Schuchart <schuchart@icl.utk.edu>
2020-11-05 18:20:24 +01:00
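For reference, a hedged sketch of the dependency this commit documents: Open MPI reads MCA parameters from OMPI_MCA_* environment variables during initialization, so both variables below must be set (normally on the mpirun command line or in the launch environment rather than in code) for a forced algorithm to take effect. The algorithm id used here is purely illustrative.

```
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    /* Both parameters must be set for a forced algorithm to be used:
     * coll_tuned_*_algorithm is ignored unless dynamic rules are on.
     * Setting them here, before MPI_Init, is just for illustration. */
    setenv("OMPI_MCA_coll_tuned_use_dynamic_rules", "1", 1);
    setenv("OMPI_MCA_coll_tuned_allreduce_algorithm", "4", 1);  /* illustrative id */

    MPI_Init(&argc, &argv);
    /* ... allreduce calls now follow the forced algorithm selection ... */
    MPI_Finalize();
    return 0;
}
```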
George Bosilca
16b49dc5b3 A complete overhaul of the HAN code.
Among many other things:
- Fix an imbalance bug in MPI_allgather
- Accept more human-readable configuration files. We can now specify
  the collective by name instead of by a magic number, and likewise the
  component we want to use.
- Add the capability to have optional arguments in the collective
  communication configuration file. Right now the capability exists
  for segment lengths, but it is not yet connected to the algorithms.
- Redo the initialization of all HAN collectives.

Clean up the fallback collective support.
- In case the module is unable to deliver the expected result, it falls back
  to executing the collective operation on another collective component. This
  change makes this fallback support simpler to use.
- Implement a fallback allowing a HAN module to remove itself as a
  potential active collective module, and instead fall back to the
  next module in line.
- Completely disable the HAN modules on error. From the moment an error is
  encountered they remove themselves from the communicator, and if some
  other module calls them they simply behave as a pass-through.

Communicator: provide ompi_comm_split_with_info to split and provide info at the same time
Add ompi_comm_coll_preference info key to control collective component selection

COLL HAN: use info keys instead of a component-level variable to communicate the topology level between abstraction layers
- The info value is a comma-separated list of entries, given in decreasing
  order of preference. This overrides the priority of the components,
  unless a component has disqualified itself.
  An entry prefixed with ^ starts the ignore-list. Any entry following this
  character will be ignored during the collective component selection for the
  communicator.
  Example: "sm,libnbc,^han,adapt" gives sm the highest preference, followed
  by libnbc. The components han and adapt are ignored in the selection process.
- Allocate a temporary buffer for all lower-level leaders (two segments long)
- Fix the handling of MPI_IN_PLACE for gather and scatter.

COLL HAN: Fix topology handling
 - HAN should not rely on node names to determine the ordering of ranks.
   Instead, use the node leaders as identifiers and short-cut if the
   node leaders agree that the ranks are consecutive. For now, also error
   out if the rank distribution is imbalanced.

Signed-off-by: Xi Luo <xluo12@vols.utk.edu>
Signed-off-by: Joseph Schuchart <schuchart@icl.utk.edu>
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
2020-10-25 18:13:16 -04:00
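A hedged sketch of how an application might use the ompi_comm_coll_preference info key introduced above. It assumes the key is honored when a communicator is created with an info object, e.g. via the standard MPI_Comm_dup_with_info; the value string follows the comma-separated, ^-prefixed format described in the commit message.

```
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Info info;
    MPI_Comm prefer_sm;

    MPI_Init(&argc, &argv);

    MPI_Info_create(&info);
    /* sm preferred first, then libnbc; han and adapt are ignored. */
    MPI_Info_set(info, "ompi_comm_coll_preference", "sm,libnbc,^han,adapt");

    /* Assumption: the key takes effect when the new communicator is
     * created with this info object. */
    MPI_Comm_dup_with_info(MPI_COMM_WORLD, info, &prefer_sm);

    /* ... collectives on prefer_sm use the requested preference ... */

    MPI_Comm_free(&prefer_sm);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}
```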
bsergentm
220b997a58 Coll/han Bull
* First import of Bull-specific modifications to HAN

* Cleaning, renaming, and compilation fixes. Changed all 'future' into 'han'.

* Import BULL-specific modifications in coll/tuned and coll/base

* Fixed compilation issues in Han

* Changed han_output to directly point to coll framework output.

* The verbosity MCA parameter was removed as a duplicate of the coll verbosity parameter

* Add fallback in han reduce when the op does not commute and the ppn is imbalanced

* Added fallback for han bcast when nodes do not have the same number of processes

* Add fallback in han scatter when the ppn is imbalanced

+ Fixed a missing scatter_fn pointer in the module interface

Signed-off-by: Brelle Emmanuel <emmanuel.brelle@atos.net>
Co-authored-by: a700850 <pierre.lemarinier@atos.net>
Co-authored-by: germainf <florent.germain@atos.net>
2020-10-09 14:17:46 -04:00
Jeff Squyres
560ebc5780
Merge pull request #7716 from bosilca/coll/adapt
ADAPT: Event-driven collective implementation
2020-09-01 11:29:53 -04:00
William Zhang
57b95bcb45 coll/tuned: Revert RSB and RS default algorithms
Reduce scatter block and reduce scatter algorithms were hitting
correctness issues for non-commutative strided tests. We will revert to
the original default algorithms for those two collectives (basic linear
and non-overlapping, respectively) in the non-commutative op case.

See #8010

Signed-off-by: William Zhang <wilzhang@amazon.com>
2020-08-25 08:44:24 -07:00
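For context, a minimal example of the kind of operation this revert concerns: a user-defined reduction registered with commute = 0. The 2x2 integer matrix multiplication below is associative but not commutative; it is purely illustrative and unrelated to the strided tests mentioned above.

```
#include <mpi.h>

/* 2x2 integer matrix multiply: associative but NOT commutative.
 * MPI user-op semantics: inoutvec[i] = invec[i] op inoutvec[i]. */
static void matmul_2x2(void *invec, void *inoutvec, int *len, MPI_Datatype *dt)
{
    int (*a)[4] = (int (*)[4])invec;
    int (*b)[4] = (int (*)[4])inoutvec;
    (void)dt;
    for (int i = 0; i < *len; i++) {
        int c0 = a[i][0]*b[i][0] + a[i][1]*b[i][2];
        int c1 = a[i][0]*b[i][1] + a[i][1]*b[i][3];
        int c2 = a[i][2]*b[i][0] + a[i][3]*b[i][2];
        int c3 = a[i][2]*b[i][1] + a[i][3]*b[i][3];
        b[i][0] = c0; b[i][1] = c1; b[i][2] = c2; b[i][3] = c3;
    }
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Datatype mat2x2;
    MPI_Type_contiguous(4, MPI_INT, &mat2x2);
    MPI_Type_commit(&mat2x2);

    MPI_Op op;
    /* commute = 0: this is the "non-commutative op case" referred to above. */
    MPI_Op_create(matmul_2x2, 0, &op);

    int rank, m[4], r[4];
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    m[0] = rank + 1; m[1] = 1; m[2] = 0; m[3] = 1;   /* rank-dependent matrix */

    MPI_Reduce(m, r, 1, mat2x2, op, 0, MPI_COMM_WORLD);

    MPI_Op_free(&op);
    MPI_Type_free(&mat2x2);
    MPI_Finalize();
    return 0;
}
```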
George Bosilca
d71264569e Fix the atomic management of the bcast and reduce freelist
Make the API consistent with other collective modules.
Add comments.
Other minor cleanups.

Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
2020-08-24 12:13:38 -07:00
William Zhang
ce40cfbaa5 coll/tuned: Change the default collective algorithm selection
The default algorithm selections were out of date and not performing
well. After gathering data from OMPI developers, new default algorithm
decisions were selected for:

    allgather
    allgatherv
    allreduce
    alltoall
    alltoallv
    barrier
    bcast
    gather
    reduce
    reduce_scatter_block
    reduce_scatter
    scatter

These results were gathered using the ompi-collectives-tuning package
and then averaged across the results gathered from multiple OMPI
developers on their clusters.

You can access the graphs and averaged data here:
https://drive.google.com/drive/folders/1MV5E9gN-5tootoWoh62aoXmN0jiWiqh3

Signed-off-by: William Zhang <wilzhang@amazon.com>
2020-07-28 10:41:48 -07:00
William Zhang
50823fe9a9 coll/tuned: Fix dynamic message size for gather and scatter
The gather and scatter operations did not use the correct message size
(only datatype size * comm size). This did not correctly reflect the
total message size and prevented fine-tuning within a given comm size.
This patch multiplies the value by the number of elements sent.

Signed-off-by: William Zhang <wilzhang@amazon.com>
2020-05-14 12:17:52 -07:00
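A small sketch of the corrected sizing, with illustrative names rather than the exact tuned-module code: the dynamic-rule lookup should be keyed on datatype size x communicator size x element count, not just the first two factors.

```
#include <stdio.h>
#include <mpi.h>

/* Illustrative computation of the total message size used when looking
 * up a dynamic rule for gather/scatter. Before the fix, the element
 * count was left out, so only datatype size * comm size was used. */
static size_t gather_message_size(MPI_Datatype dtype, int count, int comm_size)
{
    int type_size = 0;
    MPI_Type_size(dtype, &type_size);
    return (size_t)type_size * (size_t)comm_size * (size_t)count;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("message size: %zu bytes\n",
           gather_message_size(MPI_INT, 1024, size));
    MPI_Finalize();
    return 0;
}
```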
William Zhang
771f9c011d coll/tuned: Add NULL check to prevent segfault
Signed-off-by: William Zhang <wilzhang@amazon.com>

cr https://code.amazon.com/reviews/CR-23837553
2020-04-21 17:53:46 +00:00
William Zhang
50640402ab coll/tuned: Fix typos
Signed-off-by: William Zhang <wilzhang@amazon.com>
2020-04-21 17:39:37 +00:00
Austen Lauria
b65ec27307 Fix some compiler warnings.
Silence warnings about unused variables, incompatible pointer types,
uninitialized variables, and signed/unsigned comparisons.

Signed-off-by: Austen Lauria <awlauria@us.ibm.com>
2020-01-10 13:10:53 -05:00
Mikhail Brinskii
f2cbd4806e COLL/TUNED: Add linear scatter using isend for mlnx platform
Signed-off-by: Mikhail Brinskii <mikhailb@mellanox.com>
2019-11-07 11:04:39 +02:00
Mikhail Brinskii
65618f8db8 COLL/TUNED: Minor var names/comments fixes
Signed-off-by: Mikhail Brinskii <mikhailb@mellanox.com>
2019-07-24 10:23:38 +00:00
Mikhail Brinskii
404c480068 COLL/TUNED: Update alltoall selection rule for mlx
Use the linear-with-sync alltoall algorithm for certain message/comm size
ranges. This does not affect the default fixed decision unless HPCX (with
its custom parameters) is used or the corresponding MCA parameter is set.

Signed-off-by: Mikhail Brinskii <mikhailb@mellanox.com>
2019-07-13 23:27:40 +03:00
Aravind Gopalakrishnan
88d781056f coll/tuned: Fix MPI_IN_PLACE processing in tuned algorithms
PR #5450 addresses MPI_IN_PLACE processing for basic collective algorithms.
In conjunction with that, we need to check for MPI_IN_PLACE in tuned paths
as well before calling ompi_datatype_type_size(), as otherwise we segfault.

The MPI spec also stipulates that sendcount and sendtype be ignored for
Alltoall and Allgatherv operations, so the check is extended to these
algorithms as well.

Signed-off-by: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@intel.com>
2018-10-24 15:31:33 -07:00
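A hedged sketch of the guard this commit describes (the helper name is illustrative, not the actual tuned code): when the caller passes MPI_IN_PLACE, the send signature must not be inspected, and the receive count/type are used for sizing decisions instead.

```
#include <stdio.h>
#include <mpi.h>

/* Pick the datatype/count that may safely be used for sizing: the
 * receive signature when MPI_IN_PLACE was passed, the send signature
 * otherwise. */
static void pick_signature(const void *sbuf, MPI_Datatype stype, int scount,
                           MPI_Datatype rtype, int rcount,
                           MPI_Datatype *dtype_out, int *count_out)
{
    if (sbuf == MPI_IN_PLACE) {
        *dtype_out = rtype;          /* sendtype/sendcount are ignored */
        *count_out = rcount;
    } else {
        *dtype_out = stype;
        *count_out = scount;
    }
}

int main(void)
{
    MPI_Datatype dt;
    int count, dummy[4];
    pick_signature(MPI_IN_PLACE, MPI_INT, 0, MPI_INT, 4, &dt, &count);
    pick_signature(dummy, MPI_INT, 4, MPI_INT, 4, &dt, &count);
    printf("selected count: %d\n", count);
    return 0;
}
```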
Mikhail Kurnosov
ba83cc91eb coll/base: add MPI_Bcast based on a binomial tree scatter followed by a ring allgather
Implements MPI_Bcast using a binomial tree scatter followed by a ring allgather.

Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
2018-07-16 08:56:09 -06:00
Mikhail Kurnosov
c500739293 coll/base: Add MPI_Bcast based on a scatter followed by an allgather
Implements MPI_Bcast using a binomial tree scatter followed by
a recursive doubling allgather.

Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
2018-06-21 11:47:07 -06:00
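The two bcast commits above share the same decomposition: broadcast = scatter followed by allgather. A conceptual, user-level illustration follows; it assumes the element count divides evenly by the communicator size and uses plain MPI_Scatter/MPI_Allgather, whereas the actual implementations schedule the phases as a binomial tree followed by a ring or recursive-doubling exchange.

```
#include <stdlib.h>
#include <mpi.h>

/* Broadcast `count` ints from `root` by scattering count/P chunks and
 * reassembling them with an allgather (assumes count % P == 0). */
static void bcast_as_scatter_allgather(int *buf, int count, int root, MPI_Comm comm)
{
    int size;
    MPI_Comm_size(comm, &size);
    int chunk = count / size;
    int *tmp = malloc(chunk * sizeof(int));

    /* Phase 1: the root scatters its buffer, one chunk per rank. */
    MPI_Scatter(buf, chunk, MPI_INT, tmp, chunk, MPI_INT, root, comm);
    /* Phase 2: all ranks gather every chunk, reassembling the message. */
    MPI_Allgather(tmp, chunk, MPI_INT, buf, chunk, MPI_INT, comm);

    free(tmp);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int count = 4 * size;                      /* divisible by comm size */
    int *buf = malloc(count * sizeof(int));
    if (rank == 0)
        for (int i = 0; i < count; i++) buf[i] = i;

    bcast_as_scatter_allgather(buf, count, 0, MPI_COMM_WORLD);

    free(buf);
    MPI_Finalize();
    return 0;
}
```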
Mikhail Kurnosov
6547b58316 coll/base: add knomial tree algorithm for MPI_Bcast
Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
2018-06-19 13:01:26 -06:00
Mikhail Kurnosov
3adf96fdb8 coll/base: add butterfly algorithm for MPI_Reduce_scatter
Implements the butterfly algorithm for MPI_Reduce_scatter.
The algorithm can be used with both commutative and non-commutative operations, for power-of-two and non-power-of-two numbers of processes.

Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
2018-06-05 15:53:13 +07:00
Mikhail Kurnosov
28d5837dd9 coll: reduce_scatter_block: add butterfly algorithm
Implements the butterfly algorithm for MPI_Reduce_scatter_block.
The algorithm can be used with both commutative and non-commutative
operations, for power-of-two and non-power-of-two numbers of processes.

Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
2018-05-27 14:17:41 +07:00
bosilca
d13b9a2e25
Merge pull request #5156 from ggouaillardet/topic/reduce_scatter_block
coll: reduce_scatter_block: rename and MCA parameter description fix
2018-05-15 11:13:26 -04:00
Mikhail Kurnosov
82299a9c04 coll: reduce_scatter_block: add recursive halving algorithm
Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
2018-05-15 08:20:32 +07:00
Gilles Gouaillardet
ce7b3113f6 coll: reduce_scatter_block: rename and MCA parameter description fix
- rename ompi_coll_base_reduce_scatter_block_basic to the
   more self-descriptive ompi_coll_base_reduce_scatter_block_basic_linear
 - fix the description of the coll_tuned_reduce_scatter_block_algorithm
   MCA param

this fixes and documents previous open-mpi/ompi@0e8b35b615

MPI_Reduce_scatter_block used to be implemented by the coll/basic module only.
A new algo (recursive doubling) was recently introduced and can be used via the coll/tuned module,
but we never intended to make it the default algo.
In order to "restore" the previous default, the initial algo was moved from coll/basic to coll/base,
and is now used by default by coll/tuned.

Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
2018-05-09 08:54:48 +09:00
Gilles Gouaillardet
0e8b35b615 coll/tuned: use basic algo for reduce_scatter_block by default
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
2018-05-07 16:11:44 +09:00
Mikhail Kurnosov
8cf8553abd Resolve merge conflicts
Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
2018-05-03 07:28:32 +07:00
Mikhail Kurnosov
4cbcff7fcd coll/base: add recursive doubling algorithm for MPI_Reduce_scatter_block
Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
2018-04-23 11:02:31 +07:00
Mikhail Kurnosov
82a3a5bdb5 Fix dynamic decision for Scan and a bug in Allreduce
Signed-off-by: Mikhail Kurnosov <mkurnosov@gmail.com>
2018-04-06 11:03:17 +07:00
Gilles Gouaillardet
e85fa469f3 coll/tuned: add recursive doubling algo for [ex]scan
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
2018-04-04 14:56:23 +09:00
Gilles Gouaillardet
65fa0b59c3 coll/tuned: add Rabenseifner algo for [all]reduce
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
2018-04-04 13:25:41 +09:00
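For intuition, Rabenseifner's allreduce combines a reduce-scatter phase with an allgather phase. The user-level sketch below shows that structure (assuming the count divides evenly by the communicator size; the real algorithm handles the general case and the reduce-to-root variant).

```
#include <stdlib.h>
#include <mpi.h>

/* Allreduce expressed as reduce-scatter + allgather: each rank first
 * owns one fully reduced chunk, then all chunks are exchanged. */
static void allreduce_rabenseifner_style(const int *sendbuf, int *recvbuf,
                                         int count, MPI_Comm comm)
{
    int size;
    MPI_Comm_size(comm, &size);
    int chunk = count / size;                  /* assumes count % size == 0 */
    int *partial = malloc(chunk * sizeof(int));

    /* Phase 1: reduce-scatter; rank i ends up with reduced chunk i. */
    MPI_Reduce_scatter_block(sendbuf, partial, chunk, MPI_INT, MPI_SUM, comm);
    /* Phase 2: exchange the reduced chunks so everyone has the result. */
    MPI_Allgather(partial, chunk, MPI_INT, recvbuf, chunk, MPI_INT, comm);

    free(partial);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int count = 8 * size;
    int *in  = malloc(count * sizeof(int));
    int *out = malloc(count * sizeof(int));
    for (int i = 0; i < count; i++) in[i] = i;

    allreduce_rabenseifner_style(in, out, count, MPI_COMM_WORLD);

    free(in); free(out);
    MPI_Finalize();
    return 0;
}
```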
Joshua Hursey
e1d079544b mca: Dynamic components link against project lib
* Resolves #3705
 * Components should link against the project level library to better
   support `dlopen` with `RTLD_LOCAL`.
 * Extend the `mca_FRAMEWORK_COMPONENT_la_LIBADD` in the `Makefile.am`
   with the appropriate project level library:
```
MCA components in ompi/
       $(top_builddir)/ompi/lib@OMPI_LIBMPI_NAME@.la
MCA components in orte/
       $(top_builddir)/orte/lib@ORTE_LIB_PREFIX@open-rte.la
MCA components in opal/
       $(top_builddir)/opal/lib@OPAL_LIB_PREFIX@open-pal.la
MCA components in oshmem/
       $(top_builddir)/oshmem/liboshmem.la
```

Note: The changes in this commit were automated by the
`libadd_mca_comp_update.py` script in the accompanying commit. Some
components were not included in this change because they are only
built statically.

Signed-off-by: Joshua Hursey <jhursey@us.ibm.com>
2017-08-24 11:56:16 -04:00
Gilles Gouaillardet
5dfd4ab6ca coll/tuned: remove set-but-not-used variables
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
2017-04-04 13:18:11 +09:00
Gilles Gouaillardet
e879d2910a coll/tuned: make coll_tuned_gather_algorithms MCA settable
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
2017-02-02 11:00:38 +09:00
bosilca
c331e6794c Allow all tuned MCA parameters to be modified programmatically. (#2829)
Fix a comment in the MCA header.

Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
2017-01-31 21:47:36 -05:00
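With the tuned parameters writable at run time, one way to reach them is the standard MPI_T tools interface. A hedged sketch follows; whether a given variable is still writable at this point depends on its scope, and the variable name and value here are illustrative.

```
#include <mpi.h>

int main(int argc, char **argv)
{
    int provided, cidx, count, value = 1;
    MPI_T_cvar_handle handle;

    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_Init(&argc, &argv);

    /* Look up the control variable by name and write it, if present
     * and writable in this MPI build. */
    if (MPI_T_cvar_get_index("coll_tuned_use_dynamic_rules", &cidx) == MPI_SUCCESS) {
        MPI_T_cvar_handle_alloc(cidx, NULL, &handle, &count);
        MPI_T_cvar_write(handle, &value);      /* enable dynamic rules */
        MPI_T_cvar_handle_free(&handle);
    }

    /* ... */
    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}
```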
Ralph Castain
585540bcee Reduce the flood of warnings due to uninitialized variables, mismatched types, and unused things to a more bearable trickle
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
2016-12-14 16:33:50 -08:00
Ralph Castain
1e2019ce2a Revert "Update to sync with OMPI master and cleanup to build"
This reverts commit cb55c88a8b7817d5891ff06a447ea190b0e77479.
2016-11-22 15:03:20 -08:00
Ralph Castain
cb55c88a8b Update to sync with OMPI master and cleanup to build
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
2016-11-22 14:24:54 -08:00
George Bosilca
028e747470 Do not alter ompi_coll_tuned_use_dynamic_rules.
This is set globally as an MCA parameter and should never be
altered based on a single communicator setting.
2016-10-25 12:17:25 -04:00
George Bosilca
253eb80e26 Code cleaning of the tuned module. 2016-10-25 12:17:25 -04:00
George Bosilca
d577e12dd0 Fix comment. 2016-06-03 00:57:31 +09:00
Gilles Gouaillardet
cebde2a753 coll/tuned: add missing #include "opal/util/output.h"
Thanks to Marco Atzeri for contributing the original patch
2015-12-24 14:41:17 +09:00
George Bosilca
88492a1e12 Consistently use the request array for all modules (single array stored
in the base).
Correctly deal with persistent requests (they must always be freed when
they are stored in the request array associated with the communicator).
Always use MPI_STATUS_IGNORE for single-request waiting functions.
2015-10-08 12:00:41 -04:00
Gilles Gouaillardet
de8de65b07 coll/tuned: remove unused prototypes from coll_tuned.h 2015-10-06 09:07:48 +09:00
Gilles Gouaillardet
e01bac962f coll: do not cast away the const modifier when this is not necessary
update the coll framework and MPI C bindings
2015-09-09 09:18:57 +09:00
Ralph Castain
869041f770 Purge whitespace from the repo 2015-06-23 20:59:57 -07:00
Gilles Gouaillardet
9d56b85b55 initialize common symbols from ompi 2015-05-08 10:11:58 +09:00
Nathan Hjelm
df75d0382f ompi: use C99 subobject naming for component initialization
This commit helps future-proof ompi components by initializing each
component member by name.

Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
2015-04-18 10:29:58 -06:00
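For illustration, C99 designated (subobject) initializers name each member explicitly, so adding or reordering structure fields cannot silently misassign values. The structure below is made up for the example and is not an actual OMPI component type.

```
#include <stdio.h>

typedef struct {
    const char *name;
    int         version;
    int       (*open_fn)(void);
} example_component_t;

static int example_open(void) { return 0; }

/* Each member is initialized by name, independent of field order. */
static example_component_t example_component = {
    .name    = "example",
    .version = 1,
    .open_fn = example_open,
};

int main(void)
{
    printf("%s v%d\n", example_component.name, example_component.version);
    return example_component.open_fn();
}
```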