/* -*- Mode: C; c-basic-offset:4 ; indent-tabs-mode:nil -*- */
/*
 * Copyright (c) 2004-2007 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation.  All rights reserved.
 * Copyright (c) 2004-2008 The University of Tennessee and The University
 *                         of Tennessee Research Foundation.  All rights
 *                         reserved.
 * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart.  All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * Copyright (c) 2006-2014 Los Alamos National Security, LLC.  All rights
 *                         reserved.
 * Copyright (c) 2010      Oracle and/or its affiliates.  All rights reserved.
 * Copyright (c) 2012-2013 NVIDIA Corporation.  All rights reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

/**
 * @file
 *
 * Byte Transfer Layer (BTL)
 *
 *
 * BTL Initialization:
 *
 * During library initialization, all available BTL components are
 * loaded and opened via their mca_base_open_component_fn_t
 * function. The BTL open function should register any mca parameters
 * used to tune/adjust the behaviour of the BTL (mca_base_var_register(),
 * mca_base_component_var_register()). Note that the open function may fail
 * if the resources (e.g. shared libraries, etc.) required by the network
 * transport are not available.
 *
 * The mca_btl_base_component_init_fn_t() is then called for each of the
 * components that are successfully opened. The component init function may
 * return either:
 *
 * (1) a NULL list of BTL modules if the transport is not available,
 * (2) a list containing one or more BTL modules, where each BTL provides
 *     a layer of abstraction over one or more physical devices (e.g. NICs),
 *
 * During module initialization, the module should post any addressing
 * information required by its peers. An example would be the TCP
 * listen port opened by the TCP module for incoming connection
 * requests. This information is published to peers via the
 * modex_send() interface. Note that peer information is not
 * guaranteed to be available via modex_recv() during the
 * module's init function. However, it will be available during
 * BTL selection (mca_btl_base_add_proc_fn_t()).
 *
 * BTL Selection:
 *
 * The upper layer builds an ordered list of the available BTL modules sorted
 * by their exclusivity ranking. This is a relative ranking that is used
 * to determine the set of BTLs that may be used to reach a given destination.
 * During startup the BTL modules are queried via their
 * mca_btl_base_add_proc_fn_t() to determine if they are able to reach
 * a given destination. The BTL module with the highest ranking that
 * returns success is selected. Subsequent BTL modules are selected only
 * if they have the same exclusivity ranking.
 *
 * An example of how this might be used:
 *
 * BTL         Exclusivity   Comments
 * --------    -----------   ------------------
 * LO          100           Selected exclusively for local process
 * SM          50            Selected exclusively for other processes on host
 * IB          0             Selected based on network reachability
 * IB          0             Selected based on network reachability
 * TCP         0             Selected based on network reachability
 * TCP         0             Selected based on network reachability
 *
 * When mca_btl_base_add_proc_fn_t() is called on a BTL module, the BTL
 * will populate an OUT variable with mca_btl_base_endpoint_t pointers.
 * Each pointer is treated as an opaque handle by the upper layer and is
 * returned to the BTL on subsequent data transfer calls to the
 * corresponding destination process. The actual contents of the
 * data structure are defined on a per BTL basis, and may be used to
 * cache addressing or connection information, such as a TCP socket
 * or IB queue pair.
 *
 * Progress:
 *
 * By default, the library provides for polling based progress of outstanding
 * requests. The BTL component exports an interface function (btl_progress)
 * that is called in a polling mode by the PML during calls into the MPI
 * library. Note that the btl_progress() function is called on the BTL component
 * rather than each BTL module. This implies that the BTL author is responsible
 * for iterating over the pending operations in each of the BTL modules associated
 * with the component.
 *
 * On platforms where threading support is provided, the library provides the
 * option of building with asynchronous threaded progress. In this case, the BTL
 * author is responsible for providing a thread to progress pending operations.
 * A thread is associated with the BTL component/module such that transport specific
 * functionality/APIs may be used to block the thread until a pending operation
 * completes. This thread MUST NOT poll for completion as this would oversubscribe
 * the CPU.
 *
 * Note that in the threaded case the PML may choose to use a hybrid approach,
 * such that polling is implemented from the user thread for a fixed number of
 * cycles before relying on the background thread(s) to complete requests. If
 * possible the BTL should support the use of both modes concurrently.
 *
 */

#ifndef OPAL_MCA_BTL_H
#define OPAL_MCA_BTL_H

#include "opal_config.h"
#include "opal/types.h"
#include "opal/prefetch.h" /* For OPAL_LIKELY */
#include "opal/class/opal_bitmap.h"
#include "opal/datatype/opal_convertor.h"
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 04:47:28 +04:00
|
|
|
#include "opal/mca/mca.h"
|
|
|
|
#include "opal/mca/mpool/mpool.h"
|
2007-03-17 02:11:45 +03:00
|
|
|
#include "opal/mca/crs/crs.h"
|
|
|
|
#include "opal/mca/crs/base/base.h"
|
|
|
|
|
2008-05-30 07:58:39 +04:00
|
|
|
BEGIN_C_DECLS

/*
 * BTL types
 */

struct mca_btl_base_module_t;
struct mca_btl_base_endpoint_t;
struct mca_btl_base_descriptor_t;
struct mca_mpool_base_resources_t;
struct opal_proc_t;

/**
 * Opaque registration handle for executing RDMA and atomic
 * operations on a memory region.
 *
 * The data inside this handle is appropriate for passing
 * to remote peers to execute RDMA and atomic operations. The
 * size needed to send the registration handle can be
 * obtained from the btl via the btl_registration_handle_size
 * member. If this size is 0 then no registration data is
 * needed to execute RDMA or atomic operations.
 */
struct mca_btl_base_registration_handle_t;
typedef struct mca_btl_base_registration_handle_t mca_btl_base_registration_handle_t;

/* Wildcard endpoint for use in the register_mem function */
#define MCA_BTL_ENDPOINT_ANY (struct mca_btl_base_endpoint_t *) -1

/* send/recv operations require tag matching */
typedef uint8_t mca_btl_base_tag_t;

#define MCA_BTL_NO_ORDER       255

/*
 * Communication specific defines. There are a number of active message IDs
 * that can be shared between all frameworks that need to communicate (i.e.
 * use the PML or the BTL directly). These IDs are exchanged between the
 * processes, therefore they need to be identical everywhere. The simplest
 * approach is to have them defined as constants, and give each framework a
 * small number. Here is the rule that defines these IDs (they are 8 bits):
 * - the first 3 bits are used to code the framework (i.e. PML, OSC, COLL)
 * - the remaining 5 bits are used internally by the framework, and divided
 *   based on the components' requirements. Therefore, the way the PML and
 *   the OSC frameworks use these defines will be different. For more
 *   information about how these framework IDs are defined, take a look in the
 *   header file associated with the framework.
 */
#define MCA_BTL_AM_FRAMEWORK_MASK   0xD0
#define MCA_BTL_TAG_BTL             0x20
#define MCA_BTL_TAG_PML             0x40
#define MCA_BTL_TAG_OSC_RDMA        0x60
#define MCA_BTL_TAG_USR             0x80
#define MCA_BTL_TAG_MAX             255 /* 1 + highest allowed tag num */

/*
 * Reserved tags for specific BTLs. As multiple BTLs can be active
 * simultaneously, their tags should not collide.
 */
#define MCA_BTL_TAG_IB              (MCA_BTL_TAG_BTL + 0)
#define MCA_BTL_TAG_UDAPL           (MCA_BTL_TAG_BTL + 1)
#define MCA_BTL_TAG_SMCUDA          (MCA_BTL_TAG_BTL + 2)
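
/*
 * Illustrative sketch (not part of this interface; the names below are
 * hypothetical): a framework or library that owns the MCA_BTL_TAG_USR range
 * would number its active message IDs as offsets within that range, keeping
 * every value below MCA_BTL_TAG_MAX:
 *
 *   #define MCA_FOO_TAG_REQUEST  (MCA_BTL_TAG_USR + 0)
 *   #define MCA_FOO_TAG_REPLY    (MCA_BTL_TAG_USR + 1)
 */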

/* preferred protocol */
#define MCA_BTL_FLAGS_SEND            0x0001
#define MCA_BTL_FLAGS_PUT             0x0002
#define MCA_BTL_FLAGS_GET             0x0004
/* btls that set the MCA_BTL_FLAGS_RDMA will always get added to the BML
 * rdma_btls list. This allows the updated one-sided component to
 * use btls that are not otherwise used for send/recv. */
#define MCA_BTL_FLAGS_RDMA (MCA_BTL_FLAGS_GET|MCA_BTL_FLAGS_PUT)

/* btl can send directly from user buffer w/out registration */
#define MCA_BTL_FLAGS_SEND_INPLACE    0x0008

/* btl transport reliability flags - currently used only by the DR PML */
#define MCA_BTL_FLAGS_NEED_ACK        0x0010
#define MCA_BTL_FLAGS_NEED_CSUM       0x0020

/** RDMA put/get calls must have a matching prepare_{src,dst} call
    on the target with the same base (and possibly bound). */
#define MCA_BTL_FLAGS_RDMA_MATCHED    0x0040

/* btl needs local rdma completion */
#define MCA_BTL_FLAGS_RDMA_COMPLETION 0x0080

/* btl can do heterogeneous rdma operations on byte buffers */
#define MCA_BTL_FLAGS_HETEROGENEOUS_RDMA 0x0100

/* btl can support failover if enabled */
#define MCA_BTL_FLAGS_FAILOVER_SUPPORT 0x0200

#define MCA_BTL_FLAGS_CUDA_PUT        0x0400
#define MCA_BTL_FLAGS_CUDA_GET        0x0800
#define MCA_BTL_FLAGS_CUDA_RDMA (MCA_BTL_FLAGS_CUDA_GET|MCA_BTL_FLAGS_CUDA_PUT)
#define MCA_BTL_FLAGS_CUDA_COPY_ASYNC_SEND 0x1000
#define MCA_BTL_FLAGS_CUDA_COPY_ASYNC_RECV 0x2000

/* btl can support signaled operations. BTLs that support this flag are
 * expected to provide a mechanism for asynchronous progress on descriptors
 * where the feature is requested. BTLs should also be aware that users can
 * (and probably will) turn this flag on and off using the MCA variable
 * system.
 */
#define MCA_BTL_FLAGS_SIGNALED        0x4000

/** The BTL supports network atomic operations */
#define MCA_BTL_FLAGS_ATOMIC_OPS      0x08000
/** The BTL supports fetching network atomic operations */
#define MCA_BTL_FLAGS_ATOMIC_FOPS     0x10000

/* Default exclusivity levels */
#define MCA_BTL_EXCLUSIVITY_HIGH     (64*1024) /* internal loopback */
#define MCA_BTL_EXCLUSIVITY_DEFAULT  1024      /* GM/IB/etc. */
#define MCA_BTL_EXCLUSIVITY_LOW      0         /* TCP used as a last resort */

/* error callback flags */
#define MCA_BTL_ERROR_FLAGS_FATAL 0x1
#define MCA_BTL_ERROR_FLAGS_NONFATAL 0x2
#define MCA_BTL_ERROR_FLAGS_ADD_CUDA_IPC 0x4

/** registration flags */
enum {
    /** Allow local write on the registered region. If a region is registered
     * with this flag the registration can be used as the local handle for a
     * btl_get operation. */
    MCA_BTL_REG_FLAG_LOCAL_WRITE   = 0x00000001,
    /** Allow remote read on the registered region. If a region is registered
     * with this flag the registration can be used as the remote handle for a
     * btl_get operation. */
    MCA_BTL_REG_FLAG_REMOTE_READ   = 0x00000002,
    /** Allow remote write on the registered region. If a region is registered
     * with this flag the registration can be used as the remote handle for a
     * btl_put operation. */
    MCA_BTL_REG_FLAG_REMOTE_WRITE  = 0x00000004,
    /** Allow remote atomic operations on the registered region. If a region is
     * registered with this flag the registration can be used as the remote
     * handle for a btl_atomic_op or btl_atomic_fop operation. */
    MCA_BTL_REG_FLAG_REMOTE_ATOMIC = 0x00000008,
    /** Allow any btl operation on the registered region. If a region is registered
     * with this flag the registration can be used as the local or remote handle for
     * any btl operation. */
    MCA_BTL_REG_FLAG_ACCESS_ANY    = 0x0000000f,
#if OPAL_CUDA_GDR_SUPPORT
    /** Region is in GPU memory */
    MCA_BTL_REG_FLAG_CUDA_GPU_MEM  = 0x00010000,
#endif
};
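
/*
 * Illustrative sketch (hypothetical usage; the module's btl_register_mem
 * interface is not declared in this excerpt): a region that will be the
 * target of both btl_put and btl_atomic_op operations might be registered as
 *
 *   handle = btl->btl_register_mem(btl, endpoint, base, size,
 *                                  MCA_BTL_REG_FLAG_REMOTE_WRITE |
 *                                  MCA_BTL_REG_FLAG_REMOTE_ATOMIC);
 */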

/** supported atomic operations */
enum {
    /** The btl supports atomic add */
    MCA_BTL_ATOMIC_SUPPORTS_ADD   = 0x00000001,
    /** The btl supports atomic bitwise and */
    MCA_BTL_ATOMIC_SUPPORTS_AND   = 0x00000200,
    /** The btl supports atomic bitwise or */
    MCA_BTL_ATOMIC_SUPPORTS_OR    = 0x00000400,
    /** The btl supports atomic bitwise exclusive or */
    MCA_BTL_ATOMIC_SUPPORTS_XOR   = 0x00000800,
    /** The btl supports atomic compare-and-swap */
    MCA_BTL_ATOMIC_SUPPORTS_CSWAP = 0x10000000,
    /** The btl guarantees global atomicity (can mix btl atomics with cpu atomics) */
    MCA_BTL_ATOMIC_SUPPORTS_GLOB  = 0x20000000,
};

enum mca_btl_base_atomic_op_t {
    /** Atomic add: (*remote_address) = (*remote_address) + operand */
    MCA_BTL_ATOMIC_ADD = 0x0001,
    /** Atomic and: (*remote_address) = (*remote_address) & operand */
    MCA_BTL_ATOMIC_AND = 0x0011,
    /** Atomic or: (*remote_address) = (*remote_address) | operand */
    MCA_BTL_ATOMIC_OR  = 0x0012,
    /** Atomic xor: (*remote_address) = (*remote_address) ^ operand */
    MCA_BTL_ATOMIC_XOR = 0x0014,
};
typedef enum mca_btl_base_atomic_op_t mca_btl_base_atomic_op_t;
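
/*
 * Semantics sketch (pseudo-code only): MCA_BTL_ATOMIC_ADD with operand 1
 * behaves as if the target executed, atomically,
 *
 *   old = *remote_address;
 *   *remote_address = old + 1;
 *
 * and a fetching variant (btl_atomic_fop) additionally returns 'old' to the
 * initiator.
 */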

/**
 * Asynchronous callback function on completion of an operation.
 * Completion Semantics: The descriptor can be reused or returned to the
 *   BTL via mca_btl_base_module_free_fn_t. The operation has been queued to
 *   the network device or will otherwise make asynchronous progress without
 *   subsequent calls to btl_progress.
 *
 * @param[IN] module      the BTL module
 * @param[IN] endpoint    the BTL endpoint
 * @param[IN] descriptor  the BTL descriptor
 *
 */
typedef void (*mca_btl_base_completion_fn_t)(
    struct mca_btl_base_module_t* module,
    struct mca_btl_base_endpoint_t* endpoint,
    struct mca_btl_base_descriptor_t* descriptor,
    int status);
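
/*
 * Illustrative sketch (hypothetical foo_* names): a typical completion
 * callback hands the result back to the upper layer through des_cbdata and
 * may then release the descriptor via the module's btl_free interface (not
 * declared in this excerpt).
 *
 *   static void foo_send_complete(struct mca_btl_base_module_t *module,
 *                                 struct mca_btl_base_endpoint_t *endpoint,
 *                                 struct mca_btl_base_descriptor_t *descriptor,
 *                                 int status)
 *   {
 *       foo_request_t *req = (foo_request_t *) descriptor->des_cbdata;
 *       foo_request_complete(req, status);
 *   }
 */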

/**
 * Asynchronous callback function on completion of an rdma or atomic operation.
 * Completion Semantics: The rdma or atomic memory operation has completed
 * remotely (i.e., it is remotely visible) and the caller is free to deregister
 * the local_handle or modify the memory in local_address.
 *
 * @param[IN] module        the BTL module
 * @param[IN] endpoint      the BTL endpoint
 * @param[IN] local_address local address for the operation (if any)
 * @param[IN] local_handle  local handle associated with the local_address
 * @param[IN] context       callback context supplied to the rdma/atomic operation
 * @param[IN] cbdata        callback data supplied to the rdma/atomic operation
 * @param[IN] status        status of the operation
 *
 */
typedef void (*mca_btl_base_rdma_completion_fn_t)(
    struct mca_btl_base_module_t* module,
    struct mca_btl_base_endpoint_t* endpoint,
    void *local_address,
    struct mca_btl_base_registration_handle_t *local_handle,
    void *context,
    void *cbdata,
    int status);
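
/*
 * Illustrative sketch (hypothetical foo_* names; btl_deregister_mem is not
 * declared in this excerpt): on completion of a btl_get the caller can
 * release its local registration and mark the request complete.
 *
 *   static void foo_get_complete(struct mca_btl_base_module_t *module,
 *                                struct mca_btl_base_endpoint_t *endpoint,
 *                                void *local_address,
 *                                struct mca_btl_base_registration_handle_t *local_handle,
 *                                void *context, void *cbdata, int status)
 *   {
 *       module->btl_deregister_mem(module, local_handle);
 *       foo_request_complete((foo_request_t *) cbdata, status);
 *   }
 */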

/**
 * Describes a region/segment of memory that is addressable
 * by a BTL.
 *
 * Note: In many cases the alloc and prepare methods of BTLs
 * do not return a mca_btl_base_segment_t but instead return a
 * subclass. Extreme care should be used when modifying
 * BTL segments to prevent overwriting internal BTL data.
 *
 * All BTLs MUST use base segments when calling registered
 * callbacks.
 *
 * BTLs MUST use mca_btl_base_segment_t or a subclass and
 * MUST store their segment length in btl_seg_size. BTLs
 * MUST specify a segment no larger than MCA_BTL_SEG_MAX_SIZE.
 */
struct mca_btl_base_segment_t {
    /** Address of the memory */
    opal_ptr_t seg_addr;
    /** Length in bytes */
    uint64_t   seg_len;
};
typedef struct mca_btl_base_segment_t mca_btl_base_segment_t;
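
/*
 * Illustrative sketch ('buf' and 'len' are hypothetical locals): a base
 * segment describing 'len' bytes starting at 'buf' would be filled as
 *
 *   mca_btl_base_segment_t seg;
 *   seg.seg_addr.pval = buf;
 *   seg.seg_len       = len;
 */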

/**
 * A descriptor that holds the parameters to a send/put/get
 * operation along w/ a callback routine that is called on
 * completion of the request.
 * Note: receive callbacks will store the incoming data segments in
 *       des_segments
 */
struct mca_btl_base_descriptor_t {
    ompi_free_list_item_t super;
    mca_btl_base_segment_t *des_segments;     /**< local segments */
    size_t des_segment_count;                 /**< number of local segments */
    mca_btl_base_completion_fn_t des_cbfunc;  /**< local callback function */
    void* des_cbdata;                         /**< opaque callback data */
    void* des_context;                        /**< more opaque callback data */
    uint32_t des_flags;                       /**< hints to BTL */
    /** order value, this is only
        valid in the local completion callback
        and may be used in subsequent calls to
        btl_alloc, btl_prepare_src/dst to request
        a descriptor that will be ordered w.r.t.
        this descriptor
    */
    uint8_t order;
};
typedef struct mca_btl_base_descriptor_t mca_btl_base_descriptor_t;

OPAL_DECLSPEC OBJ_CLASS_DECLARATION(mca_btl_base_descriptor_t);

#define MCA_BTL_DES_FLAGS_PRIORITY          0x0001
/* Allow the BTL to dispose of the descriptor once the associated
 * callback has been triggered.
 */
#define MCA_BTL_DES_FLAGS_BTL_OWNERSHIP     0x0002
/* Allow the BTL to avoid calling the descriptor callback
 * if the send succeeded in btl_send (i.e. in the fast path).
 */
#define MCA_BTL_DES_SEND_ALWAYS_CALLBACK    0x0004

/* Tell the PML that the copy is being done asynchronously
 */
#define MCA_BTL_DES_FLAGS_CUDA_COPY_ASYNC   0x0008

/* Type of transfer that will be done with this frag.
 */
#define MCA_BTL_DES_FLAGS_PUT               0x0010
#define MCA_BTL_DES_FLAGS_GET               0x0020

/* Ask the BTL to wake the remote process (send/sendi) or local process
 * (put/get) to handle this message. The BTL may ignore this flag if
 * signaled operations are not supported.
 */
#define MCA_BTL_DES_FLAGS_SIGNAL            0x0040
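
/*
 * Illustrative sketch (hypothetical names; btl_alloc and btl_send are not
 * declared in this excerpt): a caller typically allocates a descriptor,
 * attaches its completion callback and flags, and hands it to the BTL.
 *
 *   mca_btl_base_descriptor_t *des =
 *       btl->btl_alloc(btl, endpoint, MCA_BTL_NO_ORDER, size,
 *                      MCA_BTL_DES_FLAGS_BTL_OWNERSHIP |
 *                      MCA_BTL_DES_SEND_ALWAYS_CALLBACK);
 *   des->des_cbfunc = foo_send_complete;
 *   des->des_cbdata = req;
 *   btl->btl_send(btl, endpoint, des, MCA_BTL_TAG_PML);
 */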

/**
 * Maximum number of allowed segments in src/dst fields of a descriptor.
 */
#define MCA_BTL_DES_MAX_SEGMENTS 16

/**
 * Maximum size of a BTL segment (NTH: does it really save us anything
 * to hardcode this?)
 */
#define MCA_BTL_SEG_MAX_SIZE 256

/**
 * Maximum size of a BTL registration handle in bytes
 */
#define MCA_BTL_REG_HANDLE_MAX_SIZE 256

/*
 * BTL base header, stores the tag at a minimum
 */
struct mca_btl_base_header_t{
    mca_btl_base_tag_t tag;
};
typedef struct mca_btl_base_header_t mca_btl_base_header_t;

#define MCA_BTL_BASE_HEADER_HTON(hdr)
#define MCA_BTL_BASE_HEADER_NTOH(hdr)

/*
 *  BTL component interface functions and datatype.
 */

/**
 * MCA->BTL Initializes the BTL component and creates specific BTL
 * module(s).
 *
 * @param num_btls (OUT) Returns the number of btl modules created, or 0
 *                       if the transport is not available.
 *
 * @param enable_progress_threads (IN) Whether this component is
 * allowed to run a hidden/progress thread or not.
 *
 * @param enable_mpi_threads (IN) Whether support for multiple MPI
 * threads is enabled or not (i.e., MPI_THREAD_MULTIPLE), which
 * indicates whether multiple threads may invoke this component
 * simultaneously or not.
 *
 * @return Array of pointers to BTL modules, or NULL if the transport
 *         is not available.
 *
 * During component initialization, the BTL component should discover
 * the physical devices that are available for the given transport,
 * and create a BTL module to represent each device. Any addressing
 * information required by peers to reach the device should be published
 * during this function via the modex_send() interface.
 *
 */
typedef struct mca_btl_base_module_t** (*mca_btl_base_component_init_fn_t)(
    int *num_btls,
    bool enable_progress_threads,
    bool enable_mpi_threads
);
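
/*
 * Illustrative sketch (hypothetical component; the foo_* helpers are
 * assumed): a component init function discovers its devices, creates one
 * module per device, publishes addressing information via modex_send(), and
 * returns the module array.
 *
 *   static struct mca_btl_base_module_t **
 *   mca_btl_foo_component_init(int *num_btls, bool enable_progress_threads,
 *                              bool enable_mpi_threads)
 *   {
 *       int ndevs = foo_discover_devices();
 *       if (0 == ndevs) {
 *           *num_btls = 0;
 *           return NULL;
 *       }
 *       struct mca_btl_base_module_t **btls = calloc(ndevs, sizeof(*btls));
 *       for (int i = 0; i < ndevs; ++i) {
 *           btls[i] = foo_create_module(i);
 *       }
 *       foo_publish_modex();
 *       *num_btls = ndevs;
 *       return btls;
 *   }
 */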

/**
 * MCA->BTL Called to progress outstanding requests for
 * non-threaded polling environments.
 *
 * @return           Count of "completions", a metric of
 *                   how many items were completed in the call
 *                   to progress.
 */
typedef int (*mca_btl_base_component_progress_fn_t)(void);
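
/*
 * Illustrative sketch (hypothetical names): because progress is invoked on
 * the component, the implementation loops over every module it created and
 * returns the total number of completions.
 *
 *   static int mca_btl_foo_component_progress(void)
 *   {
 *       int count = 0;
 *       for (int i = 0; i < mca_btl_foo_component.module_count; ++i) {
 *           count += foo_progress_module(mca_btl_foo_component.modules[i]);
 *       }
 *       return count;
 *   }
 */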

/**
 * Callback function that is called asynchronously on receipt
 * of data by the transport layer.
 * Note that the mca_btl_base_descriptor_t is only valid within the
 * completion function; this implies that all data payload in the
 * mca_btl_base_descriptor_t must be copied out within this callback or
 * forfeited back to the BTL.
 * Note also that descriptor segments (des_segments) must be base
 * segments for all callbacks.
 *
 * @param[IN] btl        BTL module
 * @param[IN] tag        The active message receive callback tag value
 * @param[IN] descriptor The BTL descriptor (contains the receive payload)
 * @param[IN] cbdata     Opaque callback data
 */
typedef void (*mca_btl_base_module_recv_cb_fn_t)(
    struct mca_btl_base_module_t* btl,
    mca_btl_base_tag_t tag,
    mca_btl_base_descriptor_t* descriptor,
    void* cbdata
);

typedef struct mca_btl_active_message_callback_t {
    mca_btl_base_module_recv_cb_fn_t cbfunc;
    void* cbdata;
} mca_btl_active_message_callback_t;

OPAL_DECLSPEC extern
mca_btl_active_message_callback_t mca_btl_base_active_message_trigger[MCA_BTL_TAG_MAX];
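
/*
 * Illustrative sketch (hypothetical callback name): a framework that drives
 * the BTL directly installs its receive callback in the trigger slot that
 * matches its active message tag; the module's btl_register interface (not
 * declared in this excerpt) typically does this on the caller's behalf.
 *
 *   mca_btl_base_active_message_trigger[MCA_BTL_TAG_USR].cbfunc = foo_recv_cb;
 *   mca_btl_base_active_message_trigger[MCA_BTL_TAG_USR].cbdata = NULL;
 */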

/**
 *  BTL component descriptor. Contains component version information
 *  and component open/close/init functions.
 */

struct mca_btl_base_component_2_0_0_t {
    mca_base_component_t btl_version;
    mca_base_component_data_t btl_data;
    mca_btl_base_component_init_fn_t btl_init;
    mca_btl_base_component_progress_fn_t btl_progress;
};
typedef struct mca_btl_base_component_2_0_0_t mca_btl_base_component_2_0_0_t;
typedef struct mca_btl_base_component_2_0_0_t mca_btl_base_component_t;

/*  add the 1_0_0_t typedef for source compatibility
 *  we can do this safely because 1_0_0 components are the same as
 *  1_0_1 components, the difference is in the btl module.
 *  Fortunately the only difference in the module is an additional interface
 *  function added to 1_0_1. We can therefore safely treat an older module
 *  just like the new one so long as we check the component version
 *  prior to invoking the new interface function.
 */
typedef struct mca_btl_base_component_2_0_0_t mca_btl_base_component_1_0_1_t;
typedef struct mca_btl_base_component_2_0_0_t mca_btl_base_component_1_0_0_t;

/*
 * BTL module interface functions and datatype.
 */

/**
 * MCA->BTL Clean up any resources held by BTL module
 * before the module is unloaded.
 *
 * @param btl (IN)   BTL module.
 * @return           OPAL_SUCCESS or error status on failure.
 *
 * Prior to unloading a BTL module, the MCA framework will call
 * the BTL finalize method of the module. Any resources held by
 * the BTL should be released and if required the memory corresponding
 * to the BTL module freed.
 *
 */
typedef int (*mca_btl_base_module_finalize_fn_t)(
    struct mca_btl_base_module_t* btl
);
|
|
|
|
|
|
|
|
/**
|
2007-07-25 21:26:23 +04:00
|
|
|
* BML->BTL notification of change in the process list.
|
2005-06-30 09:50:55 +04:00
|
|
|
*
|
|
|
|
* @param btl (IN) BTL module
|
|
|
|
* @param nprocs (IN) Number of processes
|
2007-07-25 21:26:23 +04:00
|
|
|
* @param procs (IN) Array of processes
|
|
|
|
* @param endpoint (OUT) Array of mca_btl_base_endpoint_t structures by BTL.
|
2005-06-30 09:50:55 +04:00
|
|
|
* @param reachable (OUT) Bitmask indicating set of peer processes that are reachable by this BTL.
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 04:47:28 +04:00
|
|
|
* @return OPAL_SUCCESS or error status on failure.
|
2005-06-30 09:50:55 +04:00
|
|
|
*
|
2007-07-25 21:26:23 +04:00
|
|
|
* The mca_btl_base_module_add_procs_fn_t() is called by the BML to
|
2005-06-30 09:50:55 +04:00
|
|
|
* determine the set of BTLs that should be used to reach each process.
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 04:47:28 +04:00
|
|
|
* Any addressing information exported by the peer via the modex_send()
|
2005-06-30 09:50:55 +04:00
|
|
|
* function should be available during this call via the corresponding
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
 * modex_recv() function. The BTL may utilize this information to
 * determine reachability of each peer process.
 *
 * For each process that is reachable by the BTL, the bit corresponding to the index
 * into the proc array (nprocs) should be set in the reachable bitmask. The BTL
 * will return an array of pointers to a data structure defined
 * by the BTL that is then returned to the BTL on subsequent calls to the BTL data
 * transfer functions (e.g. btl_send). This may be used by the BTL to cache any addressing
 * or connection information (e.g. TCP socket, IB queue pair).
 */
typedef int (*mca_btl_base_module_add_procs_fn_t)(
    struct mca_btl_base_module_t* btl,
    size_t nprocs,
    struct opal_proc_t** procs,
    struct mca_btl_base_endpoint_t** endpoints,
    struct opal_bitmap_t* reachable
);
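
/*
 * Illustrative sketch (assumed usage, not part of the interface definition):
 * how an upper layer such as the BML might call btl_add_procs() and test the
 * returned reachability bitmap.  "btl", "nprocs" and "procs" are assumed to
 * come from component init and the local process database, and the
 * opal_bitmap_init()/opal_bitmap_is_set_bit() helpers are assumed from
 * opal/class/opal_bitmap.h.
 *
 *   struct mca_btl_base_endpoint_t **endpoints = calloc(nprocs, sizeof(*endpoints));
 *   opal_bitmap_t reachable;
 *   OBJ_CONSTRUCT(&reachable, opal_bitmap_t);
 *   opal_bitmap_init(&reachable, (int) nprocs);
 *
 *   if (OPAL_SUCCESS == btl->btl_add_procs(btl, nprocs, procs, endpoints, &reachable)) {
 *       for (size_t i = 0; i < nprocs; ++i) {
 *           if (opal_bitmap_is_set_bit(&reachable, (int) i)) {
 *               ... endpoints[i] may now be used with btl_send() and friends ...
 *           }
 *       }
 *   }
 *   OBJ_DESTRUCT(&reachable);
 */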


/**
 * Notification of change to the process list.
 *
 * @param btl (IN)     BTL module
 * @param nprocs (IN)  Number of processes
 * @param procs (IN)   Set of processes
 * @param peer (IN)    Set of peer addressing information.
 * @return             Status indicating if cleanup was successful
 *
 * When the process list changes, the BML notifies the BTL of the
 * change, to provide the opportunity to clean up or release any
 * resources associated with the peer.
 */
typedef int (*mca_btl_base_module_del_procs_fn_t)(
    struct mca_btl_base_module_t* btl,
    size_t nprocs,
    struct opal_proc_t** procs,
    struct mca_btl_base_endpoint_t** peer
);


/**
 * Register a callback function that is called on receipt
 * of a fragment.
 *
 * @param[IN] btl      BTL module
 * @param[IN] tag      tag value of this callback
 *                     (specified on subsequent send operations)
 * @param[IN] cbfunc   The callback function
 * @param[IN] cbdata   Opaque callback data
 *
 * @return OPAL_SUCCESS The callback was registered successfully
 * @return OPAL_ERROR   The callback was NOT registered successfully
 */
typedef int (*mca_btl_base_module_register_fn_t)(
    struct mca_btl_base_module_t* btl,
    mca_btl_base_tag_t tag,
    mca_btl_base_module_recv_cb_fn_t cbfunc,
    void* cbdata
);
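
/*
 * Illustrative sketch (assumed usage): registering a receive callback for one
 * tag.  "my_recv_cb" is a hypothetical function matching
 * mca_btl_base_module_recv_cb_fn_t, MY_AM_TAG is a hypothetical tag value
 * agreed on with the sending side, and opal_output() is assumed from
 * opal/util/output.h.
 *
 *   int rc = btl->btl_register(btl, MY_AM_TAG, my_recv_cb, NULL);
 *   if (OPAL_SUCCESS != rc) {
 *       opal_output(0, "btl_register failed with %d", rc);
 *   }
 */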


/**
 * Callback function that is called asynchronously on receipt
 * of an error from the transport layer
 *
 * @param[IN] btl     BTL module
 * @param[IN] flags   type of error
 * @param[IN] errproc process that had an error
 * @param[IN] btlinfo descriptive string from the BTL
 */
typedef void (*mca_btl_base_module_error_cb_fn_t)(
    struct mca_btl_base_module_t* btl,
    int32_t flags,
    struct opal_proc_t* errproc,
    char* btlinfo
);


/**
 * Register a callback function that is called on receipt
 * of an error.
 *
 * @param[IN] btl     BTL module
 * @param[IN] cbfunc  The callback function
 *
 * @return OPAL_SUCCESS The callback was registered successfully
 * @return OPAL_ERROR   The callback was NOT registered successfully
 */
typedef int (*mca_btl_base_module_register_error_fn_t)(
    struct mca_btl_base_module_t* btl,
    mca_btl_base_module_error_cb_fn_t cbfunc
);
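
/*
 * Illustrative sketch (assumed usage): installing a default error handler on a
 * module.  "my_btl_error_cb" is a hypothetical function with the
 * mca_btl_base_module_error_cb_fn_t signature declared above.
 *
 *   if (NULL != btl->btl_register_error) {
 *       (void) btl->btl_register_error(btl, my_btl_error_cb);
 *   }
 */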


/**
 * Allocate a descriptor with a segment of the requested size.
 * Note that the BTL layer may choose to return a smaller size
 * if it cannot support the request. The order tag value ensures that
 * operations on the descriptor that is allocated will be
 * ordered w.r.t. a previous operation on a particular descriptor.
 * Ordering is only guaranteed if the previous descriptor had its
 * local completion callback function called and the order tag of
 * that descriptor is only valid upon the local completion callback function.
 *
 * @param btl (IN)      BTL module
 * @param endpoint (IN) BTL addressing information
 * @param order (IN)    The ordering tag (may be MCA_BTL_NO_ORDER)
 * @param size (IN)     Request segment size.
 * @param flags (IN)    Flags for the descriptor
 */
typedef mca_btl_base_descriptor_t* (*mca_btl_base_module_alloc_fn_t)(
    struct mca_btl_base_module_t* btl,
    struct mca_btl_base_endpoint_t* endpoint,
    uint8_t order,
    size_t size,
    uint32_t flags
);
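
/*
 * Illustrative sketch (assumed usage): allocating an eager-sized descriptor
 * and returning it if it turns out not to be needed.  "btl" and "endpoint"
 * are assumed to come from add_procs(); a flags value of 0 is used for
 * brevity.
 *
 *   mca_btl_base_descriptor_t *des =
 *       btl->btl_alloc(btl, endpoint, MCA_BTL_NO_ORDER, btl->btl_eager_limit, 0);
 *   if (NULL != des) {
 *       ... pack data into the descriptor's segment(s) ...
 *       btl->btl_free(btl, des);    (only if the descriptor ends up unused)
 *   }
 */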


/**
 * Return a descriptor allocated from this BTL via alloc/prepare.
 * A descriptor can only be deallocated after its local completion
 * callback function has been called for all send/put/get operations.
 *
 * @param btl (IN)        BTL module
 * @param descriptor (IN) Descriptor allocated from the BTL
 */
typedef int (*mca_btl_base_module_free_fn_t)(
    struct mca_btl_base_module_t* btl,
    mca_btl_base_descriptor_t* descriptor
);


/**
 * Prepare a descriptor for send/put/get using the supplied
 * convertor. If the convertor references data that is contiguous,
 * the descriptor may simply point to the user buffer. Otherwise,
 * this routine is responsible for allocating buffer space and
 * packing if required.
 *
 * The descriptor returned can be used in multiple concurrent operations
 * (send/put/get) unless the BTL has the MCA_BTL_FLAGS_RDMA_MATCHED flag set,
 * in which case a corresponding prepare call must accompany the put/get call;
 * in addition, the address and length that is put/get must match the address
 * and length which is prepared.
 *
 * The order tag value ensures that operations on the
 * descriptor that is prepared will be ordered w.r.t. a previous
 * operation on a particular descriptor. Ordering is only guaranteed if
 * the previous descriptor had its local completion callback function
 * called and the order tag of that descriptor is only valid upon the local
 * completion callback function.
 *
 * @param btl (IN)          BTL module
 * @param endpoint (IN)     BTL peer addressing
 * @param convertor (IN)    Data type convertor
 * @param order (IN)        The ordering tag (may be MCA_BTL_NO_ORDER)
 * @param reserve (IN)      Additional bytes requested by upper layer to precede user data
 * @param size (IN/OUT)     Number of bytes to prepare (IN),
 *                          number of bytes actually prepared (OUT)
 * @param flags (IN)        Flags for the prepared descriptor
 */
typedef struct mca_btl_base_descriptor_t* (*mca_btl_base_module_prepare_fn_t)(
    struct mca_btl_base_module_t* btl,
    struct mca_btl_base_endpoint_t* endpoint,
    struct opal_convertor_t* convertor,
    uint8_t order,
    size_t reserve,
    size_t* size,
    uint32_t flags
);


/**
 * @brief Register a memory region for put/get/atomic operations.
 *
 * @param btl (IN)      BTL module
 * @param endpoint (IN) BTL addressing information (or NULL for all endpoints)
 * @param base (IN)     Pointer to start of region
 * @param size (IN)     Size of region
 * @param flags (IN)    Flags indicating what operation will be performed. Valid
 *                      values are MCA_BTL_DES_FLAGS_PUT, MCA_BTL_DES_FLAGS_GET,
 *                      and MCA_BTL_DES_FLAGS_ATOMIC
 *
 * @returns a memory registration handle valid for both local and remote operations
 * @returns NULL if the region could not be registered
 *
 * This function registers the specified region with the hardware for use with
 * the btl_put, btl_get, btl_atomic_cas, btl_atomic_op, and btl_atomic_fop
 * functions. Care should be taken to not hold an excessive number of registrations
 * as they may use limited system/NIC resources.
 */
typedef struct mca_btl_base_registration_handle_t *(*mca_btl_base_module_register_mem_fn_t)(
    struct mca_btl_base_module_t* btl, struct mca_btl_base_endpoint_t *endpoint, void *base,
    size_t size, uint32_t flags);
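
/*
 * Illustrative sketch (assumed usage): registering a local buffer before an
 * RDMA get and releasing the registration afterwards.  "buf" and "len" are
 * hypothetical, and error handling is abbreviated.
 *
 *   struct mca_btl_base_registration_handle_t *handle = NULL;
 *   if (NULL != btl->btl_register_mem) {
 *       handle = btl->btl_register_mem(btl, endpoint, buf, len, MCA_BTL_DES_FLAGS_GET);
 *       if (NULL == handle) {
 *           ... fall back to an active-message (send/recv) protocol ...
 *       }
 *   }
 *   ... use "handle" as the local_handle argument of btl_get() ...
 *   if (NULL != handle && NULL != btl->btl_deregister_mem) {
 *       btl->btl_deregister_mem(btl, handle);
 *   }
 */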


/**
 * @brief Deregister a memory region
 *
 * @param btl (IN)    BTL module region was registered with
 * @param handle (IN) BTL registration handle to deregister
 *
 * This function deregisters the memory region associated with the specified handle. Care
 * should be taken to not perform any RDMA or atomic operation on this memory region
 * after it is deregistered. It is erroneous to specify a memory handle associated with
 * a remote node.
 */
typedef int (*mca_btl_base_module_deregister_mem_fn_t)(
    struct mca_btl_base_module_t* btl, struct mca_btl_base_registration_handle_t *handle);


/**
 * Initiate an asynchronous send.
 * Completion Semantics: the descriptor has been queued for a send operation;
 *                       the BTL now controls the descriptor until the local
 *                       completion callback is made on the descriptor.
 *
 * All BTLs allow multiple concurrent asynchronous send operations on a descriptor.
 *
 * @param btl (IN)         BTL module
 * @param endpoint (IN)    BTL addressing information
 * @param descriptor (IN)  Description of the data to be transferred
 * @param tag (IN)         The tag value used to notify the peer.
 *
 * @retval OPAL_SUCCESS     The descriptor was successfully queued for a send
 * @retval OPAL_ERROR       The descriptor was NOT successfully queued for a send
 * @retval OPAL_ERR_UNREACH The endpoint is not reachable
 */
typedef int (*mca_btl_base_module_send_fn_t)(
    struct mca_btl_base_module_t* btl,
    struct mca_btl_base_endpoint_t* endpoint,
    struct mca_btl_base_descriptor_t* descriptor,
    mca_btl_base_tag_t tag
);
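
/*
 * Illustrative sketch (assumed usage): sending a small active message.  The
 * descriptor is obtained with btl_alloc(); "MY_AM_TAG", "msg_len" and the
 * payload packing step are hypothetical.  On OPAL_SUCCESS the BTL owns the
 * descriptor until the local completion callback fires.
 *
 *   mca_btl_base_descriptor_t *des =
 *       btl->btl_alloc(btl, endpoint, MCA_BTL_NO_ORDER, msg_len, 0);
 *   if (NULL == des) {
 *       return OPAL_ERR_OUT_OF_RESOURCE;
 *   }
 *   ... copy msg_len bytes of payload into the descriptor's segment ...
 *   int rc = btl->btl_send(btl, endpoint, des, MY_AM_TAG);
 *   if (OPAL_SUCCESS != rc) {
 *       btl->btl_free(btl, des);
 *   }
 */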


/**
 * Initiate an immediate blocking send.
 * Completion Semantics: the BTL will make a best effort
 * to send the header and "size" bytes from the datatype using the convertor.
 * The header is guaranteed to be delivered entirely in the first segment.
 * Should the BTL be unable to deliver the data due to resource constraints,
 * the BTL will return a descriptor (via the OUT param)
 * of size "payload_size + header_size".
 *
 * @param btl (IN)             BTL module
 * @param endpoint (IN)        BTL addressing information
 * @param convertor (IN)       Data type convertor
 * @param header (IN)          Pointer to header.
 * @param header_size (IN)     Size of header.
 * @param payload_size (IN)    Size of payload (from convertor).
 * @param order (IN)           The ordering tag (may be MCA_BTL_NO_ORDER)
 * @param flags (IN)           Flags.
 * @param tag (IN)             The tag value used to notify the peer.
 * @param descriptor (OUT)     The descriptor to be returned if the data cannot
 *                             be sent immediately (may be NULL).
 *
 * @retval OPAL_SUCCESS           The send was successfully queued
 * @retval OPAL_ERROR             The send failed
 * @retval OPAL_ERR_UNREACH       The endpoint is not reachable
 * @retval OPAL_ERR_RESOURCE_BUSY The BTL is busy; a descriptor will be returned
 *                                (via the OUT param) if descriptors are available
 */
typedef int (*mca_btl_base_module_sendi_fn_t)(
    struct mca_btl_base_module_t* btl,
    struct mca_btl_base_endpoint_t* endpoint,
    struct opal_convertor_t* convertor,
    void* header,
    size_t header_size,
    size_t payload_size,
    uint8_t order,
    uint32_t flags,
    mca_btl_base_tag_t tag,
    mca_btl_base_descriptor_t** descriptor
);


/**
 * Initiate an asynchronous put.
 * Completion Semantics: if this function returns 1 then the operation
 *                       is complete. A return of OPAL_SUCCESS indicates
 *                       the put operation has been queued with the
 *                       network. The local_handle cannot be deregistered
 *                       until all outstanding operations on that handle
 *                       have been completed.
 *
 * @param btl (IN)            BTL module
 * @param endpoint (IN)       BTL addressing information
 * @param local_address (IN)  Local address to put from (registered)
 * @param remote_address (IN) Remote address to put to (registered remotely)
 * @param local_handle (IN)   Registration handle for region containing
 *                            (local_address, local_address + size)
 * @param remote_handle (IN)  Remote registration handle for region containing
 *                            (remote_address, remote_address + size)
 * @param size (IN)           Number of bytes to put
 * @param flags (IN)          Flags for this put operation
 * @param order (IN)          Ordering
 * @param cbfunc (IN)         Function to call on completion (if queued)
 * @param cbcontext (IN)      Context for the callback
 * @param cbdata (IN)         Data for callback
 *
 * @retval OPAL_SUCCESS The descriptor was successfully queued for a put
 * @retval OPAL_ERROR   The descriptor was NOT successfully queued for a put
 * @retval OPAL_ERR_OUT_OF_RESOURCE Insufficient resources to queue the put
 *                                  operation. Try again later
 * @retval OPAL_ERR_NOT_AVAILABLE   Put cannot be performed due to size or
 *                                  alignment restrictions.
 */
typedef int (*mca_btl_base_module_put_fn_t) (struct mca_btl_base_module_t *btl,
    struct mca_btl_base_endpoint_t *endpoint, void *local_address,
    uint64_t remote_address, struct mca_btl_base_registration_handle_t *local_handle,
    struct mca_btl_base_registration_handle_t *remote_handle, size_t size, int flags,
    int order, mca_btl_base_rdma_completion_fn_t cbfunc, void *cbcontext, void *cbdata);
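
/*
 * Illustrative sketch (assumed usage): a put between two registered regions.
 * "local_handle" comes from btl_register_mem() on this process, while
 * "remote_handle" and "remote_address" were received from the peer (for
 * example via an active message).  "put_complete_cb" is a hypothetical
 * callback with the mca_btl_base_rdma_completion_fn_t signature, and passing
 * MCA_BTL_NO_ORDER as the order is assumed to be acceptable here.
 *
 *   int rc = btl->btl_put(btl, endpoint, local_buf, remote_address,
 *                         local_handle, remote_handle, len, 0,
 *                         MCA_BTL_NO_ORDER, put_complete_cb, NULL, NULL);
 *   if (1 == rc) {
 *       ... the put completed immediately ...
 *   } else if (OPAL_SUCCESS != rc) {
 *       ... queue the operation for retry (e.g. OPAL_ERR_OUT_OF_RESOURCE) ...
 *   }
 */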


/**
 * Initiate an asynchronous get.
 * Completion Semantics: if this function returns 1 then the operation
 *                       is complete. A return of OPAL_SUCCESS indicates
 *                       the get operation has been queued with the
 *                       network. The local_handle cannot be deregistered
 *                       until all outstanding operations on that handle
 *                       have been completed.
 *
 * @param btl (IN)            BTL module
 * @param endpoint (IN)       BTL addressing information
 * @param local_address (IN)  Local address to get into (registered)
 * @param remote_address (IN) Remote address to get from (registered remotely)
 * @param local_handle (IN)   Registration handle for region containing
 *                            (local_address, local_address + size)
 * @param remote_handle (IN)  Remote registration handle for region containing
 *                            (remote_address, remote_address + size)
 * @param size (IN)           Number of bytes to get
 * @param flags (IN)          Flags for this get operation
 * @param order (IN)          Ordering
 * @param cbfunc (IN)         Function to call on completion (if queued)
 * @param cbcontext (IN)      Context for the callback
 * @param cbdata (IN)         Data for callback
 *
 * @retval OPAL_SUCCESS The descriptor was successfully queued for a get
 * @retval OPAL_ERROR   The descriptor was NOT successfully queued for a get
 * @retval OPAL_ERR_OUT_OF_RESOURCE Insufficient resources to queue the get
 *                                  operation. Try again later
 * @retval OPAL_ERR_NOT_AVAILABLE   Get cannot be performed due to size or
 *                                  alignment restrictions.
 */
typedef int (*mca_btl_base_module_get_fn_t) (struct mca_btl_base_module_t *btl,
    struct mca_btl_base_endpoint_t *endpoint, void *local_address,
    uint64_t remote_address, struct mca_btl_base_registration_handle_t *local_handle,
    struct mca_btl_base_registration_handle_t *remote_handle, size_t size, int flags,
    int order, mca_btl_base_rdma_completion_fn_t cbfunc, void *cbcontext, void *cbdata);
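
/*
 * Illustrative sketch (assumed usage): issuing a get while respecting the
 * module's advertised limits.  "get_complete_cb" is a hypothetical
 * mca_btl_base_rdma_completion_fn_t; btl_get_limit is one of the module
 * attributes documented further below.
 *
 *   if (len <= btl->btl_get_limit) {
 *       int rc = btl->btl_get(btl, endpoint, local_buf, remote_address,
 *                             local_handle, remote_handle, len, 0,
 *                             MCA_BTL_NO_ORDER, get_complete_cb, NULL, NULL);
 *       if (OPAL_ERR_NOT_AVAILABLE == rc) {
 *           ... size/alignment not supported: fall back to another protocol ...
 *       }
 *   }
 */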


/**
 * Initiate an asynchronous atomic operation.
 * Completion Semantics: if this function returns 1 then the operation
 *                       is complete. A return of OPAL_SUCCESS indicates
 *                       the atomic operation has been queued with the
 *                       network.
 *
 * @param btl (IN)            BTL module
 * @param endpoint (IN)       BTL addressing information
 * @param remote_address (IN) Remote address to operate on (registered remotely)
 * @param remote_handle (IN)  Remote registration handle for region containing
 *                            (remote_address, remote_address + 8)
 * @param op (IN)             Operation to perform
 * @param operand (IN)        Operand for the operation
 * @param flags (IN)          Flags for this atomic operation
 * @param order (IN)          Ordering
 * @param cbfunc (IN)         Function to call on completion (if queued)
 * @param cbcontext (IN)      Context for the callback
 * @param cbdata (IN)         Data for callback
 *
 * @retval OPAL_SUCCESS The operation was successfully queued
 * @retval 1            The operation is complete
 * @retval OPAL_ERROR   The operation was NOT successfully queued
 * @retval OPAL_ERR_OUT_OF_RESOURCE Insufficient resources to queue the atomic
 *                                  operation. Try again later
 * @retval OPAL_ERR_NOT_AVAILABLE   Atomic operation cannot be performed due to
 *                                  alignment restrictions or the operation {op} is not
 *                                  supported by the hardware.
 *
 * After the operation is complete the remote address specified by {remote_address} and
 * {remote_handle} will be updated with (*remote_address) = (*remote_address) op operand.
 * The btl will guarantee consistency of atomic operations performed via the btl. Note,
 * however, that not all btls will provide consistency between btl atomic operations and
 * cpu atomics.
 */
typedef int (*mca_btl_base_module_atomic_op64_fn_t) (struct mca_btl_base_module_t *btl,
    struct mca_btl_base_endpoint_t *endpoint, uint64_t remote_address,
    struct mca_btl_base_registration_handle_t *remote_handle, mca_btl_base_atomic_op_t op,
    uint64_t operand, int flags, int order, mca_btl_base_rdma_completion_fn_t cbfunc,
    void *cbcontext, void *cbdata);


/**
 * Initiate an asynchronous fetching atomic operation.
 * Completion Semantics: if this function returns 1 then the operation
 *                       is complete. A return of OPAL_SUCCESS indicates
 *                       the atomic operation has been queued with the
 *                       network.
 *
 * @param btl (IN)            BTL module
 * @param endpoint (IN)       BTL addressing information
 * @param local_address (OUT) Local address to store the result in
 * @param remote_address (IN) Remote address to perform the operation on (registered remotely)
 * @param local_handle (IN)   Local registration handle for region containing
 *                            (local_address, local_address + 8)
 * @param remote_handle (IN)  Remote registration handle for region containing
 *                            (remote_address, remote_address + 8)
 * @param op (IN)             Operation to perform
 * @param operand (IN)        Operand for the operation
 * @param flags (IN)          Flags for this atomic operation
 * @param order (IN)          Ordering
 * @param cbfunc (IN)         Function to call on completion (if queued)
 * @param cbcontext (IN)      Context for the callback
 * @param cbdata (IN)         Data for callback
 *
 * @retval OPAL_SUCCESS The operation was successfully queued
 * @retval 1            The operation is complete
 * @retval OPAL_ERROR   The operation was NOT successfully queued
 * @retval OPAL_ERR_OUT_OF_RESOURCE Insufficient resources to queue the atomic
 *                                  operation. Try again later
 * @retval OPAL_ERR_NOT_AVAILABLE   Atomic operation cannot be performed due to
 *                                  alignment restrictions or the operation {op} is not
 *                                  supported by the hardware.
 *
 * After the operation is complete the remote address specified by {remote_address} and
 * {remote_handle} will be updated with (*remote_address) = (*remote_address) op operand.
 * {local_address} will be updated with the previous value stored in {remote_address}.
 * The btl will guarantee consistency of atomic operations performed via the btl. Note,
 * however, that not all btls will provide consistency between btl atomic operations and
 * cpu atomics.
 */
typedef int (*mca_btl_base_module_atomic_fop64_fn_t) (struct mca_btl_base_module_t *btl,
    struct mca_btl_base_endpoint_t *endpoint, void *local_address, uint64_t remote_address,
    struct mca_btl_base_registration_handle_t *local_handle,
    struct mca_btl_base_registration_handle_t *remote_handle, mca_btl_base_atomic_op_t op,
    uint64_t operand, int flags, int order, mca_btl_base_rdma_completion_fn_t cbfunc,
    void *cbcontext, void *cbdata);
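
/*
 * Illustrative sketch (assumed usage): a 64-bit fetch-and-add on a remote
 * counter.  MCA_BTL_ATOMIC_ADD is assumed to be one of the
 * mca_btl_base_atomic_op_t values, "fop_complete_cb" is a hypothetical
 * completion callback, and *result_buf receives the value previously stored
 * at the remote address.
 *
 *   int rc = btl->btl_atomic_fop(btl, endpoint, result_buf, remote_counter_addr,
 *                                local_handle, remote_handle, MCA_BTL_ATOMIC_ADD,
 *                                1, 0, MCA_BTL_NO_ORDER, fop_complete_cb, NULL, NULL);
 *   if (1 == rc) {
 *       ... operation completed immediately; *result_buf is already valid ...
 *   }
 */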


/**
 * Initiate an asynchronous compare and swap operation.
 * Completion Semantics: if this function returns 1 then the operation
 *                       is complete. A return of OPAL_SUCCESS indicates
 *                       the atomic operation has been queued with the
 *                       network.
 *
 * @param btl (IN)            BTL module
 * @param endpoint (IN)       BTL addressing information
 * @param local_address (OUT) Local address to store the result in
 * @param remote_address (IN) Remote address to perform the operation on (registered remotely)
 * @param local_handle (IN)   Local registration handle for region containing
 *                            (local_address, local_address + 8)
 * @param remote_handle (IN)  Remote registration handle for region containing
 *                            (remote_address, remote_address + 8)
 * @param compare (IN)        Operand for the comparison
 * @param value (IN)          Value to store on success
 * @param flags (IN)          Flags for this atomic operation
 * @param order (IN)          Ordering
 * @param cbfunc (IN)         Function to call on completion (if queued)
 * @param cbcontext (IN)      Context for the callback
 * @param cbdata (IN)         Data for callback
 *
 * @retval OPAL_SUCCESS The operation was successfully queued
 * @retval 1            The operation is complete
 * @retval OPAL_ERROR   The operation was NOT successfully queued
 * @retval OPAL_ERR_OUT_OF_RESOURCE Insufficient resources to queue the atomic
 *                                  operation. Try again later
 * @retval OPAL_ERR_NOT_AVAILABLE   Atomic operation cannot be performed due to
 *                                  alignment restrictions or the operation is not
 *                                  supported by the hardware.
 *
 * After the operation is complete the remote address specified by {remote_address} and
 * {remote_handle} will be updated with {value} if *remote_address == compare.
 * {local_address} will be updated with the previous value stored in {remote_address}.
 * The btl will guarantee consistency of atomic operations performed via the btl. Note,
 * however, that not all btls will provide consistency between btl atomic operations and
 * cpu atomics.
 */
typedef int (*mca_btl_base_module_atomic_cswap_fn_t) (struct mca_btl_base_module_t *btl,
    struct mca_btl_base_endpoint_t *endpoint, void *local_address, uint64_t remote_address,
    struct mca_btl_base_registration_handle_t *local_handle,
    struct mca_btl_base_registration_handle_t *remote_handle, uint64_t compare,
    uint64_t value, int flags, int order, mca_btl_base_rdma_completion_fn_t cbfunc,
    void *cbcontext, void *cbdata);


/**
 * Diagnostic dump of btl state.
 *
 * @param btl (IN)      BTL module
 * @param endpoint (IN) BTL endpoint
 * @param verbose (IN)  Verbosity level
 */
typedef void (*mca_btl_base_module_dump_fn_t)(
    struct mca_btl_base_module_t* btl,
    struct mca_btl_base_endpoint_t* endpoint,
    int verbose
);


/**
 * Fault Tolerance Event Notification Function
 *
 * @param state Checkpoint Status
 * @return OPAL_SUCCESS or failure status
 */
typedef int (*mca_btl_base_module_ft_event_fn_t)(int state);


/**
 * BTL module interface functions and attributes.
 */
struct mca_btl_base_module_t {

    /* BTL common attributes */
    mca_btl_base_component_t* btl_component; /**< pointer back to the BTL component structure */
    size_t      btl_eager_limit;      /**< maximum size of first fragment -- eager send */
    size_t      btl_rndv_eager_limit; /**< the size of the data sent in the first fragment of the rendezvous protocol */
    size_t      btl_max_send_size;    /**< maximum send fragment size supported by the BTL */
    size_t      btl_rdma_pipeline_send_length; /**< amount of bytes that should be sent by the pipeline protocol */
    size_t      btl_rdma_pipeline_frag_size;   /**< maximum rdma fragment size supported by the BTL */
    size_t      btl_min_rdma_pipeline_size;    /**< minimum packet size for pipeline protocol */
    uint32_t    btl_exclusivity;      /**< indicates this BTL should be used exclusively */
    uint32_t    btl_latency;          /**< relative ranking of latency used to prioritize btls */
    uint32_t    btl_bandwidth;        /**< bandwidth (Mbytes/sec) supported by each endpoint */
    uint32_t    btl_flags;            /**< flags (put/get...) */
    uint32_t    btl_atomic_flags;     /**< atomic operations supported (add, and, xor, etc) */
    size_t      btl_registration_handle_size; /**< size of the BTL's registration handles */

    /* One-sided limitations (0 for no alignment, SIZE_MAX for no limit ) */
    size_t      btl_get_limit;        /**< maximum size supported by the btl_get function */
    size_t      btl_get_alignment;    /**< minimum alignment/size needed by btl_get (power of 2) */
    size_t      btl_put_limit;        /**< maximum size supported by the btl_put function */
    size_t      btl_put_alignment;    /**< minimum alignment/size needed by btl_put (power of 2) */

    /* BTL function table */
    mca_btl_base_module_add_procs_fn_t      btl_add_procs;
    mca_btl_base_module_del_procs_fn_t      btl_del_procs;
    mca_btl_base_module_register_fn_t       btl_register;
    mca_btl_base_module_finalize_fn_t       btl_finalize;

    mca_btl_base_module_alloc_fn_t          btl_alloc;
    mca_btl_base_module_free_fn_t           btl_free;
    mca_btl_base_module_prepare_fn_t        btl_prepare_src;
    mca_btl_base_module_send_fn_t           btl_send;
    mca_btl_base_module_sendi_fn_t          btl_sendi;
    mca_btl_base_module_put_fn_t            btl_put;
    mca_btl_base_module_get_fn_t            btl_get;
    mca_btl_base_module_dump_fn_t           btl_dump;

    /* atomic operations */
    mca_btl_base_module_atomic_op64_fn_t    btl_atomic_op;
    mca_btl_base_module_atomic_fop64_fn_t   btl_atomic_fop;
    mca_btl_base_module_atomic_cswap_fn_t   btl_atomic_cswap;

    /* new memory registration functions */
    mca_btl_base_module_register_mem_fn_t   btl_register_mem;   /**< memory registration function (NULL if not needed) */
    mca_btl_base_module_deregister_mem_fn_t btl_deregister_mem; /**< memory deregistration function (NULL if not needed) */

    /** the mpool associated with this btl (optional) */
    mca_mpool_base_module_t*                btl_mpool;
    /** register a default error handler */
    mca_btl_base_module_register_error_fn_t btl_register_error;
    /** fault tolerant event notification */
    mca_btl_base_module_ft_event_fn_t       btl_ft_event;
#if OPAL_CUDA_GDR_SUPPORT
    size_t      btl_cuda_eager_limit; /**< switch from eager to RDMA */
    size_t      btl_cuda_rdma_limit;  /**< switch from RDMA to rndv pipeline */
#endif /* OPAL_CUDA_GDR_SUPPORT */
};
typedef struct mca_btl_base_module_t mca_btl_base_module_t;
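
/*
 * Illustrative sketch (assumed usage): an upper layer consulting the module
 * attributes above before choosing a protocol.  The field names are the ones
 * declared in the structure; the selection logic and "msg_len" are
 * hypothetical.
 *
 *   if (NULL != btl->btl_put && msg_len <= btl->btl_put_limit) {
 *       ... register memory (if btl_register_mem is non-NULL) and use btl_put ...
 *   } else if (msg_len <= btl->btl_eager_limit) {
 *       ... use a single eager btl_send or btl_sendi ...
 *   } else {
 *       ... split the message into btl_max_send_size fragments ...
 *   }
 */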


/*
 * Macro for use in modules that are of type btl v3.0.0
 * NOTE: This is not the final version of 3.0.0. Consider it
 * alpha until this comment is removed.
 */
#define MCA_BTL_BASE_VERSION_3_0_0          \
    MCA_BASE_VERSION_2_0_0,                 \
    .mca_type_name = "btl",                 \
    .mca_type_major_version = 3,            \
    .mca_type_minor_version = 0,            \
    .mca_type_release_version = 0

#define MCA_BTL_DEFAULT_VERSION(name)                      \
    MCA_BTL_BASE_VERSION_3_0_0,                            \
    .mca_component_name = name,                            \
    .mca_component_major_version = OPAL_MAJOR_VERSION,     \
    .mca_component_minor_version = OPAL_MINOR_VERSION,     \
    .mca_component_release_version = OPAL_RELEASE_VERSION
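
/*
 * Illustrative sketch (assumed usage): how a BTL component might fill in the
 * MCA versioning fields of its component structure with the macro above.
 * The component name "example", the variable name and the surrounding
 * initializer are hypothetical; mca_btl_base_component_3_0_0_t and its
 * btl_version member are assumed from the component section of this file.
 *
 *   mca_btl_base_component_3_0_0_t mca_btl_example_component = {
 *       .btl_version = {
 *           MCA_BTL_DEFAULT_VERSION("example"),
 *       },
 *       ... remaining component fields ...
 *   };
 */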

END_C_DECLS

#endif /* OPAL_MCA_BTL_H */