/* -*- Mode: C; c-basic-offset:4 ; indent-tabs-mode:nil -*- */
/*
 * Copyright (c) 2004-2011 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation. All rights reserved.
 * Copyright (c) 2004-2014 The University of Tennessee and The University
 *                         of Tennessee Research Foundation. All rights
 *                         reserved.
 * Copyright (c) 2004-2007 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart. All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * Copyright (c) 2006-2007 Voltaire. All rights reserved.
 * Copyright (c) 2009-2012 Cisco Systems, Inc. All rights reserved.
 * Copyright (c) 2010-2014 Los Alamos National Security, LLC.
 *                         All rights reserved.
 * Copyright (c) 2012-2014 NVIDIA Corporation. All rights reserved.
 * Copyright (c) 2012      Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2014      Research Organization for Information Science
 *                         and Technology (RIST). All rights reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
*/

#include "opal_config.h"

#include <sys/types.h>
#include <sys/stat.h>
#ifdef HAVE_FCNTL_H
#include <fcntl.h>
#endif /* HAVE_FCNTL_H */
#include <errno.h>
#ifdef HAVE_SYS_MMAN_H
#include <sys/mman.h>
#endif /* HAVE_SYS_MMAN_H */

#ifdef OPAL_BTL_SM_CMA_NEED_SYSCALL_DEFS
#include "opal/sys/cma.h"
#endif /* OPAL_BTL_SM_CMA_NEED_SYSCALL_DEFS */

#include "opal/sys/atomic.h"
#include "opal/class/opal_bitmap.h"
#include "opal/util/output.h"
#include "opal/util/show_help.h"
#include "opal/util/printf.h"
#include "opal/mca/hwloc/base/base.h"
#include "opal/mca/shmem/base/base.h"
#include "opal/mca/shmem/shmem.h"
#include "opal/datatype/opal_convertor.h"
#include "opal/class/ompi_free_list.h"
#include "opal/mca/btl/btl.h"

#if OPAL_CUDA_SUPPORT
#include "opal/mca/common/cuda/common_cuda.h"
#endif /* OPAL_CUDA_SUPPORT */
#include "opal/mca/mpool/base/base.h"
#include "opal/mca/mpool/sm/mpool_sm.h"

#if OPAL_ENABLE_FT_CR == 1
#include "opal/mca/crs/base/base.h"
#include "opal/util/basename.h"
#include "orte/mca/sstore/sstore.h"
#include "opal/runtime/opal_cr.h"
#endif

#include "btl_smcuda.h"
#include "btl_smcuda_endpoint.h"
#include "btl_smcuda_frag.h"
#include "btl_smcuda_fifo.h"

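/*
 * The smcuda BTL module template: the function-pointer table for the BTL
 * interface. btl_prepare_dst is only provided when CUDA, KNEM, or CMA
 * support is compiled in (see the #if guard below).
 */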
mca_btl_smcuda_t mca_btl_smcuda = {
    .super = {
        .btl_component = &mca_btl_smcuda_component.super,
        .btl_add_procs = mca_btl_smcuda_add_procs,
        .btl_del_procs = mca_btl_smcuda_del_procs,
        .btl_finalize = mca_btl_smcuda_finalize,
        .btl_alloc = mca_btl_smcuda_alloc,
        .btl_free = mca_btl_smcuda_free,
        .btl_prepare_src = mca_btl_smcuda_prepare_src,
#if OPAL_CUDA_SUPPORT || OPAL_BTL_SM_HAVE_KNEM || OPAL_BTL_SM_HAVE_CMA
        .btl_prepare_dst = mca_btl_smcuda_prepare_dst,
#endif /* OPAL_CUDA_SUPPORT || OPAL_BTL_SM_HAVE_KNEM || OPAL_BTL_SM_HAVE_CMA */
        .btl_send = mca_btl_smcuda_send,
        .btl_sendi = mca_btl_smcuda_sendi,
        .btl_dump = mca_btl_smcuda_dump,
        .btl_register_error = mca_btl_smcuda_register_error_cb,
        .btl_ft_event = mca_btl_smcuda_ft_event
    }
};

#if OPAL_CUDA_SUPPORT
static void mca_btl_smcuda_send_cuda_ipc_request(struct mca_btl_base_module_t* btl,
                                                 struct mca_btl_base_endpoint_t* endpoint);
#endif /* OPAL_CUDA_SUPPORT */

/*
 * calculate offset of an address from the beginning of a shared memory segment
 */
#define ADDR2OFFSET(ADDR, BASE) ((char*)(ADDR) - (char*)(BASE))

/*
 * calculate an absolute address in a local address space given an offset and
 * a base address of a shared memory segment
 */
#define OFFSET2ADDR(OFFSET, BASE) ((ptrdiff_t)(OFFSET) + (char*)(BASE))
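
/*
 * Usage sketch (hypothetical names, for illustration only): a pointer is
 * exchanged between processes as a segment-relative offset and rebuilt
 * against each process' own mapping of the segment:
 *
 *     ptrdiff_t off  = ADDR2OFFSET(ptr, my_seg_base);
 *     void     *addr = OFFSET2ADDR(off, my_seg_base);  // == ptr locally
 */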
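
/*
 * calloc-style allocation from the shared-memory mpool: returns a zeroed,
 * cache-line-aligned buffer of nmemb * size bytes, or NULL if the mpool
 * allocation fails.
 */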
static void *mpool_calloc(size_t nmemb, size_t size)
{
    void *buf;
    size_t bsize = nmemb * size;
    mca_mpool_base_module_t *mpool = mca_btl_smcuda_component.sm_mpool;

    buf = mpool->mpool_alloc(mpool, bsize, opal_cache_line_size, 0, NULL);

    if (NULL == buf)
        return NULL;

    memset(buf, 0, bsize);
    return buf;
}
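
/*
 * Read the mpool rendezvous file (sm_mpool_rndv_file_name): an
 * opal_shmem_ds_t descriptor followed by the segment size. The results are
 * returned in out_res so the caller can set up its shared-memory mpool.
 */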
static int
setup_mpool_base_resources(mca_btl_smcuda_component_t *comp_ptr,
                           mca_mpool_base_resources_t *out_res)
{
    int rc = OPAL_SUCCESS;
    int fd = -1;
    ssize_t bread = 0;

    if (-1 == (fd = open(comp_ptr->sm_mpool_rndv_file_name, O_RDONLY))) {
        int err = errno;
        opal_show_help("help-mpi-btl-smcuda.txt", "sys call fail", true,
                       "open(2)", strerror(err), err);
        rc = OPAL_ERR_IN_ERRNO;
        goto out;
    }
    if ((ssize_t)sizeof(opal_shmem_ds_t) != (bread =
        read(fd, &out_res->bs_meta_buf, sizeof(opal_shmem_ds_t)))) {
        opal_output(0, "setup_mpool_base_resources: "
                    "Read inconsistency -- read: %lu, but expected: %lu!\n",
                    (unsigned long)bread,
                    (unsigned long)sizeof(opal_shmem_ds_t));
        rc = OPAL_ERROR;
        goto out;
    }
    if ((ssize_t)sizeof(out_res->size) != (bread =
        read(fd, &out_res->size, sizeof(size_t)))) {
        opal_output(0, "setup_mpool_base_resources: "
                    "Read inconsistency -- read: %lu, but expected: %lu!\n",
                    (unsigned long)bread,
                    (unsigned long)sizeof(out_res->size));
        rc = OPAL_ERROR;
        goto out;
    }

out:
    if (-1 != fd) {
        (void)close(fd);
    }
    return rc;
}
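
/*
 * Attach to the shared-memory segment advertised in the sm rendezvous file:
 * read the opal_shmem_ds_t descriptor from sm_rndv_file_name and attach to
 * the segment with mca_common_sm_module_attach(), storing the result in
 * comp_ptr->sm_seg.
 */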
static int
sm_segment_attach(mca_btl_smcuda_component_t *comp_ptr)
{
    int rc = OPAL_SUCCESS;
    int fd = -1;
    ssize_t bread = 0;
    opal_shmem_ds_t *tmp_shmem_ds = calloc(1, sizeof(*tmp_shmem_ds));

    if (NULL == tmp_shmem_ds) {
        return OPAL_ERR_OUT_OF_RESOURCE;
    }
    if (-1 == (fd = open(comp_ptr->sm_rndv_file_name, O_RDONLY))) {
        int err = errno;
        opal_show_help("help-mpi-btl-smcuda.txt", "sys call fail", true,
                       "open(2)", strerror(err), err);
        rc = OPAL_ERR_IN_ERRNO;
        goto out;
    }
    if ((ssize_t)sizeof(opal_shmem_ds_t) != (bread =
        read(fd, tmp_shmem_ds, sizeof(opal_shmem_ds_t)))) {
        opal_output(0, "sm_segment_attach: "
                    "Read inconsistency -- read: %lu, but expected: %lu!\n",
                    (unsigned long)bread,
                    (unsigned long)sizeof(opal_shmem_ds_t));
        rc = OPAL_ERROR;
        goto out;
    }
    if (NULL == (comp_ptr->sm_seg =
                 mca_common_sm_module_attach(tmp_shmem_ds,
                                             sizeof(mca_common_sm_seg_header_t),
                                             opal_cache_line_size))) {
        /* don't have to detach here, because module_attach cleans up after
         * itself on failure. */
        opal_output(0, "sm_segment_attach: "
                    "mca_common_sm_module_attach failure!\n");
        rc = OPAL_ERROR;
    }

out:
    if (-1 != fd) {
        (void)close(fd);
    }
    if (tmp_shmem_ds) {
        free(tmp_shmem_ds);
    }
    return rc;
}
|
2012-05-15 19:32:33 +04:00
|
|
|
|
2013-01-14 18:42:19 +04:00
|
|
|
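
/*
 * One-time initialization for the smcuda BTL: determine which NUMA node (if
 * any) this process is bound to when hwloc information is available, then
 * set up the shared-memory mpool resources read from the rendezvous file.
 */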
static int
smcuda_btl_first_time_init(mca_btl_smcuda_t *smcuda_btl,
                           int32_t my_smp_rank,
                           int n)
{
    size_t length, length_payload;
    sm_fifo_t *my_fifos;
    int my_mem_node, num_mem_nodes, i, rc;
    mca_mpool_base_resources_t *res = NULL;
    mca_btl_smcuda_component_t* m = &mca_btl_smcuda_component;

    /* Assume we don't have hwloc support and fill in dummy info */
    mca_btl_smcuda_component.mem_node = my_mem_node = 0;
    mca_btl_smcuda_component.num_mem_nodes = num_mem_nodes = 1;

#if OPAL_HAVE_HWLOC
    /* If we have hwloc support, then get accurate information */
    if (NULL != opal_hwloc_topology) {
        i = opal_hwloc_base_get_nbobjs_by_type(opal_hwloc_topology,
                                               HWLOC_OBJ_NODE, 0,
                                               OPAL_HWLOC_AVAILABLE);

        /* If we find >0 NUMA nodes, then investigate further */
        if (i > 0) {
            int numa=0, w;
            unsigned n_bound=0;
            hwloc_cpuset_t avail;
            hwloc_obj_t obj;

            /* JMS This tells me how many numa nodes are *available*,
               but it's not how many are being used *by this job*.
               Note that this is the value we've previously used (from
               the previous carto-based implementation), but it really
               should be improved to be how many NUMA nodes are being
               used *in this job*. */
            mca_btl_smcuda_component.num_mem_nodes = num_mem_nodes = i;

            /* if we are not bound, then there is nothing further to do */
            if (NULL != opal_process_info.cpuset) {
                /* count the number of NUMA nodes to which we are bound */
                for (w=0; w < i; w++) {
                    if (NULL == (obj = opal_hwloc_base_get_obj_by_type(opal_hwloc_topology,
                                                                       HWLOC_OBJ_NODE, 0, w,
                                                                       OPAL_HWLOC_AVAILABLE))) {
                        continue;
                    }
                    /* get that NUMA node's available cpus */
                    avail = opal_hwloc_base_get_available_cpus(opal_hwloc_topology, obj);
                    /* see if we intersect */
                    if (hwloc_bitmap_intersects(avail, opal_hwloc_my_cpuset)) {
                        n_bound++;
                        numa = w;
                    }
                }
                /* if we are located on more than one NUMA, or we didn't find
                 * a NUMA we are on, then not much we can do
                 */
                if (1 == n_bound) {
                    mca_btl_smcuda_component.mem_node = my_mem_node = numa;
                } else {
                    mca_btl_smcuda_component.mem_node = my_mem_node = -1;
                }
            }
        }
    }
#endif

    if (NULL == (res = calloc(1, sizeof(*res)))) {
        return OPAL_ERR_OUT_OF_RESOURCE;
    }

    /* lookup shared memory pool */
    mca_btl_smcuda_component.sm_mpools =
        (mca_mpool_base_module_t **)calloc(num_mem_nodes,
                                           sizeof(mca_mpool_base_module_t *));

    /* Disable memory binding, because each MPI process will claim pages in the
     * mpool for their local NUMA node */
    res->mem_node = -1;

    if (OPAL_SUCCESS != (rc = setup_mpool_base_resources(m, res))) {
        free(res);
        return rc;
Per RFC, bring in the following changes:
* Remove paffinity, maffinity, and carto frameworks -- they've been
wholly replaced by hwloc.
* Move ompi_mpi_init() affinity-setting/checking code down to ORTE.
* Update sm, smcuda, wv, and openib components to no longer use carto.
Instead, use hwloc data. There are still optimizations possible in
the sm/smcuda BTLs (i.e., making multiple mpools). Also, the old
carto-based code found out how many NUMA nodes were ''available''
-- not how many were used ''in this job''. The new hwloc-using
code computes the same value -- it was not updated to calculate how
many NUMA nodes are used ''by this job.''
* Note that I cannot compile the smcuda and wv BTLs -- I ''think''
they're right, but they need to be verified by their owners.
* The openib component now does a bunch of stuff to figure out where
"near" OpenFabrics devices are. '''THIS IS A CHANGE IN DEFAULT
BEHAVIOR!!''' and still needs to be verified by OpenFabrics vendors
(I do not have a NUMA machine with an OpenFabrics device that is a
non-uniform distance from multiple different NUMA nodes).
* Completely rewrite the OMPI_Affinity_str() routine from the
"affinity" mpiext extension. This extension now understands
hyperthreads; the output format of it has changed a bit to reflect
this new information.
* Bunches of minor changes around the code base to update names/types
from maffinity/paffinity-based names to hwloc-based names.
* Add some helper functions into the hwloc base, mainly having to do
with the fact that we have the hwloc data reporting ''all''
topology information, but sometimes you really only want the
(online | available) data.
This commit was SVN r26391.
2012-05-07 18:52:54 +04:00
|
|
|
}

    /* now that res is fully populated, create the thing */
    mca_btl_smcuda_component.sm_mpools[0] =
        mca_mpool_base_module_create(mca_btl_smcuda_component.sm_mpool_name,
                                     smcuda_btl, res);
    /* Sanity check to ensure that we found it */
    if (NULL == mca_btl_smcuda_component.sm_mpools[0]) {
        free(res);
        return OPAL_ERR_OUT_OF_RESOURCE;
    }
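
    /* Cache the first mpool and the base address of its shared memory
     * segment; the address bookkeeping below is done relative to this base. */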
    mca_btl_smcuda_component.sm_mpool = mca_btl_smcuda_component.sm_mpools[0];

    mca_btl_smcuda_component.sm_mpool_base =
        mca_btl_smcuda_component.sm_mpools[0]->mpool_base(mca_btl_smcuda_component.sm_mpools[0]);

    /* create a list of peers */
    mca_btl_smcuda_component.sm_peers = (struct mca_btl_base_endpoint_t**)
        calloc(n, sizeof(struct mca_btl_base_endpoint_t*));
    if (NULL == mca_btl_smcuda_component.sm_peers) {
        free(res);
        return OPAL_ERR_OUT_OF_RESOURCE;
    }

    /* remember that node rank zero is already attached */
    if (0 != my_smp_rank) {
        if (OPAL_SUCCESS != (rc = sm_segment_attach(m))) {
            free(res);
            return rc;
        }
    }

#if OPAL_CUDA_SUPPORT
    /* Register the entire shared memory region with the CUDA library, which
     * forces it to be pinned. This approach was chosen because there is no way
     * for this local process to know which parts of the memory are being
     * utilized by a remote process. */
    opal_output_verbose(10, opal_btl_base_framework.framework_output,
                        "btl:smcuda: CUDA cuMemHostRegister address=%p, size=%d",
                        mca_btl_smcuda_component.sm_mpool_base, (int)res->size);
    mca_common_cuda_register(mca_btl_smcuda_component.sm_mpool_base, res->size, "smcuda");

    /* Create a local memory pool that sends handles to the remote
     * side. Note that the res argument is not really used, but
     * needed to satisfy the function signature. */
    smcuda_btl->super.btl_mpool = mca_mpool_base_module_create("gpusm",
                                                               smcuda_btl,
                                                               res);
    if (NULL == smcuda_btl->super.btl_mpool) {
        return OPAL_ERR_OUT_OF_RESOURCE;
    }
#endif /* OPAL_CUDA_SUPPORT */
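    /* Note: the "gpusm" pool created above is the sending side of this handle
     * exchange; the per-endpoint "rgpusm" pool created in create_sm_endpoint()
     * below appears to be its receiving-side counterpart for handles obtained
     * from remote processes. */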

    /* it is now safe to free the mpool resources */
    free(res);

    /* check to make sure number of local procs is within the
     * specified limits */
    if (mca_btl_smcuda_component.sm_max_procs > 0 &&
        mca_btl_smcuda_component.num_smp_procs + n >
        mca_btl_smcuda_component.sm_max_procs) {
        return OPAL_ERROR;
    }
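
    /* The control portion of the shared memory segment (module_data_addr) is
     * laid out as three consecutive arrays of n entries each: the per-process
     * FIFO pointers, the per-process segment base addresses, and the
     * per-process memory node ids. */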
    mca_btl_smcuda_component.shm_fifo = (volatile sm_fifo_t **)mca_btl_smcuda_component.sm_seg->module_data_addr;
    mca_btl_smcuda_component.shm_bases = (char**)(mca_btl_smcuda_component.shm_fifo + n);
    mca_btl_smcuda_component.shm_mem_nodes = (uint16_t*)(mca_btl_smcuda_component.shm_bases + n);

    /* set the base of the shared memory segment */
    mca_btl_smcuda_component.shm_bases[mca_btl_smcuda_component.my_smp_rank] =
        (char*)mca_btl_smcuda_component.sm_mpool_base;
    mca_btl_smcuda_component.shm_mem_nodes[mca_btl_smcuda_component.my_smp_rank] =
        (uint16_t)my_mem_node;

    /* initialize the array of fifo's "owned" by this process */
    if (NULL == (my_fifos = (sm_fifo_t*)mpool_calloc(FIFO_MAP_NUM(n), sizeof(sm_fifo_t))))
        return OPAL_ERR_OUT_OF_RESOURCE;

    mca_btl_smcuda_component.shm_fifo[mca_btl_smcuda_component.my_smp_rank] = my_fifos;

    /* cache the pointer to the 2d fifo array. These addresses
     * are valid in the current process space */
    mca_btl_smcuda_component.fifo = (sm_fifo_t**)malloc(sizeof(sm_fifo_t*) * n);
    if (NULL == mca_btl_smcuda_component.fifo)
        return OPAL_ERR_OUT_OF_RESOURCE;

    mca_btl_smcuda_component.fifo[mca_btl_smcuda_component.my_smp_rank] = my_fifos;

    mca_btl_smcuda_component.mem_nodes = (uint16_t *) malloc(sizeof(uint16_t) * n);
    if (NULL == mca_btl_smcuda_component.mem_nodes)
        return OPAL_ERR_OUT_OF_RESOURCE;

    /* initialize fragment descriptor free lists */
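    /* Three lists are created below: sm_frags_eager (payloads up to the eager
     * limit), sm_frags_max (payloads up to the maximum fragment size), and
     * sm_frags_user (header-only descriptors used with user buffers). */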

    /* allocation will be for the fragment descriptor and payload buffer */
    length = sizeof(mca_btl_smcuda_frag1_t);
    length_payload =
        sizeof(mca_btl_smcuda_hdr_t) + mca_btl_smcuda_component.eager_limit;
    i = ompi_free_list_init_new(&mca_btl_smcuda_component.sm_frags_eager, length,
                                opal_cache_line_size, OBJ_CLASS(mca_btl_smcuda_frag1_t),
                                length_payload, opal_cache_line_size,
                                mca_btl_smcuda_component.sm_free_list_num,
                                mca_btl_smcuda_component.sm_free_list_max,
                                mca_btl_smcuda_component.sm_free_list_inc,
                                mca_btl_smcuda_component.sm_mpool);
    if (OPAL_SUCCESS != i)
        return i;

    length = sizeof(mca_btl_smcuda_frag2_t);
    length_payload =
        sizeof(mca_btl_smcuda_hdr_t) + mca_btl_smcuda_component.max_frag_size;
    i = ompi_free_list_init_new(&mca_btl_smcuda_component.sm_frags_max, length,
                                opal_cache_line_size, OBJ_CLASS(mca_btl_smcuda_frag2_t),
                                length_payload, opal_cache_line_size,
                                mca_btl_smcuda_component.sm_free_list_num,
                                mca_btl_smcuda_component.sm_free_list_max,
                                mca_btl_smcuda_component.sm_free_list_inc,
                                mca_btl_smcuda_component.sm_mpool);
    if (OPAL_SUCCESS != i)
        return i;

    i = ompi_free_list_init_new(&mca_btl_smcuda_component.sm_frags_user,
                                sizeof(mca_btl_smcuda_user_t),
                                opal_cache_line_size, OBJ_CLASS(mca_btl_smcuda_user_t),
                                sizeof(mca_btl_smcuda_hdr_t), opal_cache_line_size,
                                mca_btl_smcuda_component.sm_free_list_num,
                                mca_btl_smcuda_component.sm_free_list_max,
                                mca_btl_smcuda_component.sm_free_list_inc,
                                mca_btl_smcuda_component.sm_mpool);
    if (OPAL_SUCCESS != i)
        return i;

    mca_btl_smcuda_component.num_outstanding_frags = 0;

    mca_btl_smcuda_component.num_pending_sends = 0;
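    /* Free list of bookkeeping items for sends that could not be delivered
     * immediately: 16 initial elements, no upper bound (-1), grown 32 at a
     * time. */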
    i = opal_free_list_init(&mca_btl_smcuda_component.pending_send_fl,
                            sizeof(btl_smcuda_pending_send_item_t),
                            OBJ_CLASS(opal_free_list_item_t),
                            16, -1, 32);
    if (OPAL_SUCCESS != i)
        return i;

    /* set flag indicating btl has been inited */
    smcuda_btl->btl_inited = true;

    return OPAL_SUCCESS;
}

static struct mca_btl_base_endpoint_t *
create_sm_endpoint(int local_proc, struct opal_proc_t *proc)
{
    struct mca_btl_base_endpoint_t *ep;
#if OPAL_ENABLE_PROGRESS_THREADS == 1
    char path[PATH_MAX];
#endif

    ep = (struct mca_btl_base_endpoint_t*)
        malloc(sizeof(struct mca_btl_base_endpoint_t));
    if (NULL == ep)
        return NULL;
    ep->peer_smp_rank = local_proc + mca_btl_smcuda_component.num_smp_procs;

    OBJ_CONSTRUCT(&ep->pending_sends, opal_list_t);
    OBJ_CONSTRUCT(&ep->endpoint_lock, opal_mutex_t);
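
    /* With progress threads enabled, open the peer's named pipe write-only;
     * ep->fifo_fd can then be used to signal that peer (e.g. to wake its
     * progress thread) when work is queued for it. */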
#if OPAL_ENABLE_PROGRESS_THREADS == 1
    sprintf(path, "%s"OPAL_PATH_SEP"sm_fifo.%lu",
            opal_process_info.job_session_dir,
            (unsigned long)proc->proc_name);
    ep->fifo_fd = open(path, O_WRONLY);
    if (ep->fifo_fd < 0) {
        opal_output(0, "mca_btl_smcuda_add_procs: open(%s) failed with errno=%d\n",
                    path, errno);
        free(ep);
        return NULL;
    }
#endif
#if OPAL_CUDA_SUPPORT
    {
        mca_mpool_base_resources_t resources; /* unused, but needed */

        /* Create a remote memory pool on the endpoint. Note that the resources
         * argument is just to satisfy the function signature; the rgpusm mpool
         * actually takes care of filling in the resources. */
        ep->mpool = mca_mpool_base_module_create("rgpusm",
                                                 NULL,
                                                 &resources);
    }
#endif /* OPAL_CUDA_SUPPORT */
    return ep;
}
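
/*
 * Identify which of the given procs are reachable over shared memory, create
 * an sm endpoint for each local peer, and mark those peers in the
 * reachability bitmap.
 */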
int mca_btl_smcuda_add_procs(
    struct mca_btl_base_module_t* btl,
    size_t nprocs,
    struct opal_proc_t **procs,
    struct mca_btl_base_endpoint_t **peers,
    opal_bitmap_t* reachability)
{
    int return_code = OPAL_SUCCESS;
    int32_t n_local_procs = 0, proc, j, my_smp_rank = -1;
    const opal_proc_t* my_proc; /* pointer to caller's proc structure */
    mca_btl_smcuda_t *smcuda_btl;
    bool have_connected_peer = false;
    char **bases;
    /* for easy access to the mpool_sm_module */
    mca_mpool_sm_module_t *sm_mpool_modp = NULL;

    /* initialization */
    smcuda_btl = (mca_btl_smcuda_t *)btl;

    /* get pointer to my proc structure */
    if (NULL == (my_proc = opal_proc_local_get()))
        return OPAL_ERR_OUT_OF_RESOURCE;

    /* Get a unique host identifier for each process in the list,
     * and identify procs that are on this host. Add procs on this
     * host to the shared memory reachability list. Also, get the
     * number of local procs in the procs list. */
    for (proc = 0; proc < (int32_t)nprocs; proc++) {
        /* check to see if this proc can be reached via shmem (i.e.,
           if they're on my local host and in my job) */
        if (procs[proc]->proc_name.jobid != my_proc->proc_name.jobid ||
            !OPAL_PROC_ON_LOCAL_NODE(procs[proc]->proc_flags)) {
            peers[proc] = NULL;
            continue;
        }

        /* check to see if this is me */
        if (my_proc == procs[proc]) {
            my_smp_rank = mca_btl_smcuda_component.my_smp_rank = n_local_procs++;
            continue;
        }

        /* we have someone to talk to */
        have_connected_peer = true;

        if (!(peers[proc] = create_sm_endpoint(n_local_procs, procs[proc]))) {
            return_code = OPAL_ERROR;
            goto CLEANUP;
        }
|
2013-11-01 16:19:40 +04:00
|
|
|
#if OPAL_CUDA_SUPPORT
|
2014-07-26 04:47:28 +04:00
|
|
|
peers[proc]->proc_opal = procs[proc];
|
2013-08-22 01:00:09 +04:00
|
|
|
peers[proc]->ipcstate = IPC_INIT;
|
|
|
|
peers[proc]->ipctries = 0;
|
2013-11-01 16:19:40 +04:00
|
|
|
#endif /* OPAL_CUDA_SUPPORT */
|
2012-02-24 06:13:33 +04:00
|
|
|
n_local_procs++;
|
|
|
|
|
|
|
|
/* add this proc to shared memory accessibility list */
|
|
|
|
return_code = opal_bitmap_set_bit(reachability, proc);
|
2014-07-26 04:47:28 +04:00
|
|
|
if(OPAL_SUCCESS != return_code)
|
2012-02-24 06:13:33 +04:00
|
|
|
goto CLEANUP;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* jump out if there's not someone we can talk to */
|
|
|
|
if (!have_connected_peer)
|
|
|
|
goto CLEANUP;
|
|
|
|
|
|
|
|
/* make sure that my_smp_rank has been defined */
|
2013-01-14 18:42:19 +04:00
|
|
|
if (-1 == my_smp_rank) {
|
2014-07-26 04:47:28 +04:00
|
|
|
return_code = OPAL_ERROR;
|
2012-02-24 06:13:33 +04:00
|
|
|
goto CLEANUP;
|
|
|
|
}
|
|
|
|
|
|
|
|
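/* The shared memory resources are created lazily: the first add_procs call
 * that finds local peers performs the one-time initialization below. */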
if (!smcuda_btl->btl_inited) {
|
|
|
|
return_code =
|
2013-01-14 18:42:19 +04:00
|
|
|
smcuda_btl_first_time_init(smcuda_btl, my_smp_rank,
|
|
|
|
mca_btl_smcuda_component.sm_max_procs);
|
2014-07-26 04:47:28 +04:00
|
|
|
if (return_code != OPAL_SUCCESS) {
|
2012-02-24 06:13:33 +04:00
|
|
|
goto CLEANUP;
|
2013-01-14 18:42:19 +04:00
|
|
|
}
|
2012-02-24 06:13:33 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
/* set local proc's smp rank in the peers structure for
|
|
|
|
* rapid access and calculate reachability */
|
|
|
|
for(proc = 0; proc < (int32_t)nprocs; proc++) {
|
|
|
|
if(NULL == peers[proc])
|
|
|
|
continue;
|
|
|
|
mca_btl_smcuda_component.sm_peers[peers[proc]->peer_smp_rank] = peers[proc];
|
|
|
|
peers[proc]->my_smp_rank = my_smp_rank;
|
|
|
|
}
|
|
|
|
|
|
|
|
bases = mca_btl_smcuda_component.shm_bases;
|
2013-01-14 18:42:19 +04:00
|
|
|
sm_mpool_modp = (mca_mpool_sm_module_t *)mca_btl_smcuda_component.sm_mpool;
|
2012-02-24 06:13:33 +04:00
|
|
|
|
|
|
|
/* initialize own FIFOs */
|
|
|
|
/*
|
|
|
|
* The receiver initializes all its FIFOs. All components will
|
|
|
|
* be allocated near the receiver. Nothing will be local to
|
|
|
|
* "the sender" since there will be many senders.
|
|
|
|
*/
|
|
|
|
for(j = mca_btl_smcuda_component.num_smp_procs;
|
|
|
|
j < mca_btl_smcuda_component.num_smp_procs + FIFO_MAP_NUM(n_local_procs); j++) {
|
|
|
|
|
|
|
|
return_code = sm_fifo_init( mca_btl_smcuda_component.fifo_size,
|
|
|
|
mca_btl_smcuda_component.sm_mpool,
|
|
|
|
&mca_btl_smcuda_component.fifo[my_smp_rank][j],
|
|
|
|
mca_btl_smcuda_component.fifo_lazy_free);
|
2014-07-26 04:47:28 +04:00
|
|
|
if(return_code != OPAL_SUCCESS)
|
2012-02-24 06:13:33 +04:00
|
|
|
goto CLEANUP;
|
|
|
|
}
|
|
|
|
|
|
|
|
opal_atomic_wmb();
|
|
|
|
|
|
|
|
/* Sync with other local procs. Force the FIFO initialization to always
|
|
|
|
* happen before the readers access it.
|
|
|
|
*/
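/* Together with the wmb() above, this acts as a simple barrier among the
 * local processes: each one publishes its FIFO initialization by bumping
 * seg_inited and then spins until every local process has done the same. */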
|
2014-10-16 05:47:32 +04:00
|
|
|
(void)opal_atomic_add_32(&mca_btl_smcuda_component.sm_seg->module_seg->seg_inited, 1);
|
2012-02-24 06:13:33 +04:00
|
|
|
while( n_local_procs >
|
|
|
|
mca_btl_smcuda_component.sm_seg->module_seg->seg_inited) {
|
|
|
|
opal_progress();
|
|
|
|
opal_atomic_rmb();
|
|
|
|
}
|
|
|
|
|
2013-01-14 18:42:19 +04:00
|
|
|
/* it is now safe to unlink the shared memory segment. only one process
|
|
|
|
* needs to do this, so just let smp rank zero take care of it. */
|
|
|
|
if (0 == my_smp_rank) {
|
2014-07-26 04:47:28 +04:00
|
|
|
if (OPAL_SUCCESS !=
|
2013-01-14 18:42:19 +04:00
|
|
|
mca_common_sm_module_unlink(mca_btl_smcuda_component.sm_seg)) {
|
|
|
|
/* it is "okay" if this fails at this point. we have gone this far,
|
|
|
|
* so just warn about the failure and continue. this is probably
|
|
|
|
* only triggered by a programming error. */
|
|
|
|
opal_output(0, "WARNING: common_sm_module_unlink failed.\n");
|
|
|
|
}
|
|
|
|
/* SKG - another abstraction violation here, but I don't want to add
|
|
|
|
* extra code in the sm mpool for further synchronization. */
|
|
|
|
|
|
|
|
/* at this point, all processes have attached to the mpool segment. so
|
|
|
|
* it is safe to unlink it here. */
|
2014-07-26 04:47:28 +04:00
|
|
|
if (OPAL_SUCCESS !=
|
2013-01-14 18:42:19 +04:00
|
|
|
mca_common_sm_module_unlink(sm_mpool_modp->sm_common_module)) {
|
|
|
|
opal_output(0, "WARNING: common_sm_module_unlink failed.\n");
|
|
|
|
}
|
|
|
|
if (-1 == unlink(mca_btl_smcuda_component.sm_mpool_rndv_file_name)) {
|
|
|
|
opal_output(0, "WARNING: %s unlink failed.\n",
|
|
|
|
mca_btl_smcuda_component.sm_mpool_rndv_file_name);
|
|
|
|
}
|
|
|
|
if (-1 == unlink(mca_btl_smcuda_component.sm_rndv_file_name)) {
|
|
|
|
opal_output(0, "WARNING: %s unlink failed.\n",
|
|
|
|
mca_btl_smcuda_component.sm_rndv_file_name);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* free up some space used by the name buffers */
|
|
|
|
free(mca_btl_smcuda_component.sm_mpool_ctl_file_name);
|
|
|
|
free(mca_btl_smcuda_component.sm_mpool_rndv_file_name);
|
|
|
|
free(mca_btl_smcuda_component.sm_ctl_file_name);
|
|
|
|
free(mca_btl_smcuda_component.sm_rndv_file_name);
|
|
|
|
|
2012-02-24 06:13:33 +04:00
|
|
|
/* coordinate with other processes */
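/* Each process maps the common segment at a different virtual address, so the
 * FIFO pointers published in shm_fifo[] are only valid in their owner's
 * address space; rebase them here using the per-rank base addresses. */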
|
|
|
|
for(j = mca_btl_smcuda_component.num_smp_procs;
|
|
|
|
j < mca_btl_smcuda_component.num_smp_procs + n_local_procs; j++) {
|
|
|
|
ptrdiff_t diff;
|
|
|
|
|
|
|
|
/* spin until this element is allocated */
|
|
|
|
/* doesn't really wait for that process... FIFO might be allocated, but not initialized */
|
|
|
|
opal_atomic_rmb();
|
|
|
|
while(NULL == mca_btl_smcuda_component.shm_fifo[j]) {
|
|
|
|
opal_progress();
|
|
|
|
opal_atomic_rmb();
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Calculate the difference as (my_base - their_base) */
|
|
|
|
diff = ADDR2OFFSET(bases[my_smp_rank], bases[j]);
|
|
|
|
|
|
|
|
/* store local address of remote fifos */
|
|
|
|
mca_btl_smcuda_component.fifo[j] =
|
|
|
|
(sm_fifo_t*)OFFSET2ADDR(diff, mca_btl_smcuda_component.shm_fifo[j]);
|
|
|
|
|
|
|
|
/* cache local copy of peer memory node number */
|
|
|
|
mca_btl_smcuda_component.mem_nodes[j] = mca_btl_smcuda_component.shm_mem_nodes[j];
|
|
|
|
}
|
|
|
|
|
|
|
|
/* update the local smp process count */
|
|
|
|
mca_btl_smcuda_component.num_smp_procs += n_local_procs;
|
|
|
|
|
|
|
|
/* make sure we have enough eager fragments for each process */
|
2013-07-11 21:06:14 +04:00
|
|
|
return_code = ompi_free_list_resize_mt(&mca_btl_smcuda_component.sm_frags_eager,
|
|
|
|
mca_btl_smcuda_component.num_smp_procs * 2);
|
2014-07-26 04:47:28 +04:00
|
|
|
if (OPAL_SUCCESS != return_code)
|
2012-02-24 06:13:33 +04:00
|
|
|
goto CLEANUP;
|
|
|
|
|
|
|
|
CLEANUP:
|
|
|
|
return return_code;
|
|
|
|
}
|
|
|
|
|
|
|
|
int mca_btl_smcuda_del_procs(
|
|
|
|
struct mca_btl_base_module_t* btl,
|
|
|
|
size_t nprocs,
|
2014-07-26 04:47:28 +04:00
|
|
|
struct opal_proc_t **procs,
|
2012-02-24 06:13:33 +04:00
|
|
|
struct mca_btl_base_endpoint_t **peers)
|
|
|
|
{
|
2014-07-26 04:47:28 +04:00
|
|
|
return OPAL_SUCCESS;
|
2012-02-24 06:13:33 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/**
|
|
|
|
* MCA->BTL Clean up any resources held by BTL module
|
|
|
|
* before the module is unloaded.
|
|
|
|
*
|
|
|
|
* @param btl (IN) BTL module.
|
|
|
|
*
|
|
|
|
* Prior to unloading a BTL module, the MCA framework will call
|
|
|
|
* the BTL finalize method of the module. Any resources held by
|
|
|
|
* the BTL should be released and if required the memory corresponding
|
|
|
|
* to the BTL module freed.
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
|
|
|
|
int mca_btl_smcuda_finalize(struct mca_btl_base_module_t* btl)
|
|
|
|
{
|
2014-07-26 04:47:28 +04:00
|
|
|
return OPAL_SUCCESS;
|
2012-02-24 06:13:33 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Register a callback function for error handling.
|
|
|
|
*/
|
|
|
|
int mca_btl_smcuda_register_error_cb(
|
|
|
|
struct mca_btl_base_module_t* btl,
|
|
|
|
mca_btl_base_module_error_cb_fn_t cbfunc)
|
|
|
|
{
|
|
|
|
mca_btl_smcuda_t *smcuda_btl = (mca_btl_smcuda_t *)btl;
|
|
|
|
smcuda_btl->error_cb = cbfunc;
|
2014-07-26 04:47:28 +04:00
|
|
|
return OPAL_SUCCESS;
|
2012-02-24 06:13:33 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Allocate a segment.
|
|
|
|
*
|
|
|
|
* @param btl (IN) BTL module
|
|
|
|
* @param size (IN) Requested segment size.
|
|
|
|
*/
|
|
|
|
extern mca_btl_base_descriptor_t* mca_btl_smcuda_alloc(
|
|
|
|
struct mca_btl_base_module_t* btl,
|
|
|
|
struct mca_btl_base_endpoint_t* endpoint,
|
|
|
|
uint8_t order,
|
|
|
|
size_t size,
|
|
|
|
uint32_t flags)
|
|
|
|
{
|
|
|
|
mca_btl_smcuda_frag_t* frag = NULL;
|
|
|
|
if(size <= mca_btl_smcuda_component.eager_limit) {
|
2013-07-04 12:34:37 +04:00
|
|
|
MCA_BTL_SMCUDA_FRAG_ALLOC_EAGER(frag);
|
2012-02-24 06:13:33 +04:00
|
|
|
} else if (size <= mca_btl_smcuda_component.max_frag_size) {
|
2013-07-04 12:34:37 +04:00
|
|
|
MCA_BTL_SMCUDA_FRAG_ALLOC_MAX(frag);
|
2012-02-24 06:13:33 +04:00
|
|
|
}
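/* A request larger than max_frag_size leaves frag at NULL, so oversized
 * allocations simply return NULL to the caller. */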
|
|
|
|
|
|
|
|
if (OPAL_LIKELY(frag != NULL)) {
|
2012-06-21 21:09:12 +04:00
|
|
|
frag->segment.base.seg_len = size;
|
2012-02-24 06:13:33 +04:00
|
|
|
frag->base.des_flags = flags;
|
|
|
|
}
|
|
|
|
return (mca_btl_base_descriptor_t*)frag;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Return a segment allocated by this BTL.
|
|
|
|
*
|
|
|
|
* @param btl (IN) BTL module
|
|
|
|
* @param segment (IN) Allocated segment.
|
|
|
|
*/
|
|
|
|
extern int mca_btl_smcuda_free(
|
|
|
|
struct mca_btl_base_module_t* btl,
|
|
|
|
mca_btl_base_descriptor_t* des)
|
|
|
|
{
|
|
|
|
mca_btl_smcuda_frag_t* frag = (mca_btl_smcuda_frag_t*)des;
|
|
|
|
MCA_BTL_SMCUDA_FRAG_RETURN(frag);
|
|
|
|
|
2014-07-26 04:47:28 +04:00
|
|
|
return OPAL_SUCCESS;
|
2012-02-24 06:13:33 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Pack data
|
|
|
|
*
|
|
|
|
* @param btl (IN) BTL module
|
|
|
|
*/
|
|
|
|
struct mca_btl_base_descriptor_t* mca_btl_smcuda_prepare_src(
|
|
|
|
struct mca_btl_base_module_t* btl,
|
|
|
|
struct mca_btl_base_endpoint_t* endpoint,
|
|
|
|
mca_mpool_base_registration_t* registration,
|
|
|
|
struct opal_convertor_t* convertor,
|
|
|
|
uint8_t order,
|
|
|
|
size_t reserve,
|
|
|
|
size_t* size,
|
|
|
|
uint32_t flags)
|
|
|
|
{
|
|
|
|
mca_btl_smcuda_frag_t* frag;
|
|
|
|
struct iovec iov;
|
|
|
|
uint32_t iov_count = 1;
|
|
|
|
size_t max_data = *size;
|
|
|
|
int rc;
|
2013-11-01 16:19:40 +04:00
|
|
|
#if OPAL_CUDA_SUPPORT
|
2012-02-24 06:13:33 +04:00
|
|
|
if (0 != reserve) {
|
2013-11-01 16:19:40 +04:00
|
|
|
#endif /* OPAL_CUDA_SUPPORT */
|
2012-02-24 06:13:33 +04:00
|
|
|
if ( reserve + max_data <= mca_btl_smcuda_component.eager_limit ) {
|
2013-07-04 12:34:37 +04:00
|
|
|
MCA_BTL_SMCUDA_FRAG_ALLOC_EAGER(frag);
|
2012-02-24 06:13:33 +04:00
|
|
|
} else {
|
2013-07-04 12:34:37 +04:00
|
|
|
MCA_BTL_SMCUDA_FRAG_ALLOC_MAX(frag);
|
2012-02-24 06:13:33 +04:00
|
|
|
}
|
|
|
|
if( OPAL_UNLIKELY(NULL == frag) ) {
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
if( OPAL_UNLIKELY(reserve + max_data > frag->size) ) {
|
|
|
|
max_data = frag->size - reserve;
|
|
|
|
}
|
|
|
|
iov.iov_len = max_data;
|
|
|
|
iov.iov_base =
|
2012-07-09 22:32:39 +04:00
|
|
|
(IOVBASE_TYPE*)(((unsigned char*)(frag->segment.base.seg_addr.pval)) + reserve);
|
2012-02-24 06:13:33 +04:00
|
|
|
|
|
|
|
rc = opal_convertor_pack(convertor, &iov, &iov_count, &max_data );
|
|
|
|
if( OPAL_UNLIKELY(rc < 0) ) {
|
|
|
|
MCA_BTL_SMCUDA_FRAG_RETURN(frag);
|
|
|
|
return NULL;
|
|
|
|
}
|
2012-06-21 21:09:12 +04:00
|
|
|
frag->segment.base.seg_len = reserve + max_data;
|
2013-11-01 16:19:40 +04:00
|
|
|
#if OPAL_CUDA_SUPPORT
|
2012-02-24 06:13:33 +04:00
|
|
|
} else {
|
|
|
|
/* Normally, we are here because we have a GPU buffer and we are preparing
|
|
|
|
* to send it. However, we can also be here because we have received a
|
|
|
|
* PUT message because we are trying to send a host buffer. Therefore,
|
|
|
|
* we need to check again that the buffer is a GPU buffer. If not, then return
|
|
|
|
* NULL. We can just check the convertor since we have that. */
|
|
|
|
if (!(convertor->flags & CONVERTOR_CUDA)) {
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2013-07-04 12:34:37 +04:00
|
|
|
MCA_BTL_SMCUDA_FRAG_ALLOC_USER(frag);
|
2012-02-24 06:13:33 +04:00
|
|
|
if( OPAL_UNLIKELY(NULL == frag) ) {
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
iov.iov_len = max_data;
|
|
|
|
iov.iov_base = NULL;
|
|
|
|
rc = opal_convertor_pack(convertor, &iov, &iov_count, &max_data);
|
|
|
|
if( OPAL_UNLIKELY(rc < 0) ) {
|
|
|
|
MCA_BTL_SMCUDA_FRAG_RETURN(frag);
|
|
|
|
return NULL;
|
|
|
|
}
|
2012-07-14 01:19:16 +04:00
|
|
|
frag->segment.base.seg_addr.lval = (uint64_t)(uintptr_t) iov.iov_base;
|
2012-06-21 21:09:12 +04:00
|
|
|
frag->segment.base.seg_len = max_data;
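/* Pack the CUDA IPC memory and event handles, plus the bounds of the
 * registered block, into the segment so the receiving process can open the
 * handles and locate the buffer on its side. */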
|
|
|
|
memcpy(frag->segment.key, ((mca_mpool_common_cuda_reg_t *)registration)->memHandle,
|
2012-02-24 06:13:33 +04:00
|
|
|
sizeof(((mca_mpool_common_cuda_reg_t *)registration)->memHandle) +
|
|
|
|
sizeof(((mca_mpool_common_cuda_reg_t *)registration)->evtHandle));
|
|
|
|
frag->segment.memh_seg_addr.pval = registration->base;
|
|
|
|
frag->segment.memh_seg_len = registration->bound - registration->base + 1;
|
|
|
|
|
|
|
|
}
|
2013-11-01 16:19:40 +04:00
|
|
|
#endif /* OPAL_CUDA_SUPPORT */
|
2014-11-20 09:22:43 +03:00
|
|
|
frag->base.des_local = &(frag->segment.base);
|
|
|
|
frag->base.des_local_count = 1;
|
2012-02-24 06:13:33 +04:00
|
|
|
frag->base.order = MCA_BTL_NO_ORDER;
|
2014-07-10 20:31:15 +04:00
|
|
|
frag->base.des_remote = NULL;
|
|
|
|
frag->base.des_remote_count = 0;
|
2012-02-24 06:13:33 +04:00
|
|
|
frag->base.des_flags = flags;
|
|
|
|
*size = max_data;
|
|
|
|
return &frag->base;
|
|
|
|
}
|
|
|
|
|
|
|
|
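/* Disabled scribbling helper: when enabled, the macro zero-fills the bytes
 * between the end of the fragment's payload and the next 16-byte boundary;
 * the active definition below is a no-op. */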
#if 0
|
2013-01-14 18:42:19 +04:00
|
|
|
#define MCA_BTL_SMCUDA_TOUCH_DATA_TILL_CACHELINE_BOUNDARY(sm_frag) \
|
2012-02-24 06:13:33 +04:00
|
|
|
do { \
|
2012-06-21 21:09:12 +04:00
|
|
|
char* _memory = (char*)(sm_frag)->segment.base.seg_addr.pval + \
|
|
|
|
(sm_frag)->segment.base.seg_len; \
|
2012-02-24 06:13:33 +04:00
|
|
|
int* _intmem; \
|
|
|
|
size_t align = (intptr_t)_memory & 0xFUL; \
|
|
|
|
switch( align & 0x3 ) { \
|
|
|
|
case 3: *_memory = 0; _memory++; \
|
|
|
|
case 2: *_memory = 0; _memory++; \
|
|
|
|
case 1: *_memory = 0; _memory++; \
|
|
|
|
} \
|
|
|
|
align >>= 2; \
|
|
|
|
_intmem = (int*)_memory; \
|
|
|
|
switch( align ) { \
|
|
|
|
case 3: *_intmem = 0; _intmem++; \
|
|
|
|
case 2: *_intmem = 0; _intmem++; \
|
|
|
|
case 1: *_intmem = 0; _intmem++; \
|
|
|
|
} \
|
|
|
|
} while(0)
|
|
|
|
#else
|
|
|
|
#define MCA_BTL_SMCUDA_TOUCH_DATA_TILL_CACHELINE_BOUNDARY(sm_frag)
|
|
|
|
#endif
|
|
|
|
|
|
|
|
#if 0
|
|
|
|
if( OPAL_LIKELY(align > 0) ) { \
|
|
|
|
align = 0xFUL - align; \
|
|
|
|
memset( _memory, 0, align ); \
|
|
|
|
} \
|
|
|
|
|
|
|
|
#endif
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Initiate an inline send to the peer. If it fails, return a descriptor.
|
|
|
|
*
|
|
|
|
* @param btl (IN) BTL module
|
|
|
|
* @param peer (IN) BTL peer addressing
|
|
|
|
*/
|
|
|
|
int mca_btl_smcuda_sendi( struct mca_btl_base_module_t* btl,
|
|
|
|
struct mca_btl_base_endpoint_t* endpoint,
|
|
|
|
struct opal_convertor_t* convertor,
|
|
|
|
void* header,
|
|
|
|
size_t header_size,
|
|
|
|
size_t payload_size,
|
|
|
|
uint8_t order,
|
|
|
|
uint32_t flags,
|
|
|
|
mca_btl_base_tag_t tag,
|
|
|
|
mca_btl_base_descriptor_t** descriptor )
|
|
|
|
{
|
|
|
|
size_t length = (header_size + payload_size);
|
|
|
|
mca_btl_smcuda_frag_t* frag;
|
|
|
|
int rc;
|
|
|
|
|
|
|
|
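/* Throttle: once the number of outstanding fragments exceeds half of the
 * FIFO size, drive the progress engine before posting anything new. */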
if ( mca_btl_smcuda_component.num_outstanding_frags * 2 > (int) mca_btl_smcuda_component.fifo_size ) {
|
|
|
|
mca_btl_smcuda_component_progress();
|
|
|
|
}
|
2013-11-01 16:19:40 +04:00
|
|
|
#if OPAL_CUDA_SUPPORT
|
2013-08-22 01:00:09 +04:00
|
|
|
/* Initiate setting up CUDA IPC support. */
|
2013-10-16 20:48:18 +04:00
|
|
|
if (mca_common_cuda_enabled && (IPC_INIT == endpoint->ipcstate) && mca_btl_smcuda_component.use_cuda_ipc) {
|
2013-08-22 01:00:09 +04:00
|
|
|
mca_btl_smcuda_send_cuda_ipc_request(btl, endpoint);
|
|
|
|
}
|
2013-11-01 16:19:40 +04:00
|
|
|
#endif /* OPAL_CUDA_SUPPORT */
|
2013-08-22 01:00:09 +04:00
|
|
|
|
2012-02-24 06:13:33 +04:00
|
|
|
/* this check should be unnecessary... turn into an assertion? */
|
|
|
|
if( length < mca_btl_smcuda_component.eager_limit ) {
|
|
|
|
|
|
|
|
/* allocate a fragment, giving up if we can't get one */
|
|
|
|
/* note that frag==NULL is equivalent to rc returning an error code */
|
2013-07-04 12:34:37 +04:00
|
|
|
MCA_BTL_SMCUDA_FRAG_ALLOC_EAGER(frag);
|
2012-02-24 06:13:33 +04:00
|
|
|
if( OPAL_UNLIKELY(NULL == frag) ) {
|
|
|
|
*descriptor = NULL;
|
2014-07-26 04:47:28 +04:00
|
|
|
return OPAL_ERR_OUT_OF_RESOURCE;
|
2012-02-24 06:13:33 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
/* fill in fragment fields */
|
2012-06-21 21:09:12 +04:00
|
|
|
frag->segment.base.seg_len = length;
|
2012-02-24 06:13:33 +04:00
|
|
|
frag->hdr->len = length;
|
|
|
|
assert( 0 == (flags & MCA_BTL_DES_SEND_ALWAYS_CALLBACK) );
|
|
|
|
frag->base.des_flags = flags | MCA_BTL_DES_FLAGS_BTL_OWNERSHIP; /* why do any flags matter here other than OWNERSHIP? */
|
|
|
|
frag->hdr->tag = tag;
|
|
|
|
frag->endpoint = endpoint;
|
|
|
|
|
|
|
|
/* write the match header (with MPI comm/tag/etc. info) */
|
2012-06-21 21:09:12 +04:00
|
|
|
memcpy( frag->segment.base.seg_addr.pval, header, header_size );
|
2012-02-24 06:13:33 +04:00
|
|
|
|
|
|
|
/* write the message data if there is any */
|
|
|
|
/*
|
|
|
|
We can add MEMCHECKER calls before and after the packing.
|
|
|
|
*/
|
|
|
|
if( payload_size ) {
|
|
|
|
size_t max_data;
|
|
|
|
struct iovec iov;
|
|
|
|
uint32_t iov_count;
|
|
|
|
/* pack the data into the supplied buffer */
|
2012-06-21 21:09:12 +04:00
|
|
|
iov.iov_base = (IOVBASE_TYPE*)((unsigned char*)frag->segment.base.seg_addr.pval + header_size);
|
2012-02-24 06:13:33 +04:00
|
|
|
iov.iov_len = max_data = payload_size;
|
|
|
|
iov_count = 1;
|
|
|
|
|
|
|
|
(void)opal_convertor_pack( convertor, &iov, &iov_count, &max_data);
|
|
|
|
|
|
|
|
assert(max_data == payload_size);
|
|
|
|
}
|
|
|
|
|
|
|
|
MCA_BTL_SMCUDA_TOUCH_DATA_TILL_CACHELINE_BOUNDARY(frag);
|
|
|
|
|
|
|
|
/* write the fragment pointer to the FIFO */
|
|
|
|
/*
|
|
|
|
* Note that we don't care what the FIFO-write return code is. Even if
|
|
|
|
* the return code indicates failure, the write has still "completed" from
|
|
|
|
* our point of view: it has been posted to a "pending send" queue.
|
|
|
|
*/
|
|
|
|
OPAL_THREAD_ADD32(&mca_btl_smcuda_component.num_outstanding_frags, +1);
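/* VIRTUAL2RELATIVE converts the header pointer into an offset from the shared
 * segment base, since the peer maps the segment at a different virtual
 * address. */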
|
|
|
|
MCA_BTL_SMCUDA_FIFO_WRITE(endpoint, endpoint->my_smp_rank,
|
|
|
|
endpoint->peer_smp_rank, (void *) VIRTUAL2RELATIVE(frag->hdr), false, true, rc);
|
2013-11-07 23:45:56 +04:00
|
|
|
(void)rc; /* this is safe to ignore as the message is requeued till success */
|
2014-07-26 04:47:28 +04:00
|
|
|
return OPAL_SUCCESS;
|
2012-02-24 06:13:33 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
/* presumably, this code path will never get executed */
|
|
|
|
*descriptor = mca_btl_smcuda_alloc( btl, endpoint, order,
|
|
|
|
payload_size + header_size, flags);
|
2014-07-26 04:47:28 +04:00
|
|
|
return OPAL_ERR_RESOURCE_BUSY;
|
2012-02-24 06:13:33 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Initiate a send to the peer.
|
|
|
|
*
|
|
|
|
* @param btl (IN) BTL module
|
|
|
|
* @param peer (IN) BTL peer addressing
|
|
|
|
*/
|
|
|
|
int mca_btl_smcuda_send( struct mca_btl_base_module_t* btl,
|
|
|
|
struct mca_btl_base_endpoint_t* endpoint,
|
|
|
|
struct mca_btl_base_descriptor_t* descriptor,
|
|
|
|
mca_btl_base_tag_t tag )
|
|
|
|
{
|
|
|
|
mca_btl_smcuda_frag_t* frag = (mca_btl_smcuda_frag_t*)descriptor;
|
|
|
|
int rc;
|
|
|
|
|
|
|
|
if ( mca_btl_smcuda_component.num_outstanding_frags * 2 > (int) mca_btl_smcuda_component.fifo_size ) {
|
|
|
|
mca_btl_smcuda_component_progress();
|
|
|
|
}
|
2013-11-01 16:19:40 +04:00
|
|
|
#if OPAL_CUDA_SUPPORT
|
2013-08-22 01:00:09 +04:00
|
|
|
/* Initiate setting up CUDA IPC support */
|
2013-10-16 20:48:18 +04:00
|
|
|
if (mca_common_cuda_enabled && (IPC_INIT == endpoint->ipcstate) && mca_btl_smcuda_component.use_cuda_ipc) {
|
2013-08-22 01:00:09 +04:00
|
|
|
mca_btl_smcuda_send_cuda_ipc_request(btl, endpoint);
|
|
|
|
}
|
2013-11-01 16:19:40 +04:00
|
|
|
#endif /* OPAL_CUDA_SUPPORT */
|
2013-08-22 01:00:09 +04:00
|
|
|
|
2012-02-24 06:13:33 +04:00
|
|
|
/* available header space */
|
2012-06-21 21:09:12 +04:00
|
|
|
frag->hdr->len = frag->segment.base.seg_len;
|
2012-02-24 06:13:33 +04:00
|
|
|
/* type of message, pt-2-pt, one-sided, etc */
|
|
|
|
frag->hdr->tag = tag;
|
|
|
|
|
|
|
|
MCA_BTL_SMCUDA_TOUCH_DATA_TILL_CACHELINE_BOUNDARY(frag);
|
|
|
|
|
|
|
|
frag->endpoint = endpoint;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* post the descriptor in the queue - post with the relative
|
|
|
|
* address
|
|
|
|
*/
|
|
|
|
OPAL_THREAD_ADD32(&mca_btl_smcuda_component.num_outstanding_frags, +1);
|
|
|
|
MCA_BTL_SMCUDA_FIFO_WRITE(endpoint, endpoint->my_smp_rank,
|
|
|
|
endpoint->peer_smp_rank, (void *) VIRTUAL2RELATIVE(frag->hdr), false, true, rc);
|
|
|
|
if( OPAL_LIKELY(0 == rc) ) {
|
|
|
|
return 1; /* the data is completely gone */
|
|
|
|
}
|
|
|
|
frag->base.des_flags |= MCA_BTL_DES_SEND_ALWAYS_CALLBACK;
|
|
|
|
/* not yet gone, but pending. Let the upper level know that
|
|
|
|
* the callback will be triggered when the data is sent.
|
|
|
|
*/
|
|
|
|
return 0;
|
|
|
|
}
|
2013-11-01 16:19:40 +04:00
|
|
|
#if OPAL_CUDA_SUPPORT
|
2012-02-24 06:13:33 +04:00
|
|
|
struct mca_btl_base_descriptor_t* mca_btl_smcuda_prepare_dst(
|
|
|
|
struct mca_btl_base_module_t* btl,
|
|
|
|
struct mca_btl_base_endpoint_t* endpoint,
|
|
|
|
struct mca_mpool_base_registration_t* registration,
|
|
|
|
struct opal_convertor_t* convertor,
|
|
|
|
uint8_t order,
|
|
|
|
size_t reserve,
|
|
|
|
size_t* size,
|
|
|
|
uint32_t flags)
|
|
|
|
{
|
2012-07-14 01:19:16 +04:00
|
|
|
void *ptr;
|
2012-02-24 06:13:33 +04:00
|
|
|
mca_btl_smcuda_frag_t* frag;
|
|
|
|
|
|
|
|
/* Only support GPU buffers */
|
|
|
|
if (!(convertor->flags & CONVERTOR_CUDA)) {
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2013-07-04 12:34:37 +04:00
|
|
|
MCA_BTL_SMCUDA_FRAG_ALLOC_USER(frag);
|
2012-02-24 06:13:33 +04:00
|
|
|
if(OPAL_UNLIKELY(NULL == frag)) {
|
|
|
|
return NULL;
|
|
|
|
}
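/* Record the current GPU buffer address in the local segment; it later serves
 * as the destination of the device-to-device copy in mca_btl_smcuda_get_cuda(). */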
|
|
|
|
|
2012-06-21 21:09:12 +04:00
|
|
|
frag->segment.base.seg_len = *size;
|
2012-07-14 01:19:16 +04:00
|
|
|
opal_convertor_get_current_pointer( convertor, &ptr );
|
|
|
|
frag->segment.base.seg_addr.lval = (uint64_t)(uintptr_t) ptr;
|
2012-02-24 06:13:33 +04:00
|
|
|
|
2014-07-10 20:31:15 +04:00
|
|
|
frag->base.des_remote = NULL;
|
|
|
|
frag->base.des_remote_count = 0;
|
2014-11-20 09:22:43 +03:00
|
|
|
frag->base.des_local = &frag->segment.base;
|
|
|
|
frag->base.des_local_count = 1;
|
2012-02-24 06:13:33 +04:00
|
|
|
frag->base.des_flags = flags;
|
|
|
|
return &frag->base;
|
|
|
|
}
|
2013-11-01 16:19:40 +04:00
|
|
|
#endif /* OPAL_CUDA_SUPPORT */
|
2012-02-24 06:13:33 +04:00
|
|
|
|
|
|
|
|
2013-11-01 16:19:40 +04:00
|
|
|
#if OPAL_CUDA_SUPPORT
|
2012-02-24 06:13:33 +04:00
|
|
|
int mca_btl_smcuda_get_cuda(struct mca_btl_base_module_t* btl,
|
|
|
|
struct mca_btl_base_endpoint_t* ep,
|
|
|
|
struct mca_btl_base_descriptor_t* descriptor)
|
|
|
|
{
|
2014-07-10 20:31:15 +04:00
|
|
|
mca_btl_smcuda_segment_t *src_seg = (mca_btl_smcuda_segment_t *) descriptor->des_remote;
|
2014-11-20 09:22:43 +03:00
|
|
|
mca_btl_smcuda_segment_t *dst_seg = (mca_btl_smcuda_segment_t *) descriptor->des_local;
|
2012-02-24 06:13:33 +04:00
|
|
|
mca_mpool_common_cuda_reg_t rget_reg;
|
|
|
|
mca_mpool_common_cuda_reg_t *reg_ptr = &rget_reg;
|
|
|
|
int btl_ownership;
|
|
|
|
int rc, done;
|
|
|
|
void *remote_memory_address;
|
|
|
|
size_t offset;
|
|
|
|
mca_btl_smcuda_frag_t* frag = (mca_btl_smcuda_frag_t*)descriptor;
|
|
|
|
|
|
|
|
/* Set to 0 for debugging since it is a list item but I am not
|
|
|
|
* initializing it properly and it is annoying to see all the
|
|
|
|
* garbage in the debugger. */
|
|
|
|
|
|
|
|
memset(&rget_reg, 0, sizeof(rget_reg));
|
2012-06-21 21:09:12 +04:00
|
|
|
memcpy(&rget_reg.memHandle, src_seg->key, sizeof(src_seg->key));
|
2012-02-24 06:13:33 +04:00
|
|
|
|
|
|
|
/* Open the memory handle to the remote memory. If it is cached, then
|
|
|
|
* we just retrieve it from cache and avoid a call to open the handle. That
|
|
|
|
* is taken care of in the memory pool. Note that we are searching for the
|
|
|
|
* memory based on the base address and size of the memory handle, not the
|
|
|
|
* remote memory which may lie somewhere in the middle. This is taken care of
|
|
|
|
* a few lines down. Note that we hand in the peer rank just for debugging
|
|
|
|
* support. */
|
2012-06-21 21:09:12 +04:00
|
|
|
rc = ep->mpool->mpool_register(ep->mpool, src_seg->memh_seg_addr.pval,
|
|
|
|
src_seg->memh_seg_len, ep->peer_smp_rank,
|
2012-02-24 06:13:33 +04:00
|
|
|
(mca_mpool_base_registration_t **)®_ptr);
|
|
|
|
|
2014-07-26 04:47:28 +04:00
|
|
|
if (OPAL_SUCCESS != rc) {
|
2012-02-24 06:13:33 +04:00
|
|
|
opal_output(0, "Failed to register remote memory, rc=%d", rc);
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
frag->registration = (mca_mpool_base_registration_t *)reg_ptr;
|
|
|
|
frag->endpoint = ep;
|
|
|
|
|
|
|
|
/* The registration has given us back the memory block that this
|
|
|
|
* address lives in. However, the base address of the block may
|
|
|
|
* not equal the address that was used to retrieve the block.
|
|
|
|
* Therefore, compute the offset and add it to the address of the
|
|
|
|
* memory handle. */
|
2012-06-21 21:09:12 +04:00
|
|
|
offset = (unsigned char *)src_seg->base.seg_addr.lval - reg_ptr->base.base;
|
2012-02-24 06:13:33 +04:00
|
|
|
remote_memory_address = (unsigned char *)reg_ptr->base.alloc_base + offset;
|
|
|
|
if (0 != offset) {
|
|
|
|
opal_output(-1, "OFFSET=%d", (int)offset);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* The remote side posted an IPC event to make sure we do not start our
|
|
|
|
* copy until IPC event completes. This is to ensure that the data being sent
|
|
|
|
* is available in the sender's GPU buffer. Therefore, do a stream synchronize
|
|
|
|
* on the IPC event that we received. Note that we pull it from
|
|
|
|
* rget_reg, not reg_ptr, as we do not cache the event. */
|
|
|
|
mca_common_wait_stream_synchronize(&rget_reg);
|
|
|
|
|
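/* Start the device-to-device copy. Unless it completes synchronously
 * (done == 1, handled below), completion is presumably reaped later by the
 * component's progress path, which then triggers the fragment callback. */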
2012-07-14 01:19:16 +04:00
|
|
|
rc = mca_common_cuda_memcpy((void *)(uintptr_t) dst_seg->base.seg_addr.lval,
|
|
|
|
remote_memory_address, dst_seg->base.seg_len,
|
|
|
|
"mca_btl_smcuda_get", (mca_btl_base_descriptor_t *)frag,
|
|
|
|
&done);
|
2014-07-26 04:47:28 +04:00
|
|
|
if (OPAL_SUCCESS != rc) {
|
2012-02-24 06:13:33 +04:00
|
|
|
/* Out of resources can be handled by upper layers. */
|
2014-07-26 04:47:28 +04:00
|
|
|
if (OPAL_ERR_OUT_OF_RESOURCE != rc) {
|
2012-02-24 06:13:33 +04:00
|
|
|
opal_output(0, "Failed to cuMemcpy GPU memory, rc=%d", rc);
|
|
|
|
}
|
|
|
|
return rc;
|
|
|
|
}
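
    /* When the copy does not complete synchronously, the fragment is
     * completed later (by assumption, from the component progress path once
     * the associated CUDA event has fired), so nothing more happens here. */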
    if (OPAL_UNLIKELY(1 == done)) {
        /* This should only be true when experimenting with synchronous copies. */
        btl_ownership = (frag->base.des_flags & MCA_BTL_DES_FLAGS_BTL_OWNERSHIP);
        if (0 != (MCA_BTL_DES_SEND_ALWAYS_CALLBACK & frag->base.des_flags)) {
            frag->base.des_cbfunc(&mca_btl_smcuda.super,
                                  frag->endpoint, &frag->base,
                                  OPAL_SUCCESS);
        }
        if (btl_ownership) {
            mca_btl_smcuda_free(btl, (mca_btl_base_descriptor_t *)frag);
        }
    }

    return OPAL_SUCCESS;
}

/**
 * Send a CUDA IPC request message to the peer.  This indicates that this
 * rank would like to establish CUDA IPC support between its GPU and the
 * remote rank's GPU.  It is called whenever this rank initiates a send of
 * some type to that peer.
 *
 * @param btl (IN)      BTL module
 * @param endpoint (IN) BTL peer addressing
 */
#define MAXTRIES 5
static void mca_btl_smcuda_send_cuda_ipc_request(struct mca_btl_base_module_t* btl,
                                                 struct mca_btl_base_endpoint_t* endpoint)
{
    mca_btl_smcuda_frag_t* frag;
    int rc, mydevnum, res;
    ctrlhdr_t ctrlhdr;

    /* We need to grab the lock when changing the state from IPC_INIT as multiple
     * threads could be doing sends. */
    OPAL_THREAD_LOCK(&endpoint->endpoint_lock);
    if (endpoint->ipcstate != IPC_INIT) {
        OPAL_THREAD_UNLOCK(&endpoint->endpoint_lock);
        return;
    } else {
        endpoint->ipctries++;
        if (endpoint->ipctries > MAXTRIES) {
            endpoint->ipcstate = IPC_BAD;
            OPAL_THREAD_UNLOCK(&endpoint->endpoint_lock);
            return;
        }
        /* All is good.  Set up state and continue. */
        endpoint->ipcstate = IPC_SENT;
        OPAL_THREAD_UNLOCK(&endpoint->endpoint_lock);
    }
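
    /* If many fragments are already outstanding (more than half the FIFO
     * depth by this heuristic), drive progress first so the request below is
     * less likely to end up queued behind them. */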
    if ( mca_btl_smcuda_component.num_outstanding_frags * 2 > (int) mca_btl_smcuda_component.fifo_size ) {
        mca_btl_smcuda_component_progress();
    }

    if (0 != (res = mca_common_cuda_get_device(&mydevnum))) {
        opal_output(0, "Cannot determine device. IPC cannot be set.");
        endpoint->ipcstate = IPC_BAD;
        return;
    }
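    /* Note: ipcstate already moved from IPC_INIT to IPC_SENT above, so
     * setting IPC_BAD here (and below) is permanent as far as this code path
     * is concerned -- no further IPC requests will be sent to this peer. */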

    /* allocate a fragment, giving up if we can't get one */
    MCA_BTL_SMCUDA_FRAG_ALLOC_EAGER(frag);
    if( OPAL_UNLIKELY(NULL == frag) ) {
        endpoint->ipcstate = IPC_BAD;
        return;
    }

    /* Fill in fragment fields. */
    frag->hdr->tag = MCA_BTL_TAG_SMCUDA;
    frag->base.des_flags = MCA_BTL_DES_FLAGS_BTL_OWNERSHIP;
    frag->endpoint = endpoint;
    ctrlhdr.ctag = IPC_REQ;
    ctrlhdr.cudev = mydevnum;
    memcpy(frag->segment.base.seg_addr.pval, &ctrlhdr, sizeof(struct ctrlhdr_st));
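    /* The control header is the entire payload of this request: it carries
     * the IPC_REQ tag and our CUDA device number so that (by assumption) the
     * peer can check whether IPC between the two devices is possible before
     * replying. */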

    MCA_BTL_SMCUDA_TOUCH_DATA_TILL_CACHELINE_BOUNDARY(frag);
    /* write the fragment pointer to the FIFO */
    /*
     * Note that we don't care what the FIFO-write return code is. Even if
     * the return code indicates failure, the write has still "completed" from
     * our point of view: it has been posted to a "pending send" queue.
     */
    OPAL_THREAD_ADD32(&mca_btl_smcuda_component.num_outstanding_frags, +1);
    opal_output_verbose(10, mca_btl_smcuda_component.cuda_ipc_output,
                        "Sending CUDA IPC REQ (try=%d): myrank=%d, mydev=%d, peerrank=%d",
                        endpoint->ipctries,
                        mca_btl_smcuda_component.my_smp_rank,
                        mydevnum, endpoint->peer_smp_rank);

    MCA_BTL_SMCUDA_FIFO_WRITE(endpoint, endpoint->my_smp_rank,
                              endpoint->peer_smp_rank, (void *) VIRTUAL2RELATIVE(frag->hdr), false, true, rc);
    return;
}

#endif /* OPAL_CUDA_SUPPORT */

/**
 * Dump the state of the BTL module: for the given endpoint, print each
 * fragment still sitting on its pending-sends list.
 */
void mca_btl_smcuda_dump(struct mca_btl_base_module_t* btl,
                         struct mca_btl_base_endpoint_t* endpoint,
                         int verbose)
{
    opal_list_item_t *item;
    mca_btl_smcuda_frag_t* frag;

    /* Guard against a NULL endpoint before dereferencing it. */
    if( NULL != endpoint ) {
        mca_btl_base_err("BTL SM %p endpoint %p [smp_rank %d] [peer_rank %d]\n",
                         (void*) btl, (void*) endpoint,
                         endpoint->my_smp_rank, endpoint->peer_smp_rank);
        for(item = opal_list_get_first(&endpoint->pending_sends);
            item != opal_list_get_end(&endpoint->pending_sends);
            item = opal_list_get_next(item)) {
            frag = (mca_btl_smcuda_frag_t*)item;
            mca_btl_base_err(" | frag %p size %lu (hdr frag %p len %lu rank %d tag %d)\n",
                             (void*) frag, frag->size, (void*) frag->hdr->frag,
                             frag->hdr->len, frag->hdr->my_smp_rank,
                             frag->hdr->tag);
        }
    }
}

#if OPAL_ENABLE_FT_CR == 0
int mca_btl_smcuda_ft_event(int state) {
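    /* Checkpoint/restart support is compiled out, so there is nothing to do. */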
    return OPAL_SUCCESS;
}
#else
int mca_btl_smcuda_ft_event(int state) {
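    /* Rough shape of the cases below: at checkpoint time only the
     * shared-memory segment's metadata entry is touched; on a restart (or a
     * continue that behaves like a restart) the segment file is registered
     * for cleanup and the mpool pointer is cleared so the files get
     * re-created. */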

    /* Notify mpool */
    if( NULL != mca_btl_smcuda_component.sm_mpool &&
        NULL != mca_btl_smcuda_component.sm_mpool->mpool_ft_event) {
        mca_btl_smcuda_component.sm_mpool->mpool_ft_event(state);
    }

    if(OPAL_CRS_CHECKPOINT == state) {
        if( NULL != mca_btl_smcuda_component.sm_seg ) {
            /* On restart we need the old file names to exist (not necessarily
             * contain content) so the CRS component does not fail when searching
             * for these old file handles. The restart procedure will make sure
             * these files get cleaned up appropriately.
             */
            orte_sstore.set_attr(orte_sstore_handle_current,
                                 SSTORE_METADATA_LOCAL_TOUCH,
                                 mca_btl_smcuda_component.sm_seg->shmem_ds.seg_name);
        }
    }
    else if(OPAL_CRS_CONTINUE == state) {
        if( orte_cr_continue_like_restart ) {
            if( NULL != mca_btl_smcuda_component.sm_seg ) {
                /* Add shared memory file */
                opal_crs_base_cleanup_append(mca_btl_smcuda_component.sm_seg->shmem_ds.seg_name, false);
            }

            /* Clear this so we force the module to re-init the sm files */
            mca_btl_smcuda_component.sm_mpool = NULL;
        }
    }
    else if(OPAL_CRS_RESTART == state ||
            OPAL_CRS_RESTART_PRE == state) {
        if( NULL != mca_btl_smcuda_component.sm_seg ) {
            /* Add shared memory file */
            opal_crs_base_cleanup_append(mca_btl_smcuda_component.sm_seg->shmem_ds.seg_name, false);
        }

        /* Clear this so we force the module to re-init the sm files */
        mca_btl_smcuda_component.sm_mpool = NULL;
    }
    else if(OPAL_CRS_TERM == state ) {
        ;   /* nothing to do */
    }
    else {
        ;   /* nothing to do for any other state */
    }

    return OPAL_SUCCESS;
}
#endif /* OPAL_ENABLE_FT_CR */