/* -*- Mode: C; c-basic-offset:4 ; indent-tabs-mode:nil -*- */
/*
 * Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation.  All rights reserved.
 * Copyright (c) 2004-2014 The University of Tennessee and The University
 *                         of Tennessee Research Foundation.  All rights
 *                         reserved.
 * Copyright (c) 2004-2009 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart.  All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * Copyright (c) 2007      Sun Microsystems, Inc.  All rights reserved.
 * Copyright (c) 2008-2010 Cisco Systems, Inc.  All rights reserved.
 * Copyright (c) 2010-2015 Los Alamos National Security, LLC.
 *                         All rights reserved.
 * Copyright (c) 2014      Intel, Inc. All rights reserved
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

/* ASSUMING local process homogeneity with respect to all utilized shared memory
 * facilities. that is, if one local process deems a particular shared memory
 * facility acceptable, then ALL local processes should be able to utilize that
 * facility. as it stands, this is an important point because one process
 * dictates to all other local processes which common sm component will be
 * selected based on its own, local run-time test.
 */
#include "opal_config.h"

#include "opal/align.h"
#include "opal/util/argv.h"
#include "opal/util/show_help.h"
#include "opal/util/error.h"
#include "opal/mca/shmem/base/base.h"
#if OPAL_ENABLE_FT_CR == 1
#include "opal/runtime/opal_cr.h"
#endif

#include "common_sm.h"
#include "opal/constants.h"

OBJ_CLASS_INSTANCE(mca_common_sm_module_t, opal_list_item_t,
                   NULL, NULL);

/* ////////////////////////////////////////////////////////////////////////// */
/* static utility functions */
/* ////////////////////////////////////////////////////////////////////////// */

/* ////////////////////////////////////////////////////////////////////////// */
static mca_common_sm_module_t *
attach_and_init(opal_shmem_ds_t *shmem_bufp,
                size_t size,
                size_t size_ctl_structure,
                size_t data_seg_alignment,
                bool first_call)
{
    mca_common_sm_module_t *map = NULL;
    mca_common_sm_seg_header_t *seg = NULL;
    unsigned char *addr = NULL;

    /* attach to the specified segment. note that at this point, the contents of
     * *shmem_bufp have already been initialized via opal_shmem_segment_create.
     */
    if (NULL == (seg = (mca_common_sm_seg_header_t *)
                 opal_shmem_segment_attach(shmem_bufp))) {
        return NULL;
    }
    opal_atomic_rmb();

    if (NULL == (map = OBJ_NEW(mca_common_sm_module_t))) {
        OPAL_ERROR_LOG(OPAL_ERR_OUT_OF_RESOURCE);
        (void)opal_shmem_segment_detach(shmem_bufp);
        return NULL;
    }

    /* copy meta information into common sm module
     * from ====> to */
    if (OPAL_SUCCESS != opal_shmem_ds_copy(shmem_bufp, &map->shmem_ds)) {
        (void)opal_shmem_segment_detach(shmem_bufp);
        free(map);
        return NULL;
    }

    /* the first entry in the file is the control structure. the first
     * entry in the control structure is an mca_common_sm_seg_header_t
     * element.
     */
    map->module_seg = seg;

    addr = ((unsigned char *)seg) + size_ctl_structure;
    /* if we have a data segment (i.e., if 0 != data_seg_alignment),
     * then make it the first aligned address after the control
     * structure.  IF THIS HAPPENS, THIS IS A PROGRAMMING ERROR IN
     * OPEN MPI!
     */
    if (0 != data_seg_alignment) {
        addr = OPAL_ALIGN_PTR(addr, data_seg_alignment, unsigned char *);
        /* is addr past end of the shared memory segment? */
        if ((unsigned char *)seg + shmem_bufp->seg_size < addr) {
            opal_show_help("help-mpi-common-sm.txt", "mmap too small", 1,
                           opal_proc_local_get()->proc_hostname,
                           (unsigned long)shmem_bufp->seg_size,
                           (unsigned long)size_ctl_structure,
                           (unsigned long)data_seg_alignment);
            (void)opal_shmem_segment_detach(shmem_bufp);
            free(map);
            return NULL;
        }
    }

    map->module_data_addr = addr;
    map->module_seg_addr = (unsigned char *)seg;

    /* note that size is only used during the first call */
    if (first_call) {
        /* initialize some segment information */
        size_t mem_offset = map->module_data_addr -
                            (unsigned char *)map->module_seg;
        opal_atomic_init(&map->module_seg->seg_lock, OPAL_ATOMIC_UNLOCKED);
        map->module_seg->seg_inited = 0;
        map->module_seg->seg_num_procs_inited = 0;
        map->module_seg->seg_offset = mem_offset;
        map->module_seg->seg_size = size - mem_offset;
        opal_atomic_wmb();
    }

    /* increment the number of processes that are attached to the segment. */
    (void)opal_atomic_add_size_t(&map->module_seg->seg_num_procs_inited, 1);

    /* commit the changes before we return */
    opal_atomic_wmb();

    return map;
}

/* ////////////////////////////////////////////////////////////////////////// */
/* api implementation */
/* ////////////////////////////////////////////////////////////////////////// */

/* ////////////////////////////////////////////////////////////////////////// */
mca_common_sm_module_t *
mca_common_sm_module_create_and_attach(size_t size,
                                       char *file_name,
                                       size_t size_ctl_structure,
                                       size_t data_seg_alignment)
{
    mca_common_sm_module_t *map = NULL;
    opal_shmem_ds_t *seg_meta = NULL;

    if (NULL == (seg_meta = calloc(1, sizeof(*seg_meta)))) {
        /* out of resources */
        return NULL;
    }
    if (OPAL_SUCCESS == opal_shmem_segment_create(seg_meta, file_name, size)) {
        map = attach_and_init(seg_meta, size, size_ctl_structure,
                              data_seg_alignment, true);
    }
    /* at this point, seg_meta has been copied to the newly created
     * shared memory segment, so we can free it */
    if (seg_meta) {
        free(seg_meta);
    }

    return map;
}

/* ////////////////////////////////////////////////////////////////////////// */
/**
 * @return a pointer to the mca_common_sm_module_t associated with seg_meta if
 * everything was okay, otherwise returns NULL.
 */
mca_common_sm_module_t *
mca_common_sm_module_attach(opal_shmem_ds_t *seg_meta,
                            size_t size_ctl_structure,
                            size_t data_seg_alignment)
{
    /* notice that size is 0 here. it really doesn't matter because size WILL
     * NOT be used because this is an attach (first_call is false). */
    return attach_and_init(seg_meta, 0, size_ctl_structure,
                           data_seg_alignment, false);
}

/* ////////////////////////////////////////////////////////////////////////// */
int
mca_common_sm_module_unlink(mca_common_sm_module_t *modp)
{
    if (NULL == modp) {
        return OPAL_ERROR;
    }
    if (OPAL_SUCCESS != opal_shmem_unlink(&modp->shmem_ds)) {
        return OPAL_ERROR;
    }
    return OPAL_SUCCESS;
}

/* ////////////////////////////////////////////////////////////////////////// */
int
mca_common_sm_local_proc_reorder(opal_proc_t **procs,
                                 size_t num_procs,
                                 size_t *out_num_local_procs)
{
|
2013-01-11 16:24:56 +00:00
|
|
|
size_t num_local_procs = 0;
|
2013-01-05 01:54:23 +00:00
|
|
|
bool found_lowest = false;
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 00:47:28 +00:00
|
|
|
opal_proc_t *temp_proc = NULL;
|
2013-01-11 16:24:56 +00:00
|
|
|
size_t p;
|
2012-01-05 00:11:59 +00:00
|
|
|
|
2013-01-11 16:24:56 +00:00
|
|
|
if (NULL == out_num_local_procs || NULL == procs) {
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 00:47:28 +00:00
|
|
|
return OPAL_ERR_BAD_PARAM;
|
2013-01-11 16:24:56 +00:00
|
|
|
}
|
2011-06-21 15:41:57 +00:00
|
|
|
/* o reorder procs array to have all the local procs at the beginning.
|
|
|
|
* o look for the local proc with the lowest name.
|
|
|
|
* o determine the number of local procs.
|
|
|
|
* o ensure that procs[0] is the lowest named process.
|
2010-08-23 16:04:13 +00:00
|
|
|
*/
|
2011-06-21 15:41:57 +00:00
|
|
|
for (p = 0; p < num_procs; ++p) {
|
|
|
|
if (OPAL_PROC_ON_LOCAL_NODE(procs[p]->proc_flags)) {
|
2010-08-23 16:04:13 +00:00
|
|
|
/* if we don't have a lowest, save the first one */
|
2011-06-21 15:41:57 +00:00
|
|
|
if (!found_lowest) {
|
2010-08-23 16:04:13 +00:00
|
|
|
procs[0] = procs[p];
|
|
|
|
found_lowest = true;
|
2010-06-09 16:58:52 +00:00
|
|
|
}
|
2011-06-21 15:41:57 +00:00
|
|
|
else {
|
2010-08-23 16:04:13 +00:00
|
|
|
/* save this proc */
|
|
|
|
procs[num_local_procs] = procs[p];
|
2011-06-21 15:41:57 +00:00
|
|
|
/* if we have a new lowest, swap it with position 0
|
2013-01-11 16:24:56 +00:00
|
|
|
* so that procs[0] is always the lowest named proc */
|
George did the work and deserves all the credit for it. Ralph did the merge, and deserves whatever blame results from errors in it :-)
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communications are currently deeply integrated in the OMPI layer. Several groups/institutions have express interest in having a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purpose. UTK with support from Sandia, developped a version of Open MPI where the entire communication infrastucture has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with few exceptions (mainly BTLs where I have no way of compiling/testing them). Thus, the completion of this RFC is tied to being able to completing this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
This commit was SVN r32317.
2014-07-26 00:47:28 +00:00
|
|
|
if( 0 > opal_compare_proc(procs[p]->proc_name, procs[0]->proc_name) ) {
|
2010-08-23 16:04:13 +00:00
|
|
|
temp_proc = procs[0];
|
|
|
|
procs[0] = procs[p];
|
|
|
|
procs[num_local_procs] = temp_proc;
|
|
|
|
}
|
2010-06-09 16:58:52 +00:00
|
|
|
}
|
2011-06-21 15:41:57 +00:00
|
|
|
/* regardless of the comparisons above, we found
|
2010-08-23 16:04:13 +00:00
|
|
|
* another proc on the local node, so increment
|
|
|
|
*/
|
|
|
|
++num_local_procs;
|
2010-06-09 16:58:52 +00:00
|
|
|
}
|
|
|
|
}
|
2013-01-11 16:24:56 +00:00
|
|
|
*out_num_local_procs = num_local_procs;
|
|
|
|
|
    return OPAL_SUCCESS;
}

/* ////////////////////////////////////////////////////////////////////////// */
/**
 * allocate memory from a previously allocated shared memory
 * block.
 *
 * @param ctx pointer to the owning mca_common_sm_module_t (IN)
 * @param size size of request, in bytes (IN)
 *
 * @retval addr virtual address of the allocation, or NULL if the
 *         segment does not have enough space left
 */
void *mca_common_sm_seg_alloc (void *ctx, size_t *size)
{
    mca_common_sm_module_t *sm_module = (mca_common_sm_module_t *) ctx;
    mca_common_sm_seg_header_t *seg = sm_module->module_seg;
    void *addr;

    opal_atomic_lock(&seg->seg_lock);
    if (seg->seg_offset + *size > seg->seg_size) {
        addr = NULL;
    }
    else {
        size_t fixup;

        /* add base address to segment offset */
        addr = sm_module->module_data_addr + seg->seg_offset;
        seg->seg_offset += *size;

        /* fix up seg_offset so that the next allocation is aligned on a
         * sizeof(long) boundary. Do it here so that we don't have to
         * check alignment before checking the remaining size in the buffer.
         */
        if ((fixup = (seg->seg_offset & (sizeof(long) - 1))) > 0) {
            seg->seg_offset += sizeof(long) - fixup;
        }
    }

    opal_atomic_unlock(&seg->seg_lock);
    return addr;
}

/* ////////////////////////////////////////////////////////////////////////// */
int
mca_common_sm_fini(mca_common_sm_module_t *mca_common_sm_module)
{
    int rc = OPAL_SUCCESS;

    if (NULL != mca_common_sm_module->module_seg) {
        if (OPAL_SUCCESS !=
            opal_shmem_segment_detach(&mca_common_sm_module->shmem_ds)) {
            rc = OPAL_ERROR;
        }
    }
    return rc;
}