/*
 * Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation.  All rights reserved.
 * Copyright (c) 2004-2005 The University of Tennessee and The University
 *                         of Tennessee Research Foundation.  All rights
 *                         reserved.
 * Copyright (c) 2004-2009 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart.  All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * Copyright (c) 2007      Sun Microsystems, Inc.  All rights reserved.
 * Copyright (c) 2008-2010 Cisco Systems, Inc.  All rights reserved.
 * Copyright (c) 2010-2013 Los Alamos National Security, LLC.
 *                         All rights reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

/* ASSUMING local process homogeneity with respect to all utilized shared memory
 * facilities. that is, if one local process deems a particular shared memory
 * facility acceptable, then ALL local processes should be able to utilize that
 * facility. as it stands, this is an important point because one process
 * dictates to all other local processes which common sm component will be
 * selected based on its own, local run-time test.
 */

/* RML Messaging in common sm and Our Assumptions
 * o MPI_Init is single threaded
 * o this routine will not be called after MPI_Init.
 *
 * if these assumptions ever change, then we may need to add some support code
 * that queues up RML messages that have arrived, but have not yet been
 * consumed by the thread that is looking to complete its component
 * initialization.
 */

#include "opal_config.h"

#include "opal/align.h"
#include "opal/util/argv.h"
#include "opal/util/show_help.h"
#include "opal/mca/shmem/shmem.h"
#if OPAL_ENABLE_FT_CR == 1
#include "opal/runtime/opal_cr.h"
#endif
#include "opal/constants.h"
#include "opal/mca/mpool/sm/mpool_sm.h"

#include "common_sm_rml.h"

OBJ_CLASS_INSTANCE(
    mca_common_sm_module_t,
    opal_list_item_t,
    NULL,
    NULL
);

/* ////////////////////////////////////////////////////////////////////////// */
/* static utility functions */
/* ////////////////////////////////////////////////////////////////////////// */

/* ////////////////////////////////////////////////////////////////////////// */
static mca_common_sm_module_t *
attach_and_init(opal_shmem_ds_t *shmem_bufp,
                size_t size,
                size_t size_ctl_structure,
                size_t data_seg_alignment,
                bool first_call)
{
    mca_common_sm_module_t *map = NULL;
    mca_common_sm_seg_header_t *seg = NULL;
    unsigned char *addr = NULL;

    /* attach to the specified segment. note that at this point, the contents
     * of *shmem_bufp have already been initialized via
     * opal_shmem_segment_create. */
    if (NULL == (seg = (mca_common_sm_seg_header_t *)
                       opal_shmem_segment_attach(shmem_bufp))) {
        return NULL;
    }
    opal_atomic_rmb();

    if (NULL == (map = OBJ_NEW(mca_common_sm_module_t))) {
        OPAL_ERROR_LOG(OPAL_ERR_OUT_OF_RESOURCE);
        (void)opal_shmem_segment_detach(shmem_bufp);
        return NULL;
    }

    /* copy meta information into the common sm module (from ====> to) */
    if (OPAL_SUCCESS != opal_shmem_ds_copy(shmem_bufp, &map->shmem_ds)) {
        (void)opal_shmem_segment_detach(shmem_bufp);
        /* map came from OBJ_NEW, so release it instead of calling free */
        OBJ_RELEASE(map);
        return NULL;
    }

    /* the first entry in the file is the control structure. the first
     * entry in the control structure is an mca_common_sm_seg_header_t
     * element. */
    map->module_seg = seg;

    addr = ((unsigned char *)seg) + size_ctl_structure;
    /* if we have a data segment (i.e., if 0 != data_seg_alignment),
     * then make it the first aligned address after the control
     * structure. */
    if (0 != data_seg_alignment) {
        addr = OPAL_ALIGN_PTR(addr, data_seg_alignment, unsigned char *);
        /* is addr past the end of the shared memory segment? IF THIS
         * HAPPENS, THIS IS A PROGRAMMING ERROR IN OPEN MPI! */
        if ((unsigned char *)seg + shmem_bufp->seg_size < addr) {
            opal_show_help("help-mpi-common-sm.txt", "mmap too small", 1,
                           opal_proc_local_get()->proc_hostname,
                           (unsigned long)shmem_bufp->seg_size,
                           (unsigned long)size_ctl_structure,
                           (unsigned long)data_seg_alignment);
            (void)opal_shmem_segment_detach(shmem_bufp);
            /* map came from OBJ_NEW, so release it instead of calling free */
            OBJ_RELEASE(map);
            return NULL;
        }
    }

    map->module_data_addr = addr;
    map->module_seg_addr = (unsigned char *)seg;

    /* note that size is only used during the first call */
    if (first_call) {
        /* initialize some segment information */
        size_t mem_offset = map->module_data_addr -
                            (unsigned char *)map->module_seg;
        opal_atomic_init(&map->module_seg->seg_lock, OPAL_ATOMIC_UNLOCKED);
        map->module_seg->seg_inited = 0;
        map->module_seg->seg_num_procs_inited = 0;
        map->module_seg->seg_offset = mem_offset;
        map->module_seg->seg_size = size - mem_offset;
        opal_atomic_wmb();
    }

    /* increment the number of processes that are attached to the segment */
    (void)opal_atomic_add_size_t(&map->module_seg->seg_num_procs_inited, 1);

    /* commit the changes before we return */
    opal_atomic_wmb();

    return map;
}

/* ////////////////////////////////////////////////////////////////////////// */
/* api implementation */
/* ////////////////////////////////////////////////////////////////////////// */

/* ////////////////////////////////////////////////////////////////////////// */
mca_common_sm_module_t *
mca_common_sm_module_create_and_attach(size_t size,
                                       char *file_name,
                                       size_t size_ctl_structure,
                                       size_t data_seg_alignment)
{
    mca_common_sm_module_t *map = NULL;
    opal_shmem_ds_t *seg_meta = NULL;

    if (NULL == (seg_meta = calloc(1, sizeof(*seg_meta)))) {
        /* out of resources */
        return NULL;
    }
    if (OPAL_SUCCESS == opal_shmem_segment_create(seg_meta, file_name, size)) {
        map = attach_and_init(seg_meta, size, size_ctl_structure,
                              data_seg_alignment, true);
    }
    /* at this point, seg_meta has been copied to the newly created
     * shared memory segment, so we can free it */
    free(seg_meta);

    return map;
}

/* ////////////////////////////////////////////////////////////////////////// */
/**
 * @return a pointer to the mca_common_sm_module_t associated with seg_meta if
 *         everything was okay, otherwise returns NULL.
 */
mca_common_sm_module_t *
mca_common_sm_module_attach(opal_shmem_ds_t *seg_meta,
                            size_t size_ctl_structure,
                            size_t data_seg_alignment)
{
    /* notice that size is 0 here. it really doesn't matter because size WILL
     * NOT be used, since this is an attach (first_call is false). */
    return attach_and_init(seg_meta, 0, size_ctl_structure,
                           data_seg_alignment, false);
}

/* ////////////////////////////////////////////////////////////////////////// */
int
mca_common_sm_module_unlink(mca_common_sm_module_t *modp)
{
    if (NULL == modp) {
        return OPAL_ERROR;
    }
    if (OPAL_SUCCESS != opal_shmem_unlink(&modp->shmem_ds)) {
        return OPAL_ERROR;
    }
    return OPAL_SUCCESS;
}

/* ////////////////////////////////////////////////////////////////////////// */
int
mca_common_sm_local_proc_reorder(opal_proc_t **procs,
                                 size_t num_procs,
                                 size_t *out_num_local_procs)
{
    size_t num_local_procs = 0;
    bool found_lowest = false;
    opal_proc_t *temp_proc = NULL;
    size_t p;

    if (NULL == out_num_local_procs || NULL == procs) {
        return OPAL_ERR_BAD_PARAM;
    }
    /* o reorder the procs array to have all the local procs at the beginning.
     * o look for the local proc with the lowest name.
     * o determine the number of local procs.
     * o ensure that procs[0] is the lowest named process. */
    for (p = 0; p < num_procs; ++p) {
        if (OPAL_PROC_ON_LOCAL_NODE(procs[p]->proc_flags)) {
            /* if we don't have a lowest, save the first one */
            if (!found_lowest) {
                procs[0] = procs[p];
                found_lowest = true;
            }
            else {
                /* save this proc */
                procs[num_local_procs] = procs[p];
                /* if we have a new lowest, swap it with position 0
                 * so that procs[0] is always the lowest named proc */
                if (0 > opal_compare_proc(procs[p]->proc_name,
                                          procs[0]->proc_name)) {
                    temp_proc = procs[0];
                    procs[0] = procs[p];
                    procs[num_local_procs] = temp_proc;
                }
            }
            /* regardless of the comparisons above, we found another proc on
             * the local node, so increment */
            ++num_local_procs;
        }
    }
    *out_num_local_procs = num_local_procs;

    return OPAL_SUCCESS;
}
|
|
|
|
|
|
|
|
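For reference, the partition performed by the reorder loop above can be exercised in isolation. The sketch below is hypothetical stand-in code (plain `int` names and an assumed `is_local()` predicate instead of `opal_proc_t` and `OPAL_PROC_ON_LOCAL_NODE`) that mirrors the same single pass: local entries are compacted to the front of the array and the lowest local name always ends up at index 0.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* hypothetical stand-ins: each proc is an int name; odd names are "local" */
static bool is_local(int name) { return name % 2 != 0; }

/* compact local procs to the front, lowest local name at index 0;
 * returns the number of local procs (mirrors the reorder loop above) */
static size_t reorder_local(int *procs, size_t num_procs)
{
    size_t num_local = 0;
    bool found_lowest = false;

    for (size_t p = 0; p < num_procs; ++p) {
        if (!is_local(procs[p])) {
            continue;
        }
        if (!found_lowest) {
            procs[0] = procs[p];
            found_lowest = true;
        } else {
            int candidate = procs[p];
            procs[num_local] = candidate;
            /* new lowest? swap it into position 0 */
            if (candidate < procs[0]) {
                procs[num_local] = procs[0];
                procs[0] = candidate;
            }
        }
        /* regardless of the comparisons above, we found another local proc */
        ++num_local;
    }
    return num_local;
}
```

As in the real routine, entries past the returned count are left in an unspecified state; only the local prefix is meaningful.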
/* ////////////////////////////////////////////////////////////////////////// */
mca_common_sm_module_t *
mca_common_sm_init(opal_proc_t **procs,
                   size_t num_procs,
                   size_t size,
                   char *file_name,
                   size_t size_ctl_structure,
                   size_t data_seg_alignment)
{
    /* indicates whether or not i'm the lowest named process */
    bool lowest_local_proc = false;
    mca_common_sm_module_t *map = NULL;
    size_t num_local_procs = 0;
    opal_shmem_ds_t *seg_meta = NULL;

    if (OPAL_SUCCESS != mca_common_sm_local_proc_reorder(procs,
                                                         num_procs,
                                                         &num_local_procs)) {
        return NULL;
    }

    /* if there are fewer than 2 local processes, there's nothing to do. */
    if (num_local_procs < 2) {
        return NULL;
    }

    if (NULL == (seg_meta = (opal_shmem_ds_t *) malloc(sizeof(*seg_meta)))) {
        /* out of resources - just bail */
        return NULL;
    }

    /* determine whether or not i am the lowest local process */
    lowest_local_proc =
        (0 == opal_compare_proc(OPAL_PROC_MY_NAME, procs[0]->proc_name));

    /* figure out if i am the lowest rank in the group.
     * if so, i will create the shared memory backing store
     */
    if (lowest_local_proc) {
        if (OPAL_SUCCESS == opal_shmem_segment_create(seg_meta, file_name,
                                                      size)) {
            map = attach_and_init(seg_meta, size, size_ctl_structure,
                                  data_seg_alignment, true);
            if (NULL == map) {
                /* fail!
                 * only invalidate the shmem_ds. doing so will let the rest
                 * of the local processes know that the lowest local rank
                 * failed to properly initialize the shared memory segment, so
                 * they should try to carry on without shared memory support
                 */
                OPAL_SHMEM_DS_INVALIDATE(seg_meta);
            }
        }
    }

    /* send shmem info to the rest of the local procs. */
    if (OPAL_SUCCESS !=
        mca_common_sm_rml_info_bcast(seg_meta, procs, num_local_procs,
                                     OMPI_RML_TAG_SM_BACK_FILE_CREATED,
                                     lowest_local_proc, file_name)) {
        goto out;
    }

    /* are we dealing with a valid shmem_ds? that is, did the lowest process
     * successfully initialize the shared memory segment? */
    if (OPAL_SHMEM_DS_IS_VALID(seg_meta)) {
        if (!lowest_local_proc) {
            /* why is size zero? see comment in mca_common_sm_module_attach */
            map = attach_and_init(seg_meta, 0, size_ctl_structure,
                                  data_seg_alignment, false);
        }
        else {
            /* wait until every other participating process has attached to
             * the shared memory segment.
             */
            while (num_local_procs > map->module_seg->seg_num_procs_inited) {
                opal_atomic_rmb();
            }
            opal_shmem_unlink(seg_meta);
        }
    }

out:
    if (NULL != seg_meta) {
        free(seg_meta);
    }
    return map;
}
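In the creator branch above, the lowest-named process spins on `seg_num_procs_inited` (with a read barrier) until every peer has attached, and only then unlinks the backing file. This is not Open MPI code, but the same attach rendezvous can be sketched with C11 atomics and pthreads standing in for processes and the shared counter (all names here, e.g. `peer_attach`, `run_rendezvous`, are hypothetical):

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

/* hypothetical miniature of the attach rendezvous: each "process" (a thread
 * here) bumps an init counter after attaching; the creator spins until
 * everyone has checked in before it would unlink the backing file. */
#define NPEERS 4

static atomic_size_t num_inited;

static void *peer_attach(void *arg)
{
    (void)arg;
    /* ... attach_and_init() work would happen here ... */
    atomic_fetch_add_explicit(&num_inited, 1, memory_order_release);
    return NULL;
}

static size_t creator_wait_for_peers(void)
{
    /* analogue of: while (num_local_procs > seg->seg_num_procs_inited)
     *                  opal_atomic_rmb(); */
    while (atomic_load_explicit(&num_inited, memory_order_acquire) < NPEERS) {
        /* spin */
    }
    return atomic_load(&num_inited);
}

int run_rendezvous(void)
{
    pthread_t peers[NPEERS];
    atomic_store(&num_inited, 0);
    for (int i = 0; i < NPEERS; ++i) {
        pthread_create(&peers[i], NULL, peer_attach, NULL);
    }
    size_t seen = creator_wait_for_peers();
    for (int i = 0; i < NPEERS; ++i) {
        pthread_join(peers[i], NULL);
    }
    return (int)seen;  /* NPEERS once all peers have attached */
}
```

The acquire load plays the role of `opal_atomic_rmb()`: it guarantees that once the counter reads NPEERS, each peer's attach-side writes are visible to the creator.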
/* ////////////////////////////////////////////////////////////////////////// */
/**
 * this routine is the same as mca_common_sm_init() except that
 * it takes an (ompi_group_t *) parameter to specify the peers rather
 * than an array of procs. unlike mca_common_sm_init(), the
 * group must contain *only* local peers, or this function will return
 * NULL and not create any shared memory segment.
 */
mca_common_sm_module_t *
mca_common_sm_init_group(ompi_group_t *group,
                         size_t size,
                         char *file_name,
                         size_t size_ctl_structure,
                         size_t data_seg_alignment)
{
    mca_common_sm_module_t *ret = NULL;
    opal_proc_t **procs = NULL, *proc;
    size_t i, group_size;

    /* if there are fewer than 2 procs, there's nothing to do */
    if ((group_size = ompi_group_size(group)) < 2) {
        goto out;
    }
    else if (NULL == (procs = (opal_proc_t **)
                      malloc(sizeof(opal_proc_t *) * group_size))) {
        OPAL_ERROR_LOG(OPAL_ERR_OUT_OF_RESOURCE);
        goto out;
    }
    /* make sure that all the procs in the group are local */
    for (i = 0; i < group_size; ++i) {
        proc = (opal_proc_t *)ompi_group_peer_lookup(group, i);
        if (!OPAL_PROC_ON_LOCAL_NODE(proc->proc_flags)) {
            goto out;
        }
        procs[i] = proc;
    }
    /* let mca_common_sm_init take care of the rest ... */
    ret = mca_common_sm_init(procs, group_size, size, file_name,
                             size_ctl_structure, data_seg_alignment);
out:
    if (NULL != procs) {
        free(procs);
    }
    return ret;
}
2011-06-21 15:41:57 +00:00
|
|
|
/* ////////////////////////////////////////////////////////////////////////// */
|
|
|
|
/**
|
|
|
|
* allocate memory from a previously allocated shared memory
|
|
|
|
* block.
|
|
|
|
*
|
|
|
|
* @param size size of request, in bytes (IN)
|
|
|
|
*
|
|
|
|
* @retval addr virtual address
|
|
|
|
*/
|
2010-06-09 16:58:52 +00:00
|
|
|
void *
|
2010-08-23 16:04:13 +00:00
|
|
|
mca_common_sm_seg_alloc(struct mca_mpool_base_module_t *mpool,
|
|
|
|
size_t *size,
|
|
|
|
mca_mpool_base_registration_t **registration)
|
2010-06-09 16:58:52 +00:00
|
|
|
{
|
2011-06-21 15:41:57 +00:00
|
|
|
mca_mpool_sm_module_t *sm_module = (mca_mpool_sm_module_t *)mpool;
|
2012-07-23 19:38:13 +00:00
|
|
|
mca_common_sm_seg_header_t *seg = sm_module->sm_common_module->module_seg;
|
2011-06-21 15:41:57 +00:00
|
|
|
void *addr;
|
|
|
|
|
|
|
|
opal_atomic_lock(&seg->seg_lock);
|
|
|
|
if (seg->seg_offset + *size > seg->seg_size) {
|
|
|
|
addr = NULL;
|
|
|
|
}
|
|
|
|
else {
|
|
|
|
size_t fixup;
|
|
|
|
|
|
|
|
/* add base address to segment offset */
|
|
|
|
addr = sm_module->sm_common_module->module_data_addr + seg->seg_offset;
|
|
|
|
seg->seg_offset += *size;
|
|
|
|
|
|
|
|
/* fix up seg_offset so next allocation is aligned on a
|
|
|
|
* sizeof(long) boundry. Do it here so that we don't have to
|
|
|
|
* check before checking remaining size in buffer
|
|
|
|
*/
|
|
|
|
if ((fixup = (seg->seg_offset & (sizeof(long) - 1))) > 0) {
|
|
|
|
seg->seg_offset += sizeof(long) - fixup;
|
|
|
|
}
|
2010-06-09 16:58:52 +00:00
|
|
|
}
|
2011-06-21 15:41:57 +00:00
|
|
|
if (NULL != registration) {
|
|
|
|
*registration = NULL;
|
|
|
|
}
|
|
|
|
opal_atomic_unlock(&seg->seg_lock);
|
|
|
|
return addr;
|
2010-06-09 16:58:52 +00:00
|
|
|
}
|
|
|
|
|
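mca_common_sm_seg_alloc() is a bump allocator: it hands out the current offset into the data segment, advances the offset by the request size, and then rounds the offset up to a `sizeof(long)` boundary so the next allocation starts aligned. A self-contained sketch of that arithmetic (hypothetical `bump_seg_t`, not the real mpool types, and without the lock):

```c
#include <assert.h>
#include <stddef.h>

/* hypothetical miniature of the mpool bump allocator above: carve requests
 * out of one fixed buffer, rounding the running offset up to a sizeof(long)
 * boundary after each allocation (same fixup arithmetic as seg_alloc). */
typedef struct {
    char *base;       /* start of the data segment */
    size_t offset;    /* next free byte */
    size_t size;      /* total segment size */
} bump_seg_t;

static void *bump_alloc(bump_seg_t *seg, size_t size)
{
    if (seg->offset + size > seg->size) {
        return NULL;  /* out of space */
    }
    void *addr = seg->base + seg->offset;
    seg->offset += size;

    /* align the next allocation on a sizeof(long) boundary */
    size_t fixup = seg->offset & (sizeof(long) - 1);
    if (fixup > 0) {
        seg->offset += sizeof(long) - fixup;
    }
    return addr;
}
```

Doing the fixup after the bump (rather than aligning each request up front) means the size check never has to account for padding that might not be needed.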
/* ////////////////////////////////////////////////////////////////////////// */
int
mca_common_sm_fini(mca_common_sm_module_t *mca_common_sm_module)
{
    int rc = OPAL_SUCCESS;

    if (NULL != mca_common_sm_module->module_seg) {
        if (OPAL_SUCCESS !=
            opal_shmem_segment_detach(&mca_common_sm_module->shmem_ds)) {
            rc = OPAL_ERROR;
        }
    }
    return rc;
}