openmpi/opal/mca/mpool/base/mpool_base_lookup.c

/* -*- Mode: C; c-basic-offset:4 ; -*- */
/*
 * Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation.  All rights reserved.
 * Copyright (c) 2004-2013 The University of Tennessee and The University
 *                         of Tennessee Research Foundation.  All rights
 *                         reserved.
 * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart.  All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * Copyright (c) 2006-2007 Mellanox Technologies.  All rights reserved.
 * Copyright (c) 2008-2014 Cisco Systems, Inc.  All rights reserved.
 * Copyright (c) 2012      Los Alamos National Security, LLC.  All rights
 *                         reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */
#define OPAL_DISABLE_ENABLE_MEM_DEBUG 1
#include "opal_config.h"
#include <stdio.h>
#include <string.h>
#include "opal/mca/mca.h"
#include "opal/mca/base/base.h"
#include "opal/util/show_help.h"
#include "opal/util/proc.h"
#include "opal/runtime/opal_params.h"
#include "opal/mca/mpool/mpool.h"
#include "opal/mca/mpool/base/base.h"
#include "opal/memoryhooks/memory.h"
#include "mpool_base_mem_cb.h"
mca_mpool_base_component_t* mca_mpool_base_component_lookup(const char* name)
{
    /* Traverse the list of opened components; return the one whose
       name matches */
    opal_list_item_t* item;

    for (item = opal_list_get_first(&opal_mpool_base_framework.framework_components);
         item != opal_list_get_end(&opal_mpool_base_framework.framework_components);
         item = opal_list_get_next(item)) {
        mca_base_component_list_item_t *cli =
            (mca_base_component_list_item_t *) item;
        mca_mpool_base_component_t* component =
            (mca_mpool_base_component_t *) cli->cli_component;
        if (strcmp(component->mpool_version.mca_component_name, name) == 0) {
            return component;
        }
    }
    return NULL;
}
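
/*
 * Create (and track) a module instance from the component named
 * "name": find the component, call its mpool_init() with "resources",
 * and record the result on the global mca_mpool_base_modules list.
 * A minimal usage sketch -- the component name "grdma" and the
 * caller-side variables are hypothetical examples, not fixed values:
 *
 *     struct mca_mpool_base_resources_t resources;
 *     mca_mpool_base_module_t *mpool =
 *         mca_mpool_base_module_create("grdma", my_btl, &resources);
 *     if (NULL == mpool) {
 *         handle unknown component / failed init here
 *     }
 *
 * On the very first successful creation this function also decides
 * whether to register the memory release callback (see below).
 */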
mca_mpool_base_module_t* mca_mpool_base_module_create(
    const char* name,
    void* user_data,
    struct mca_mpool_base_resources_t* resources)
{
    mca_mpool_base_component_t* component = NULL;
    mca_mpool_base_module_t* module = NULL;
    mca_base_component_list_item_t *cli;
    mca_mpool_base_selected_module_t *sm;

    OPAL_LIST_FOREACH(cli, &opal_mpool_base_framework.framework_components, mca_base_component_list_item_t) {
        component = (mca_mpool_base_component_t *) cli->cli_component;
        if (0 == strcmp(component->mpool_version.mca_component_name, name)) {
            module = component->mpool_init(resources);
            break;
        }
    }
    if (NULL == module) {
        return NULL;
    }

    sm = OBJ_NEW(mca_mpool_base_selected_module_t);
    sm->mpool_component = component;
    sm->mpool_module = module;
    sm->user_data = user_data;
    sm->mpool_resources = resources;
    opal_list_append(&mca_mpool_base_modules, (opal_list_item_t*) sm);

    /* on the very first creation of a module we init the memory
       callback */
    if (opal_list_get_size(&mca_mpool_base_modules) == 1) {
        /* Default to not using memory hooks */
        int use_mem_hooks = 0;

        /* Use the memory hooks if leave_pinned or
           leave_pinned_pipeline is enabled (note that either of these
           leave_pinned variables may have been set by a user MCA
           param or elsewhere in the code base).  Yes, we could have
           coded this more succinctly, but this is more clear.  Do not
           check memory hooks if the mpool explicitly asked us not to. */
        if ((opal_leave_pinned > 0 || opal_leave_pinned_pipeline) &&
            !(module->flags & MCA_MPOOL_FLAGS_NO_HOOKS)) {
            use_mem_hooks = 1;
        }

        if (use_mem_hooks) {
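            /* Both free() and munmap() intercept support must be
               available: if memory could be returned to the OS
               without our knowledge, cached registrations against it
               would become invalid. */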
            if ((OPAL_MEMORY_FREE_SUPPORT | OPAL_MEMORY_MUNMAP_SUPPORT) ==
                ((OPAL_MEMORY_FREE_SUPPORT | OPAL_MEMORY_MUNMAP_SUPPORT) &
                 opal_mem_hooks_support_level())) {
                opal_mem_hooks_register_release(mca_mpool_base_mem_cb, NULL);
            } else {
                opal_show_help("help-mpool-base.txt", "leave pinned failed",
                               true, name, OPAL_NAME_PRINT(OPAL_PROC_MY_NAME),
                               opal_proc_local_get()->proc_hostname);
                return NULL;
            }

            /* Set this to true so that mpool_base_close knows to
               cleanup */
            mca_mpool_base_used_mem_hooks = 1;
        }
    }

    return module;
}
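
/*
 * Return the previously created module whose component name matches
 * "name", or NULL if none exists.  Unlike
 * mca_mpool_base_module_create(), this never instantiates anything.
 */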
mca_mpool_base_module_t* mca_mpool_base_module_lookup(const char* name)
{
    mca_mpool_base_selected_module_t *mli;

    OPAL_LIST_FOREACH(mli, &mca_mpool_base_modules, mca_mpool_base_selected_module_t) {
        if (0 == strcmp(mli->mpool_component->mpool_version.mca_component_name,
                        name)) {
            return mli->mpool_module;
        }
    }
    return NULL;
}
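
/*
 * Finalize "module" (if its component supplies a finalize function)
 * and drop it from the tracking list.  Returns OPAL_SUCCESS, or
 * OPAL_ERR_NOT_FOUND if the module was never created through
 * mca_mpool_base_module_create().
 */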
int mca_mpool_base_module_destroy(mca_mpool_base_module_t *module)
{
    mca_mpool_base_selected_module_t *sm, *next;

    OPAL_LIST_FOREACH_SAFE(sm, next, &mca_mpool_base_modules, mca_mpool_base_selected_module_t) {
        if (module == sm->mpool_module) {
            opal_list_remove_item(&mca_mpool_base_modules, (opal_list_item_t*)sm);
            if (NULL != sm->mpool_module->mpool_finalize) {
                sm->mpool_module->mpool_finalize(sm->mpool_module);
            }
            OBJ_RELEASE(sm);
            return OPAL_SUCCESS;
        }
    }

    return OPAL_ERR_NOT_FOUND;
}