/*
 * Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation. All rights reserved.
 * Copyright (c) 2004-2006 The University of Tennessee and The University
 *                         of Tennessee Research Foundation. All rights
 *                         reserved.
 * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart. All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * Copyright (c) 2008-2009 Cisco Systems, Inc. All rights reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
*/
/**
 * @file
 */
#ifndef MCA_MEM_BASE_H
#define MCA_MEM_BASE_H
#include "opal_config.h"
#include "opal/class/opal_list.h"
#include "opal/mca/base/base.h"
#include "opal/mca/mpool/mpool.h"
BEGIN_C_DECLS
/** Return floor(log2(val)) for val > 0; returns 0 when val is 0. */
static inline unsigned int my_log2(unsigned long val)
{
    unsigned int count = 0;
    while (val > 0) {
        val = val >> 1;
        count++;
    }
    return count > 0 ? count - 1 : 0;
}

/** Clear the low `shift` bits: round addr down to a (1 << shift)-byte boundary. */
static inline void *down_align_addr(void *addr, unsigned int shift)
{
    return (void *) (((intptr_t) addr) & ((~(intptr_t) 0) << shift));
}

/** Set the low `shift` bits: return the last byte address within addr's (1 << shift)-byte block. */
static inline void *up_align_addr(void *addr, unsigned int shift)
{
    return (void *) (((intptr_t) addr) | ~((~(intptr_t) 0) << shift));
}
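
/*
 * Usage sketch (illustrative only, not part of the original header): assuming
 * the three helpers above are visible, a standalone check could look like the
 * following; the expected values are worked out by hand for a 4 KiB
 * (shift == 12) granularity.
 *
 *   #include <stdio.h>
 *
 *   int main(void)
 *   {
 *       unsigned int shift = my_log2(4096);                   // 12
 *       void *lo = down_align_addr((void *) 0x12345, shift);  // 0x12000
 *       void *hi = up_align_addr((void *) 0x12345, shift);    // 0x12fff
 *       printf("shift=%u lo=%p hi=%p\n", shift, lo, hi);
 *       return 0;
 *   }
 */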
struct mca_mpool_base_selected_module_t {
    opal_list_item_t super;
    mca_mpool_base_component_t *mpool_component;
    mca_mpool_base_module_t *mpool_module;
    void *user_data;
    struct mca_mpool_base_resources_t *mpool_resources;
};
typedef struct mca_mpool_base_selected_module_t mca_mpool_base_selected_module_t;
OPAL_DECLSPEC OBJ_CLASS_DECLARATION(mca_mpool_base_selected_module_t);
/*
 * Data structures for the tree of allocated memory
 */

/*
 * Global functions for MCA: overall mpool open and close
 */
OPAL_DECLSPEC int mca_mpool_base_init(bool enable_progress_threads, bool enable_mpi_threads);
OPAL_DECLSPEC mca_mpool_base_component_t* mca_mpool_base_component_lookup(const char* name);
OPAL_DECLSPEC mca_mpool_base_module_t* mca_mpool_base_module_create(
    const char* name,
    void* user_data,
    struct mca_mpool_base_resources_t* mpool_resources);
OPAL_DECLSPEC mca_mpool_base_module_t* mca_mpool_base_module_lookup(const char* name);
OPAL_DECLSPEC int mca_mpool_base_module_destroy(mca_mpool_base_module_t *module);
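
/*
 * Lifecycle sketch (illustrative only; "example" and the NULL arguments are
 * placeholders -- each component defines its own mca_mpool_base_resources_t
 * and may require a non-NULL resources pointer):
 *
 *   mca_mpool_base_module_t *mpool =
 *       mca_mpool_base_module_create("example", NULL, NULL);
 *   if (NULL == mpool) {
 *       // component not found or module initialization failed
 *   }
 *
 *   // a later consumer can retrieve the same instance by name
 *   mca_mpool_base_module_t *same = mca_mpool_base_module_lookup("example");
 *
 *   // release the module when it is no longer needed
 *   (void) mca_mpool_base_module_destroy(mpool);
 */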
/*
 * Globals
 */
extern opal_list_t mca_mpool_base_modules;
OPAL_DECLSPEC extern uint32_t mca_mpool_base_page_size;
OPAL_DECLSPEC extern uint32_t mca_mpool_base_page_size_log;
/* only used within base -- no need to DECLSPEC */
extern int mca_mpool_base_used_mem_hooks;
OPAL_DECLSPEC extern mca_base_framework_t opal_mpool_base_framework;
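
/*
 * Globals usage sketch (illustrative; it assumes mca_mpool_base_page_size_log
 * holds the page size expressed as a shift and that the plain opal_list
 * accessors from opal/class/opal_list.h are available):
 *
 *   // page-align a caller-supplied pointer user_buf with the helpers above
 *   void *page_start = down_align_addr(user_buf, mca_mpool_base_page_size_log);
 *
 *   // walk the list of selected mpool modules
 *   opal_list_item_t *item;
 *   for (item = opal_list_get_first(&mca_mpool_base_modules);
 *        item != opal_list_get_end(&mca_mpool_base_modules);
 *        item = opal_list_get_next(item)) {
 *       mca_mpool_base_selected_module_t *sm =
 *           (mca_mpool_base_selected_module_t *) item;
 *       // sm->mpool_component and sm->mpool_module identify each pool
 *   }
 */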
END_C_DECLS
#endif /* MCA_MEM_BASE_H */